Tutorial

Improve and evaluate your LLM applications with a few simple steps

Stop spending time on manual prompt iteration! Follow a few simple steps to efficiently discover an optimized system prompt.

Want to integrate even faster? Try out Farsight AI in a Colab notebook here.

Note: While you can evaluate outputs from any large language model (LLM), Farsight AI's evaluation functions use OpenAI under the hood. To use our package, you must have an OpenAI API key.
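A common way to make the key available is through an environment variable. The snippet below is a sketch: the key value is a placeholder, and the exact variable name the package reads may differ, so check the Farsight AI documentation.

```shell
# Export your OpenAI key so client libraries that read OPENAI_API_KEY can find it.
# Replace the placeholder with your real key.
export OPENAI_API_KEY="sk-your-key-here"
```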

Set Up a Python Environment

Go to the root directory of your project and create a virtual environment (if you don't already have one). In the CLI, run:

python3 -m venv venv
source venv/bin/activate

Installation

Install our library by running:

pip install farsightai

Suggested Starter Workflow

We suggest you start by generating a few system prompts with our prompt generation function, then evaluate the resulting outputs using standard Farsight metrics. Follow the steps below:

  1. Generate several system prompts using our prompt generation functionality (we recommend starting with 5).

  2. Generate outputs using your preferred language model (e.g., Mistral, ChatGPT, Llama).

  3. Evaluate the results using our prompt evaluation function or any of our additional metrics.

We've provided an example of this suggested generation and evaluation process below.
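The three steps above can be sketched as a single loop: generate candidate prompts, run each through your model, score the outputs, and keep the best prompt. This is an illustrative sketch only; the helper functions below (`generate_prompts`, `run_model`, `evaluate`) are hypothetical stand-ins, not the actual farsightai API, so consult the package documentation for the real function names and signatures.

```python
# Hypothetical sketch of the suggested workflow; the three helpers below
# are placeholders standing in for Farsight AI / your LLM of choice.

def generate_prompts(task: str, n: int = 5) -> list[str]:
    """Stand-in for step 1: produce n candidate system prompts."""
    styles = ["concise", "step-by-step", "formal", "friendly", "expert"]
    return [f"You are a {styles[i % len(styles)]} assistant. Task: {task}"
            for i in range(n)]

def run_model(system_prompt: str, user_input: str) -> str:
    """Stand-in for step 2: call your preferred LLM (Mistral, ChatGPT, Llama)."""
    return f"[{system_prompt}] response to: {user_input}"

def evaluate(output: str) -> float:
    """Stand-in for step 3: a toy metric; a real metric would score
    relevance, quality, adherence to instructions, etc."""
    return 1.0 / (1.0 + abs(len(output) - 80))

task = "Summarize customer feedback"
prompts = generate_prompts(task, n=5)                           # step 1
outputs = [run_model(p, "The app is slow.") for p in prompts]   # step 2
scores = [evaluate(o) for o in outputs]                         # step 3
best = prompts[scores.index(max(scores))]
print("Best prompt:", best)
```

The same structure applies when you swap in the real generation, model, and evaluation calls: only the three helper bodies change, while the select-the-highest-scoring-prompt loop stays the same.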
