Automated System Prompt Optimization (OPRO)
Find your optimized system prompt automatically, eliminating the need for manual prompt iteration
Elevate your LLM application development process with Farsight OPRO: automated system prompt optimization. The library intelligently iterates through candidate prompts to efficiently find the optimal one for any LLM system built on OpenAI models. The approach is based on the Google DeepMind paper "Large Language Models as Optimizers".
Simply provide a dataset of inputs and target outputs (a minimum of 3 pairs, though we recommend at least 50), and Farsight OPRO will converge to the optimal prompt in one line of code.
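For intuition, the OPRO idea can be sketched as a loop: score each candidate prompt on the dataset, show the scored history to an LLM, and ask it to propose a better candidate. The toy sketch below is a conceptual illustration, not Farsight's implementation; `propose_prompt` and `score` are stand-ins for the LLM optimizer call and the dataset evaluation.

```python
import random

def propose_prompt(history):
    # In real OPRO, an LLM reads (prompt, score) pairs and writes a new
    # candidate; here we simply mutate the best prompt seen so far.
    best, _ = max(history, key=lambda pair: pair[1])
    return best + random.choice([" Be concise.", " Think step by step."])

def score(prompt, dataset):
    # Stand-in for running the LLM system on every (input, output) pair
    # and measuring accuracy; here, a toy heuristic.
    return min(len(prompt) / 100.0, 1.0)

def opro(seed_prompt, dataset, steps=10):
    history = [(seed_prompt, score(seed_prompt, dataset))]
    for _ in range(steps):
        candidate = propose_prompt(history)
        history.append((candidate, score(candidate, dataset)))
    return max(history, key=lambda pair: pair[1])[0]  # best-scoring prompt

best = opro("You are a helpful assistant.", dataset=[])
```

Farsight OPRO wraps this optimize-evaluate loop behind a single call, so you only supply the dataset.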
Installation
Install our library by running:
pip install farsight-opro

Instantiation
Begin using the SDK with the following few lines of code:
# with openai credentials
client = FarsightOPRO(openai_key="<openai_key>")
# with azure credentials
client = FarsightOPRO(
openai_key="<azure_openai_key>",
azure_endpoint="<azure_endpoint>",
api_version="<api_version>",
model="<model_name>"
)

Dataset Configuration
The Farsight OPRO library requires only a dataset from the user. It expects datasets in the form of a list of dictionaries, as illustrated below:
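A minimal sketch of that shape is shown below. The key names `"input"` and `"output"` are assumptions for illustration; check the library's documentation for the exact schema it expects.

```python
# Hypothetical dataset: a list of dicts, each pairing an input with its
# target output. The key names ("input", "output") are assumptions, not
# confirmed by the Farsight OPRO spec.
dataset = [
    {"input": "Translate to French: Hello, world!", "output": "Bonjour, le monde !"},
    {"input": "Translate to French: Good morning.", "output": "Bonjour."},
    {"input": "Translate to French: Thank you very much.", "output": "Merci beaucoup."},
]

assert len(dataset) >= 3  # Farsight OPRO requires at least 3 pairs
```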
For users who do not have a dataset readily available in this format, we recommend generating a synthetic dataset with ChatGPT to get started. Below is a suggested prompt that will quickly generate a dataset meeting the Farsight OPRO spec:
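The original suggested prompt is not reproduced here; as a sketch, a prompt along the following lines asks ChatGPT for JSON records matching the list-of-dicts format. The task description and field names are illustrative assumptions, not the library's official recommendation.

```python
# Build a synthetic-data prompt to paste into ChatGPT (illustrative
# sketch only). The task below is a hypothetical placeholder.
n_pairs = 50
task_description = "summarizing customer support emails in one sentence"

prompt = (
    f"Generate {n_pairs} training examples for an LLM system that is "
    f"{task_description}. Return ONLY a JSON list of objects, each with an "
    "'input' field containing a realistic example input and an 'output' "
    "field containing the ideal response."
)
```

Copy the JSON that ChatGPT returns into a Python list of dictionaries and pass it to Farsight OPRO as your dataset.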
Example Usage