Evaluate your LLM outputs with easy-to-implement custom metrics
Make sure you have your OpenAI API Key before you begin.
Custom Metrics
custom_metrics()
Quickly create bespoke metrics to evaluate outputs against your own specific criteria. Define constraints and automatically check whether your LLM outputs satisfy them. Returns a list of booleans: one for each guideline / constraint you provide.
```python
from farsightai import FarsightAI

# Replace with your OpenAI credentials
OPEN_AI_KEY = "<openai_key>"

query = "Who is the president of the United States"
farsight = FarsightAI(openai_key=OPEN_AI_KEY)

# Replace this with the actual output of your LLM application
output = (
    "As of my last knowledge update in January 2022, Joe Biden is the "
    "President of the United States. However, keep in mind that my "
    "information might be outdated as my training data goes up to that "
    "time, and I do not have browsing capabilities to check for the most "
    "current information. Please verify with up-to-date sources."
)
```
```python
# Replace this with the actual constraints you want to check your LLM output for
constraints = ["do not mention Joe Biden", "do not talk about alcohol"]

custom_metric = farsight.custom_metrics(constraints, output)
print("score: ", custom_metric)
# score: [False, True]
```
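Because `custom_metrics` returns one boolean per constraint, you can pair each result with its constraint to act on violations. Below is a minimal sketch assuming the `constraints` and `custom_metric` values from the example above; the gating logic is illustrative and not part of the SDK:

```python
# Pair each constraint with its boolean result and collect the failures.
# Assumes `constraints` and `custom_metric` from the example above.
violations = [
    constraint
    for constraint, passed in zip(constraints, custom_metric)
    if not passed
]

if violations:
    print("Output violated these constraints:", violations)
    # Illustrative next step: regenerate, fall back, or flag for review.
else:
    print("Output satisfied all constraints.")
```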