Custom Metrics

Evaluate your LLM outputs with easy-to-implement custom metrics

Make sure you have your OpenAI API Key before you begin.
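One common way to keep the key out of your source code is to read it from an environment variable. A minimal sketch (the OPENAI_API_KEY variable name is an assumption, not something the library requires):

import os

# Assumes the key was exported in your shell beforehand,
# e.g. export OPENAI_API_KEY="sk-...".
OPEN_AI_KEY = os.environ["OPENAI_API_KEY"]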

custom_metrics()

Quickly create bespoke metrics that evaluate outputs against your own criteria. Define constraints and automatically check whether your LLM output meets them. Returns a list of booleans, one for each constraint you provide.

| Param | Type | Description |
| --- | --- | --- |
| output | str | The response from your LLM |
| constraints | List[str] | The constraints to check your LLM output against |

| Output Type | Output Definition |
| --- | --- |
| List[bool] | A list of booleans, one for each constraint you provide. True signifies that the output violated the constraint. |

from farsightai import FarsightAI

# Replace with your OpenAI credentials
OPEN_AI_KEY = "<openai_key>"

farsight = FarsightAI(openai_key=OPEN_AI_KEY)

# Replace this with the actual output of your LLM application
output = "As of my last knowledge update in January 2022, Joe Biden is the President of the United States. However, keep in mind that my information might be outdated as my training data goes up to that time, and I do not have browsing capabilities to check for the most current information. Please verify with up-to-date sources."
# Replace this with the constraints you want to check your LLM output against
constraints = ["do not mention Joe Biden", "do not talk about alcohol"]

custom_metric = farsight.custom_metrics(constraints, output)

print("score: ", custom_metric)
# score:  [True, False]
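In practice you often want to act on the result, for example by collecting the constraints that were violated. A minimal sketch reusing the variables from the example above:

# Collect the constraints flagged True (i.e. violated by the output).
violated = [c for c, flag in zip(constraints, custom_metric) if flag]
if violated:
    print("violated constraints: ", violated)
# violated constraints:  ['do not mention Joe Biden']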
