Overview
This guide shows you how to create a custom local metric in Python to use in an experiment. In this example, you will create a metric that rates the brevity (shortness) of an LLM's response based on word count. The sample code for the experiment uses OpenAI as the LLM.

Before you start

To complete this how-to, you will need:

- An OpenAI API key
- A Galileo project
- Your Galileo API key
Install dependencies
To use Galileo, you need to install some package dependencies and configure environment variables.

Install required dependencies

Install the required dependencies for your app. Create a virtual environment using your preferred method, then install dependencies inside that environment:
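For example, using Python's built-in `venv` module (the package names and environment variable names below are assumptions; check your Galileo project settings for the exact values):

```shell
# Create and activate a virtual environment, then install dependencies.
python -m venv .venv
source .venv/bin/activate
pip install galileo openai

# Configure credentials (variable names are assumptions; check your Galileo settings).
export GALILEO_API_KEY="your-galileo-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```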
Create your local metric
Create a scorer function
The scorer function assigns one of three ranks ("Terse", "Temperate", or "Talkative") depending on how many words the model outputs. Add this code to your experiment.py file.
Create an aggregator function
Since our scorer returns a single rank per record, the aggregator examines that rank and returns it, modifying it to flag overly long responses as "Terrible". Add this code to your experiment.py file.
Prepare the experiment
For this example, we'll ask the LLM to specify the continent of four countries, encouraging it to be succinct.

Create a dataset

Create a dataset of inputs to the experiment by adding this code to your experiment.py file.
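A sketch of such a dataset; the specific countries and the record shape (a list of dictionaries with an `input` field) are assumptions:

```python
# Four prompts asking for a continent, each encouraging a succinct answer.
dataset = [
    {"input": "What continent is France in? Answer in as few words as possible."},
    {"input": "What continent is Japan in? Answer in as few words as possible."},
    {"input": "What continent is Brazil in? Answer in as few words as possible."},
    {"input": "What continent is Kenya in? Answer in as few words as possible."},
]
```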
Call the LLM
Next, you need a custom function for your experiment to call. Add this code to your experiment.py file.
Run the experiment
Now your experiment is set up, you can run it to see the results of your local metric.

Run the experiment code
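A sketch of wiring the pieces together; the `run_experiment` and `LocalMetricConfig` import paths and argument names are assumptions based on the Galileo Python SDK, so verify them against the SDK reference:

```python
def run_brevity_experiment(dataset, llm_fn, scorer_fn, aggregator_fn):
    """Run the experiment with the local brevity metric (sketch)."""
    # Assumed import paths; check the Galileo SDK documentation.
    from galileo.experiments import run_experiment
    from galileo.schema.metrics import LocalMetricConfig

    metric = LocalMetricConfig(
        name="brevity",
        scorer_fn=scorer_fn,
        aggregator_fn=aggregator_fn,
    )
    return run_experiment(
        "brevity-experiment",            # assumed experiment name
        dataset=dataset,
        function=llm_fn,
        metrics=[metric],
        project="your-galileo-project",  # assumed placeholder
    )
```

With the earlier pieces defined in experiment.py, you would call `run_brevity_experiment(dataset, ask_llm, brevity_scorer, brevity_aggregator)` at the bottom of the file, then run `python experiment.py` from your terminal.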
