Custom metrics allow you to define specific evaluation criteria for your LLM applications. Galileo supports two types of custom metrics:
  • Registered custom metrics: Server-side metrics that can be shared across your organization
  • Local metrics: Metrics defined in your own code that run in your local Python environment, such as a notebook

Registered custom metrics

Registered custom metrics run in Galileo’s backend environment and can be used across your organization.

Creating a registered custom metric

You can create a registered custom metric either through the Python SDK or directly in the Galileo UI. Let’s walk through the UI approach:
1. Navigate to the Metrics section
In the Galileo platform, go to the Metrics section and select the Create New Metric button in the top right corner.

2. Select the Code metric type
From the dialog that appears, choose the Code-powered metric type. This option allows you to write custom Python code to evaluate your LLM outputs.

3. Write your custom metric
Use the code editor to write and test your metric directly in the browser. The editor provides a template with the required functions and helpful comments to guide you. You'll need to define at least the scorer_fn and aggregator_fn functions, as described below.

4. Save your metric
After writing your custom metric code, select the Save button in the top right corner of the code editor. Your metric will be validated and, if there are no errors, it will be saved and become available for use across your organization. You can then select this metric when running evaluations.

1. The scorer function (scorer_fn)

This function evaluates individual responses and returns a score:
from typing import Any, Union

def scorer_fn(*,
              index: Union[int, str],
              node_input: str,
              node_output: str,
              **kwargs: Any) -> Union[float, int, bool, str, None]:
    # Your scoring logic here
    return score
The function must accept **kwargs to ensure forward compatibility. Here’s a complete example that measures the difference in length between the output and ground truth:
from typing import Any, Dict, List, Optional, Union
from uuid import UUID

def scorer_fn(*,
              index: Union[int, str],
              node_input: str,
              node_output: str,
              node_name: Optional[str],
              node_type: Optional[str],
              node_id: Optional[UUID],
              tools: Optional[List[Dict[str, Any]]],
              dataset_variables: Dict[str, str],
              **kwargs: Any) -> Union[float, int, bool, str, None]:

    ground_truth = dataset_variables.get("target", "")  # Ground truth, if provided
    return abs(len(node_output) - len(ground_truth))
Parameter details:
  • index: Row index in the dataset
  • node_input: Input to the node
  • node_output: Output from the node
  • node_name, node_type, node_id, tools: Workflow/chain-specific parameters
  • dataset_variables: Key-value pairs from the dataset (includes ground truth)

2. The aggregator function (aggregator_fn)

This function aggregates individual scores into summary metrics:
from typing import Dict, List, Union

def aggregator_fn(*,
                  scores: List[Union[float, int, bool, str, None]]
                  ) -> Dict[str, Union[float, int, bool, str, None]]:
    # Your aggregation logic here
    return {
        "Metric Name 1": aggregated_value_1,
        "Metric Name 2": aggregated_value_2
    }
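For example, a minimal aggregator for numeric scores that skips rows where the scorer returned None and reports the mean and maximum might look like this:
from typing import Dict, List, Optional, Union

def aggregator_fn(*,
                  scores: List[Optional[float]]
                  ) -> Dict[str, Union[float, int, bool, str, None]]:
    # Drop rows where the scorer returned None (unscored rows)
    numeric = [s for s in scores if s is not None]
    if not numeric:
        return {"Mean Score": None, "Max Score": None}
    return {
        "Mean Score": sum(numeric) / len(numeric),
        "Max Score": max(numeric),
    }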

Optional functions

Score type function
def score_type() -> Type[float] | Type[int] | Type[str] | Type[bool]:
    return float  # Or int, str, bool
This function defines the return type of your scorer (default is float).
Node type restriction
def scoreable_node_types_fn() -> List[str]:
    return ["llm", "chat"]  # Default
This function restricts which node types your scorer can evaluate. For example, to only score retriever nodes:
def scoreable_node_types_fn() -> List[str]:
    return ["retriever"]
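As an illustration of why you might restrict node types, here is a sketch of a metric that scores only retriever nodes by counting the retrieved documents. The assumption that the retriever node's output is a JSON-encoded list of documents is illustrative and may not match your pipeline:
from typing import List
import json

def scoreable_node_types_fn() -> List[str]:
    return ["retriever"]

def scorer_fn(*, node_output: str, **kwargs) -> int:
    # Assumption: node_output for a retriever node is a JSON list of documents
    try:
        return len(json.loads(node_output))
    except (ValueError, TypeError):
        return 0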
LLM credentials access
To access LLM credentials during scorer execution:
include_llm_credentials = True  # Default is False
When enabled, credentials are passed to scorer_fn as a dictionary:
{
  "openai": {
    "api_key": "sk-...",
    "organization": "org-..."
  }
}
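As a rough sketch of how a scorer might use these credentials for model-guided evaluation, the example below assumes they arrive through **kwargs under a keyword such as llm_credentials (the exact keyword name is an assumption) and uses the openai library available in the execution environment:
from typing import Any

def scorer_fn(*, node_output: str, **kwargs: Any) -> float:
    # Assumption: the credentials dictionary shown above is passed via **kwargs;
    # the key name "llm_credentials" is illustrative only.
    creds = kwargs.get("llm_credentials", {}).get("openai", {})

    from openai import OpenAI
    client = OpenAI(api_key=creds.get("api_key"),
                    organization=creds.get("organization"))

    # LLM-as-judge sketch: ask the model for a 0-1 clarity rating
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Rate the following response from 0 to 1 for clarity. "
                       "Reply with only the number.\n\n" + node_output,
        }],
    )
    try:
        return float(response.choices[0].message.content.strip())
    except (TypeError, ValueError):
        return 0.0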

Complete example: response length scorer

Let’s create a custom metric that measures response length:
from typing import Dict, List, Type, Union

def scorer_fn(*, node_output: str, **kwargs) -> int:
    return len(node_output)

def aggregator_fn(*, scores: List[int]) -> Dict[str, Union[int, float]]:
    return {
        "Total Response Length": sum(scores),
        "Average Response Length": sum(scores) / len(scores) if scores else 0,
    }

# Declare the scorer's return type (defaults to float if omitted)
def score_type() -> Type[int]:
    return int

def scoreable_node_types_fn() -> List[str]:
    return ["llm", "chat"]

Execution environment

Registered custom metrics run in a Python 3.10 environment with these libraries:
numpy~=1.26.4
pandas~=2.2.2
pydantic~=2.7.1
scikit-learn~=1.4.2
tensorflow~=2.16.1
networkx
openai
We provide advance notice before major version updates to these libraries.
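Because these libraries are pre-installed, you can import them directly in your metric code. As a small illustration, an aggregator could use numpy to report percentile statistics:
from typing import Dict, List, Union
import numpy as np  # provided in the execution environment (numpy~=1.26)

def aggregator_fn(*, scores: List[float]) -> Dict[str, Union[float, int, bool, str, None]]:
    if not scores:
        return {"Median Score": None, "95th Percentile": None}
    arr = np.asarray(scores, dtype=float)
    return {
        "Median Score": float(np.median(arr)),
        "95th Percentile": float(np.percentile(arr, 95)),
    }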

Local metrics

A Local Metric (or Local scorer) is a custom metric that you can attach to an experiment — just like a Galileo preset metric. The key difference is that a Local Metric lives in code on your machine, so you share it by sharing your code. Local Metrics are ideal for running isolated tests and refining outcomes when you need more control than built-in metrics offer. You can also use any library or custom Python code with your local metrics, including calling out to LLMs or other APIs.
Galileo currently only supports Local scorers written in Python.

Local scorer components

A Local scorer consists of three main parts:
  1. Scorer Function: Receives a single Span or Trace containing the LLM input and output, and computes a score. The exact measurement is up to you; for example, you might measure the length of the output or rate it based on the presence or absence of specific words.
  2. Aggregator Function: Aggregates the scores generated by the Scorer Function and returns a final metric value. This function receives a list of the type returned by your Scorer. For instance, if your Scorer returns a str, the Aggregator will be called with a list[str]. The Aggregator’s return value can also be any type (e.g., str, bool, int), depending on how you want to represent the final metric.
  3. LocalMetricConfig[type]: A typed callable provided by Galileo’s Python SDK that combines your Scorer and Aggregator into a custom metric.
    • The generic type should match the type returned by your Aggregator.
    • Example: If your Scorer returns bool values, you would use LocalMetricConfig[bool](…), and your Aggregator must accept a list[bool] and return a bool.
Scorer and Aggregator functions can be simple lambdas when your logic is straightforward.

Local Metrics let you tailor evaluation to your exact needs by defining custom scoring logic in code. Whether you want to measure response brevity, detect specific keywords, or implement a complex scoring algorithm, Local Metrics integrate seamlessly with Galileo’s experimentation framework. Once you’ve defined your Scorer and Aggregator functions and wrapped them in a LocalMetricConfig, running the experiment is as simple as calling run_experiment; the results appear alongside Galileo’s built-in metrics, so you can compare, visualize, and analyze everything in one place. With Local Metrics, you have full control over how you measure LLM behavior, unlocking deeper insights and more targeted evaluations for your AI applications.
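To make the pieces concrete, here is a minimal sketch of wiring a Scorer and an Aggregator into a LocalMetricConfig. The import path and parameter names are assumptions; see the Create a local metric guide in Next steps for the exact API:
# Minimal sketch only: the import path and keyword arguments are assumptions.
from galileo.schema.metrics import LocalMetricConfig

# Scorer: receives a single span/trace; here we assume its output text is
# exposed as an `output` attribute (an assumption).
def response_length(step) -> float:
    return float(len(step.output or ""))

# Aggregator: receives a list[float] and returns the final metric value.
def average_length(scores: list[float]) -> float:
    return sum(scores) / len(scores) if scores else 0.0

length_metric = LocalMetricConfig[float](
    name="average_response_length",  # assumed parameter name
    scorer_fn=response_length,       # assumed parameter name
    aggregator_fn=average_length,    # assumed parameter name
)

# The metric can then be passed to run_experiment alongside built-in metrics.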


Comparison: registered custom metrics vs. local metrics

| Feature | Registered Custom Metrics | Local Metrics |
| --- | --- | --- |
| Creation | Python client, activated via UI | Python client only |
| Sharing | Organization-wide | Current project only |
| Environment | Server-side | Local Python environment |
| Libraries | Limited to Galileo environment | Any available library |
| Resources | Restricted by Galileo | Local resources |

Common use cases

Custom metrics are ideal for:
  • Heuristic evaluation: Checking for specific patterns, keywords, or structural elements
  • Model-guided evaluation: Using pre-trained models to detect entities or LLMs to grade outputs
  • Business-specific metrics: Measuring domain-specific quality indicators
  • Comparative analysis: Comparing outputs against ground truth or reference data

Simple example: sentiment scorer

Here’s a simple custom metric that evaluates the sentiment of responses:
# sentiment_scorer.py
from typing import Dict, List, Union, Type

def scorer_fn(*, node_output: str, **kwargs) -> float:
    """
    A simple sentiment scorer that counts positive and negative words.
    Returns a score between -1 (negative) and 1 (positive).
    """
    positive_words = [
        "good", "great", "excellent", 
        "positive", "happy", "best", "wonderful"
    ]
    negative_words = [
        "bad", "poor", "negative", "terrible", "worst", "awful", "horrible"
    ]

    # Convert to lowercase for case-insensitive matching
    text = node_output.lower()

    # Count occurrences
    positive_count = sum(text.count(word) for word in positive_words)
    negative_count = sum(text.count(word) for word in negative_words)

    total_count = positive_count + negative_count

    # Calculate sentiment score
    if total_count == 0:
        return 0.0  # Neutral

    return (positive_count - negative_count) / total_count

def aggregator_fn(*, scores: List[float]) -> Dict[str, Union[float, int]]:
    """Aggregate sentiment scores across responses."""
    if not scores:
        return {"Average Sentiment": 0.0}

    avg_sentiment = sum(scores) / len(scores)

    return {
        "Average Sentiment": round(avg_sentiment, 2),
        "Positive Responses": sum(1 for score in scores if score > 0.2),
        "Neutral Responses": sum(1 for score in scores if -0.2 <= score <= 0.2),
        "Negative Responses": sum(1 for score in scores if score < -0.2)
    }

# Declare the scorer's return type (defaults to float if omitted)
def score_type() -> Type[float]:
    return float
This simple sentiment scorer:
  • Counts positive and negative words in responses
  • Calculates a sentiment score between -1 (negative) and 1 (positive)
  • Aggregates results to show the distribution of positive, neutral, and negative responses
You can easily extend this with more sophisticated sentiment analysis techniques or domain-specific terminology.
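For instance, one such extension (a sketch, assuming the nltk package and its VADER lexicon are installed in your environment) replaces the word lists with NLTK's VADER analyzer, which also produces a score between -1 and 1. Because nltk is not in the registered-metric execution environment's library list, this variant is better suited to a local metric:
# sentiment_scorer_vader.py -- sketch of swapping in NLTK's VADER analyzer
# Requires the nltk package plus a one-time nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

_analyzer = SentimentIntensityAnalyzer()

def scorer_fn(*, node_output: str, **kwargs) -> float:
    # VADER's "compound" score is already normalized to the range [-1, 1]
    return _analyzer.polarity_scores(node_output)["compound"]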

Next steps

Create a local metric

Learn how to create a local metric in Python to use in your experiments