Run Experiments with Code
As you progress from initial testing to systematic evaluation, you’ll want to run experiments to validate your application’s performance and behavior. Here are several ways to structure your experiments, starting from the simplest approaches and moving to more sophisticated implementations.
Experiments fit into both the initial prompt engineering and model selection phases of building your app, and into ongoing application development.
AI Engineers and data scientists can use experiments in notebooks or in simple applications to test out prompts or compare models. AI Engineers can then add experiments to their production apps, allowing experiments to run against complex applications and scenarios, including RAG and agentic flows.
Configure an LLM Integration
To calculate metrics, you will need to configure an integration with an LLM. Visit the relevant API platform to obtain an API key, then add it using the integrations page in the Galileo console.
Work with Prompts
The simplest way to get started with experimentation is by evaluating prompts directly against datasets. This is especially valuable during the initial prompt development and refinement phase, where you want to test different prompt variations. Assuming you’ve previously created a dataset, you can use the following code to run an experiment:
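The sketch below shows one way this could look with the Python SDK. The dataset name, project name, and the exact signatures of `run_experiment`, `get_dataset`, and `create_prompt_template` (including the `prompt_template` and `metrics` parameters) are assumptions based on common SDK patterns; check the SDK reference for the exact names in your version.

```python
from galileo.datasets import get_dataset
from galileo.experiments import run_experiment
from galileo.prompts import create_prompt_template

# Load a dataset created earlier (the name is illustrative)
dataset = get_dataset(name="customer-questions")

# Define the prompt variation to evaluate; {{input}} is filled from each dataset row
prompt = create_prompt_template(
    name="support-prompt-v1",
    project="my-project",
    messages=[
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "{{input}}"},
    ],
)

# Run the experiment: each row is rendered into the prompt, sent to the model,
# and scored with the selected metrics
results = run_experiment(
    "prompt-experiment-v1",
    dataset=dataset,
    prompt_template=prompt,
    metrics=["correctness"],
    project="my-project",
)
```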
Run Experiments with Custom Functions
Once you’re comfortable with basic prompt testing, you might want to evaluate more complex parts of your app using your datasets. This approach is particularly useful when you have a generation function in your app that takes a set of inputs, which you can model with a dataset.
This example uses OpenAI both as the LLM being evaluated and as the LLM that generates metrics.
Galileo is model-agnostic and supports leading LLM providers, including OpenAI, Azure OpenAI, Anthropic, and LLaMA.
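A minimal sketch of such a custom-function experiment is shown below. The dataset name, model name, and `run_experiment` parameters are illustrative, and it assumes your SDK version provides the `galileo.openai` wrapper for automatic logging and passes each dataset row's input to your function.

```python
from galileo.datasets import get_dataset
from galileo.experiments import run_experiment
from galileo.openai import openai  # OpenAI wrapper that logs LLM calls automatically

dataset = get_dataset(name="customer-questions")

def generate_answer(input: str) -> str:
    """The application function under test: takes a dataset row's input and returns the model output."""
    response = openai.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": input},
        ],
    )
    return response.choices[0].message.content

results = run_experiment(
    "custom-function-experiment",
    dataset=dataset,
    function=generate_answer,
    metrics=["correctness"],
    project="my-project",
)
```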
Run Experiments Against Complex Code with Custom Functions
Custom functions can be as complex as required, including multiple steps, agents, RAG, and more. This means you can build experiments around an existing application, allowing you to run experiments against the full application you have built, using datasets to mimic user inputs.
For example, if you have a multi-agent LangGraph chatbot application, you can run an experiment against it using a dataset to define different user inputs, and log every stage in the agentic flow as part of that experiment.
To enable this, you will need to make some small changes to your application logic to handle the logging context from the experiment.
When functions in your application are run by the `run_experiment` call, a logger is created by the experiment runner and a trace is started. This logger can be passed through the application, accessed using the `@log` decorator, or retrieved by calling `galileo_context.get_logger_instance()`.
You will need to change your code to use this instead of creating a new logger and starting a new trace.
Get an existing logger and check for an existing trace
The Galileo SDK maintains a context that tracks the current logger. You can get this logger with the following code:
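For example, in Python:

```python
from galileo import galileo_context

# Returns the logger for the current context; if none exists yet, one is created
logger = galileo_context.get_logger_instance()
```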
If there isn’t a current logger, one will be created by this call, so this will always return a logger.
Once you have the logger, you can check for an existing trace by accessing the current parent trace from the logger. If this is not set, then there is no active trace.
Once you have this information, you can use it to decide to create a new trace in your application. If there is no parent trace, you can safely create a new one.
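A sketch of that check in Python might look like the following; the `start_trace` call and its `input` value are illustrative and assume your application starts traces this way.

```python
from galileo import galileo_context

logger = galileo_context.get_logger_instance()

# current_parent() returns None when no trace is active
if logger.current_parent() is None:
    # No experiment (or other caller) has started a trace,
    # so it is safe to start one for a normal application run
    logger.start_trace(input="user question")  # start_trace and its input are illustrative
```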
You can then safely call your code from the experiment runner as well as in your normal application logic. When called from the experiment runner, your traces will be logged to that experiment. When called from your application code, the traces will be logged as normal.
Using LangGraph agents in an experiment
When using LangGraph in Python, Galileo provides a callback class that handles creating a logger, starting a trace, logging spans, and then concluding and flushing the trace. This conflicts with what experiments require: the logger is created and the trace started at the start of the experiment, and the logger is concluded and flushed at the end.
To work around this, detect whether there is already an active trace and, if there is, configure the callback not to start a new trace or flush it on completion.
The easiest way to do this is to get the current logger from the Galileo context and check whether it has a parent trace. If there is no parent trace, it is a new logger instance and the callback can start and flush the trace as usual. If there is a parent trace, the logger was created by the experiment, so create the callback with parameters set to not start or flush the trace.
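A sketch of this pattern is shown below. The import path and the `start_new_trace`/`flush_on_chain_end` parameter names are assumptions about the callback's API; consult the SDK reference for the exact names in your version.

```python
from galileo import galileo_context
from galileo.handlers.langchain import GalileoCallback  # import path assumed

logger = galileo_context.get_logger_instance()

# A parent trace means we are inside an experiment (or a larger trace),
# so the callback should not start a new trace or flush on completion
inside_existing_trace = logger.current_parent() is not None

callback = GalileoCallback(
    galileo_logger=logger,
    start_new_trace=not inside_existing_trace,   # parameter names assumed
    flush_on_chain_end=not inside_existing_trace,
)

# Pass the callback when invoking your LangGraph agent, for example:
# graph.invoke({"messages": [...]}, config={"callbacks": [callback]})
```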
This behavior is also useful if you are logging to an existing logger, such as when you want the LangGraph agent to only be a part of a larger trace.
Custom Function Logging Principles
There are a few important principles to understand when logging experiments in code.
- When running an experiment, a new logger is created for you and set in the Galileo context. If you create a new logger manually in the application code used in your experiment, this logger will not be used in the experiment.
- To access the logger and manually add traces inside the experiment code, call `galileo_context.get_logger_instance()` (Python) or `getLogger()` (TypeScript) to get the current logger.
- To detect whether there is an active trace, use the `current_parent()` (Python) or `currentParent()` (TypeScript) method on the logger. This returns `None`/`undefined` if there isn't an active trace.
- Handle any places in your application code where a logger is created or a trace is started, and make sure this doesn't happen inside an experiment; use the experiment's logger and trace instead.
- Every row in a dataset is a new trace. If you create new traces manually, they will not be used.
- Do not conclude or flush the logger in your experiment; the experiment will do this for you.
Custom Dataset Evaluation
As your testing needs become more specific, you might need to work with custom or local datasets. This approach is perfect for focused testing of edge cases or when building up your test suite with specific scenarios:
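For example, the sketch below passes a small in-memory list of rows as the dataset, reusing the `generate_answer` function from the earlier example. Whether your SDK version accepts a list of dictionaries directly, rather than a dataset created via the API, is an assumption to verify against the SDK reference.

```python
from galileo.experiments import run_experiment

# A small in-memory dataset covering specific edge cases
local_dataset = [
    {"input": "What is your refund policy for digital purchases?"},
    {"input": "Can I change the shipping address after checkout?"},
    {"input": ""},  # edge case: empty input
]

results = run_experiment(
    "edge-case-experiment",
    dataset=local_dataset,          # assumes local lists of rows are accepted
    function=generate_answer,       # the function from the earlier example
    metrics=["correctness"],
    project="my-project",
)
```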
Custom Metrics for Deep Analysis
For the most sophisticated level of testing, you might need to track specific aspects of your application’s behavior. Custom metrics provide the flexibility to define precisely what you want to measure, enabling deep analysis and targeted improvement:
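As a sketch, a custom metric could be a plain Python function that scores each row's output, passed alongside built-in metric names. The scorer signature and whether the `metrics` list accepts callables are assumptions; check the SDK reference for the supported custom-metric mechanism in your version.

```python
from galileo.experiments import run_experiment

def response_length(input, output, **kwargs) -> int:
    """A locally computed custom metric: the length of the model's response."""
    return len(output or "")

results = run_experiment(
    "custom-metric-experiment",
    dataset=local_dataset,                      # from the previous example
    function=generate_answer,
    metrics=["correctness", response_length],   # mixing built-in and custom metrics is assumed
    project="my-project",
)
```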
Each of these experimentation approaches fits into different stages of your development and testing workflow. As you progress from simple prompt testing to sophisticated custom metrics, Galileo’s experimentation framework provides the tools you need to gather insights and improve your application’s performance at every level of complexity.
Experimenting with Agentic and RAG Applications
The experimentation framework extends naturally to more complex applications like agentic AI systems and RAG (Retrieval-Augmented Generation) applications. When working with agents, you can evaluate various aspects of their behavior, from decision-making capabilities to tool usage patterns. This is particularly valuable when testing how agents handle complex workflows, multi-step reasoning, or tool selection.
For RAG applications, experimentation helps validate both the retrieval and generation components of your system. You can assess the quality of retrieved context, measure response relevance, and ensure that your RAG pipeline maintains high accuracy across different types of queries. This is especially important when fine-tuning retrieval parameters or testing different reranking strategies.
The same experimentation patterns shown above apply to these more complex systems. You can use predefined datasets to benchmark performance, create custom datasets for specific edge cases, and define specialized metrics that capture the unique aspects of agent behavior or RAG performance. This systematic approach to testing helps ensure that your advanced AI applications maintain high quality and reliability in production environments.