Overview
The Python SDK allows you to log all prompts, responses, and statistics around your LLM usage. There are three main ways to log your application:
- Using a wrapper (Recommended) - Instead of importing common LLM clients like `openai` directly, use Galileo’s wrapper, which automatically logs everything with no other code changes required.
- Using a decorator - By decorating a function that calls an LLM with the `@log` decorator, the Galileo SDK logs all AI prompts within it.
- Directly using the `GalileoLogger` class (Manual) - As a last resort, you can use the base class directly, but this requires calling multiple methods per LLM call.
Regardless of how you go about logging your AI application, you will still need to initialize your API keys and install the Galileo SDK by following the steps below.
Key Concepts
Throughout this reference guide, the following concepts are used extensively:
- Project - All logs are stored within a project in Galileo. You can create and manage your projects using the Galileo UI.
- Log Streams - Log streams are a way to organize logs in Galileo. You can create and manage your log streams using the Galileo UI.
- Traces - These track a collection of logs that represent a “single response”. For multi-step LLM calls, this helps debug how the response was built, and where issues may have occurred.
- Spans - Spans are a single step in a trace. A span can be a `workflow` if it contains multiple sub-spans, an `llm` for a step invoking an LLM call, a `retriever` for when you retrieve data, or a `tool` for agentic tool calls.
As your application runs, it will stream logs back to Galileo in a series of traces that then get analyzed using Metrics you set up. Traces that seem problematic can then be reviewed step by step to determine what part of the pipeline needs changing, or if the Metrics need tweaking.
Installation
Install Galileo’s Python SDK to your project by running:
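For example, with pip (the SDK is published on PyPI as `galileo`):

```bash
pip install galileo
```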
Initialization/Authentication
Create or update a `.env` file with the following values:
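A minimal example, assuming the standard variable names the SDK reads; replace the placeholder values with your own:

```
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name
```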
Logging
Using LLM Wrappers
The simplest way to get started is to use our OpenAI client wrapper. This example will automatically produce a single-span trace in the Log Stream UI:
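A minimal sketch, assuming the wrapper is exposed as `galileo.openai` and that `OPENAI_API_KEY` is set in your environment:

```python
import os

from galileo.openai import openai  # drop-in replacement for the openai module

# Use the wrapped client exactly like the regular OpenAI client; each
# completion call is logged to Galileo as a single-span trace.
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello to Galileo!"}],
)
print(response.choices[0].message.content)
```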
Using the `@log` Decorator
The `@log` decorator is used to capture the inputs and outputs of a function as a span. By default, a workflow span is created when `span_type` isn’t specified. Here are the different span types:
- Workflow: A span that can have child spans, useful for nesting several child spans to denote a thread within a trace. If you add the `@log` decorator to a parent method, calls made within that scope are automatically logged in the same trace.
- Llm: Captures the input, output, and settings of an LLM call. This span is created automatically when our OpenAI client library wrapper is used. It cannot have nested children.
- Retriever: Contains the output documents of a retrieval operation.
- Tool: Captures the input and output of a tool call. Used to decorate functions that are invoked as tools.
This example will create a trace with a workflow span and two nested llm spans:
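A minimal sketch, assuming the same OpenAI wrapper import as above; the helper function name is hypothetical, and the wrapped client is what creates the two nested llm spans:

```python
import os

from galileo import log
from galileo.openai import openai

client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def call_llm(prompt: str) -> str:
    # Each call through the wrapped client is logged as a nested llm span.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

@log  # No span_type specified, so this becomes a workflow span.
def write_story(topic: str) -> str:
    outline = call_llm(f"Write a one-line outline for a story about {topic}.")
    return call_llm(f"Write a short story from this outline: {outline}")

write_story("a curious robot")
```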
Here’s how to create a retriever span using the decorator:
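A sketch with a hypothetical lookup; in practice the function body would query your vector store or search index, and the documents it returns are captured on the span:

```python
from galileo import log

@log(span_type="retriever")
def retrieve_documents(query: str) -> list[str]:
    # Hypothetical static result; replace with a real retrieval call.
    return ["Paris is the capital of France.", "France is in western Europe."]

retrieve_documents("What is the capital of France?")
```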
Here’s how to create a tool span using the decorator:
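A sketch with a hypothetical weather tool; the decorated function’s input and output are captured on the tool span:

```python
from galileo import log

@log(span_type="tool")
def get_weather(city: str) -> str:
    # Hypothetical implementation; replace with a real API call.
    return f"The weather in {city} is sunny."

get_weather("Paris")
```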
For more detailed information and examples, see the @log Decorator documentation.
Pure invocation using the GalileoLogger
This is the most verbose way to log your application. It requires manually calling the `GalileoLogger` class and adding spans to the trace. We recommend using the other two methods whenever possible.
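A sketch of the manual flow (start a trace, add an llm span, conclude, flush); the project and log stream names are placeholders, and the token counts and durations are illustrative values you would measure yourself:

```python
from galileo import GalileoLogger

logger = GalileoLogger(project="my-project", log_stream="my-log-stream")

# One trace per "single response"; every span must be added explicitly.
trace = logger.start_trace(input="Say hello to Galileo!")
logger.add_llm_span(
    input="Say hello to Galileo!",
    output="Hello, Galileo!",
    model="gpt-4o",
    num_input_tokens=10,
    num_output_tokens=5,
    total_tokens=15,
    duration_ns=1_000_000,
)
logger.conclude(output="Hello, Galileo!", duration_ns=1_000_000)
logger.flush()  # Upload the completed trace to Galileo.
```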
Grouping and Uploading Logs Faster: galileo_context()
Regardless of the method you use to add logs, the Galileo context manager can be useful for a few things:
- Automatically starting a trace and ensuring anything that happens in its scope is logged as a span within the trace.
- In long-running app runtimes like Streamlit, the process never terminates, so traces may never be flushed on their own. You can use the context manager to start a trace and ensure that it is flushed when the manager exits.
- You might want to route a part of your app to a different Project or Log Stream. You can use the context manager to set the trace scope.
Using the context manager to create a trace with a nested LLM span (which is automatically flushed when the manager exits):
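A sketch assuming the same wrapped OpenAI client as above; `my-project` and `my-log-stream` are placeholder names:

```python
import os

from galileo import galileo_context
from galileo.openai import openai

client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Everything inside the block is logged as spans of a single trace and
# flushed automatically when the context exits.
with galileo_context(project="my-project", log_stream="my-log-stream"):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello to Galileo!"}],
    )
    print(response.choices[0].message.content)
```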
In some cases (like long-running processes), it may be necessary to explicitly flush the trace to upload it to Galileo:
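For example, by calling flush on the context directly:

```python
from galileo import galileo_context

# ... application code that produces traces ...

# Force-upload any buffered traces without waiting for the context to exit.
galileo_context.flush()
```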
Additional Documentation
For more detailed information on specific topics, please refer to the following pages:
- OpenAI Wrapper - Using the OpenAI wrapper for automatic logging
- LangChain Integration - Learn how to integrate Galileo with LangChain
- Prompts - Creating and using prompt templates
- Experimentation - Running experiments and evaluations
- Datasets - Working with datasets in Galileo