Overview
This guide shows you how to log spans to Galileo using the `GalileoLogger`.
The Galileo wrappers and `@log` decorator are the preferred way to create traces and spans, but there are times when you need to create a trace or span manually for more granular control over the data you are logging.
This guide shows how to manually log a number of spans when calling an LLM. This pattern is useful when the wrappers or decorator are not applicable, such as when using an unsupported LLM SDK like the Azure AI Inference SDK. You will be using OpenAI for this example.
In this guide you will:
- Create a basic app to call OpenAI
- Create a new Galileo logger to log traces and spans to
- Add spans
- Add more details to the trace
Before you start
To complete this how-to, you will need:
- An OpenAI API key
- A Galileo project configured
- Your Galileo API key
Install dependencies
To use Galileo, you need to install some package dependencies and configure environment variables.
Install the required dependencies
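For Python, the dependencies can be installed with pip. This assumes `galileo` is the package name of the Galileo SDK and uses `python-dotenv` to load the `.env` file; TypeScript users would install the equivalent packages with npm.

```shell
pip install galileo openai python-dotenv
```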
Create a .env file and add the following values:
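A sketch of the `.env` file, with placeholder values. `GALILEO_API_KEY`, `GALILEO_PROJECT`, and `GALILEO_LOG_STREAM` are the variable names referenced elsewhere in this guide; `OPENAI_API_KEY` is the standard name the OpenAI SDK reads.

```
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name
OPENAI_API_KEY=your-openai-api-key
```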
Create a basic app to call OpenAI
Create a file for your application called app.py or app.ts.
Add the following code to call OpenAI to ask a question:
If you are using TypeScript, also update your package.json file as needed.
Run the app to ensure everything is working
Create a new Galileo logger to log traces and spans to
If you are using the Galileo wrappers or decorators, Galileo automatically creates new logging sessions, traces, and spans for you. Since you are doing everything manually, you will need to create a new logger.
Import the Galileo logger
Create a logger instance
Add this at the start of your main function. In TypeScript, do this before the call to promptOpenAI. By default, the logger uses the GALILEO_PROJECT and GALILEO_LOG_STREAM environment variables. You can override these if required by setting the relevant constructor parameters.
Create a new trace
Conclude and flush the logger
Add this at the end of your main function in Python, or at the end of the app.ts file in TypeScript:
Run the app to log the trace
View the logged trace

View details of the trace

Add spans
The next step is to add spans to the trace. You will be adding an LLM span. LLM spans can contain a range of details about the LLM call: as well as the input and output text, you can add properties such as the number of tokens used, the temperature, and more.
Pass the logger to the prompt OpenAI function
Update the main function to pass the logger:
Log the LLM span
Run the app to log the trace
View the logged trace

Add more details to the trace
Now that you have a span, you can add more details to both the span and the trace, such as the duration to help measure latency in your app, and the number of tokens to help understand usage and cost.
Add the number of tokens to the LLM span
Add the duration of the LLM call
Add an import of time to the top of the file:
Add the total duration
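The same timing pattern works for the whole trace; the measured value is what you would pass to logger.conclude (duration_ns is an assumed parameter name). A self-contained illustration of the measurement itself:

```python
import time

start_ns = time.perf_counter_ns()
result = sum(range(1_000))  # stand-in for the app's real work
duration_ns = time.perf_counter_ns() - start_ns

# duration_ns is the value you would pass as
# logger.conclude(output=answer, duration_ns=duration_ns)
```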
Pass the duration to the logger.conclude call:
Run the app to log the trace
View the details of the logged trace

