OpenTelemetry
The first step is to configure OpenTelemetry.
Installation
Add the OpenTelemetry packages to your project. The opentelemetry-api and opentelemetry-sdk packages provide the core OpenTelemetry functionality, and the opentelemetry-exporter-otlp package enables sending traces to Galileo’s OTLP endpoint.
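For a Python project, the installation typically looks like this (assuming pip as your package manager):

```sh
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```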
Create environment variables for your Galileo settings
Set environment variables for your Galileo settings, for example in a .env file:
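A sketch of the .env file; the variable names GALILEO_API_KEY, GALILEO_PROJECT, and GALILEO_LOG_STREAM are illustrative placeholders, not names mandated by Galileo:

```sh
# Illustrative variable names; use whatever names your code reads.
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name
```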
Get the authentication headers
The OTel headers are a dictionary containing your API key, project name, and log stream name. The convention for OTel is to store these headers as a single string of key=value pairs in the OTEL_EXPORTER_OTLP_HEADERS environment variable.
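A minimal sketch of building that string in Python, assuming the .env variables above have been loaded into the environment (for example with python-dotenv). The header keys Galileo-API-Key, project, and logstream are assumptions here; check Galileo’s documentation for the exact names:

```python
import os

# Hypothetical header keys; confirm the exact names in Galileo's docs.
headers = {
    "Galileo-API-Key": os.environ["GALILEO_API_KEY"],
    "project": os.environ["GALILEO_PROJECT"],
    "logstream": os.environ["GALILEO_LOG_STREAM"],
}

# OTel convention: a single comma-separated string of key=value pairs.
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = ",".join(
    f"{key}={value}" for key, value in headers.items()
)
```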
Get your endpoint
The OTel endpoint is different from Galileo’s regular API endpoint and is specifically designed to receive telemetry data in the OTLP format.

If you are using app.galileo.ai, the OTel endpoint is https://api.galileo.ai/otel/traces.

If you’re using a self-hosted Galileo deployment, replace the https://api.galileo.ai/otel/traces endpoint with your deployment URL. The format of this URL is based on your console URL, replacing console with api and appending /otel/traces. For example:

- if your console URL is https://console.galileo.example.com, the OTel endpoint would be https://api.galileo.example.com/otel/traces
- if your console URL is https://console-galileo.apps.mycompany.com, the OTel endpoint would be https://api-galileo.apps.mycompany.com/otel/traces

Store this endpoint in the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. For example:
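Using the hosted endpoint from above, the .env entry would be:

```sh
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.galileo.ai/otel/traces
```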
Create a span processor
You can now create the span processor. This will use an OTel exporter configured by loading the headers and endpoint from the environment variables.
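A sketch using the OTLP HTTP exporter. The endpoint is passed explicitly so the full /otel/traces path is used as-is, since the exporter would otherwise append its own default path to the generic OTEL_EXPORTER_OTLP_ENDPOINT value; the headers are picked up automatically from OTEL_EXPORTER_OTLP_HEADERS:

```python
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# The exporter reads OTEL_EXPORTER_OTLP_HEADERS from the environment on its own;
# the endpoint is passed explicitly so the /otel/traces path is preserved.
exporter = OTLPSpanExporter(endpoint=os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"])

# Batch spans before export to reduce network overhead.
span_processor = BatchSpanProcessor(exporter)
```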
Register the span processor
The span processor can now be registered with an OTel tracer provider.
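A minimal sketch of registering the processor, assuming the span_processor from the previous step:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Create a provider, attach the processor, and make it the global default.
provider = TracerProvider()
provider.add_span_processor(span_processor)
trace.set_tracer_provider(provider)
```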
OpenInference
Now you can enable automatic tracing for your framework and LLM operations using OpenInference instrumentors. These add AI-specific semantic conventions to your traces, providing:

- Automatic capture of LLM calls, token usage, and model performance metrics
- AI-specific span attributes like gen_ai.request.model, gen_ai.response.content, and gen_ai.usage.*
- Semantic conventions that make your traces more meaningful in Galileo’s dashboard
- Framework-specific instrumentation for LangGraph workflows and OpenAI API calls

For example, to instrument LangChain and OpenAI, start by adding the relevant OpenInference packages (see the sketch after this list):
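The corresponding packages on PyPI are openinference-instrumentation-langchain and openinference-instrumentation-openai:

```sh
pip install openinference-instrumentation-langchain openinference-instrumentation-openai
```

Then register the instrumentors against the tracer provider from the previous step. A minimal sketch, assuming the provider variable from above:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Instrument LangChain/LangGraph and the OpenAI client so their calls
# are captured as spans on the registered tracer provider.
LangChainInstrumentor().instrument(tracer_provider=provider)
OpenAIInstrumentor().instrument(tracer_provider=provider)
```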