OpenTelemetry
The first step is to configure OpenTelemetry.
Installation
Add the OpenTelemetry packages to your project. The opentelemetry-api and opentelemetry-sdk packages provide the core OpenTelemetry functionality. The opentelemetry-exporter-otlp package enables sending traces to Galileo’s OTLP endpoint.
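For example, installing with pip (these are the standard package names on PyPI):

```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```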
Create environment variables for your Galileo settings
Set environment variables for your Galileo settings, for example in a .env file. These environment variables are consumed by the GalileoSpanProcessor to authenticate and route traces to the correct Galileo Project and Log stream:
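A minimal .env sketch. The variable names below (GALILEO_API_KEY, GALILEO_PROJECT, GALILEO_LOG_STREAM) are assumed from the Galileo SDK’s standard configuration; confirm the exact names against your SDK version:

```
# Placeholder values — replace with your own Galileo credentials and targets.
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name
```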
Self-hosted deployments: Set the OTel endpoint
Skip this step if you are using Galileo Cloud.
If you are using:
- Galileo Cloud at app.galileo.ai, then you don’t need to provide a custom OTel endpoint. The default endpoint https://api.galileo.ai/otel/traces will be used automatically.
- A self-hosted Galileo deployment, replace the https://api.galileo.ai/otel/traces endpoint with your deployment URL. The format of this URL is based on your console URL, replacing console with api and appending /otel/traces:
  - if your console URL is https://console.galileo.example.com, the OTel endpoint would be https://api.galileo.example.com/otel/traces
  - if your console URL is https://console-galileo.apps.mycompany.com, the OTel endpoint would be https://api-galileo.apps.mycompany.com/otel/traces

Set the endpoint using the OTEL_EXPORTER_OTLP_ENDPOINT environment variable.
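For example, in your .env file, using the hypothetical hostname from the first example above:

```
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.galileo.example.com/otel/traces
```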
Initialize and create the Galileo span processor
The GalileoSpanProcessor automatically configures authentication and metadata using your environment variables. It also:
- Auto-builds OTLP headers using your Galileo credentials
- Configures the correct OTLP trace endpoint
- Registers a batch span processor that exports traces to Galileo
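A minimal sketch of creating the processor. The import path shown here is an assumption — verify it against the Galileo SDK documentation for your version:

```python
# Assumed import path — confirm against the Galileo SDK docs.
from galileo.opentelemetry import GalileoSpanProcessor

# Reads your Galileo settings (API key, project, log stream) from the
# environment variables set earlier and builds the OTLP export config.
galileo_processor = GalileoSpanProcessor()
```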
Register the span processor
The span processor can now be registered with an OTel trace provider.
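For example, using the standard OpenTelemetry SDK APIs, where galileo_processor is the processor created in the previous step:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

# Create a trace provider, attach the Galileo span processor, and make
# the provider the global default so instrumentors can find it.
provider = TracerProvider()
provider.add_span_processor(galileo_processor)
trace.set_tracer_provider(provider)
```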
OpenInference
Now you can enable automatic tracing for your framework and LLM operations using OpenInference instrumentors. These add AI-specific semantic conventions to your traces, giving you:
- Automatic capture of LLM calls, token usage, and model performance metrics
- AI-specific span attributes like gen_ai.request.model, gen_ai.response.content, and gen_ai.usage.*
- Semantic conventions that make your traces more meaningful in Galileo’s dashboard
- Framework-specific instrumentation for LangGraph workflows and OpenAI API calls

For example, to instrument LangChain and OpenAI, start by adding the relevant OpenInference packages:
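The following are the OpenInference instrumentation packages published on PyPI:

```bash
pip install openinference-instrumentation-langchain openinference-instrumentation-openai
```

Then apply the instrumentors, pointing them at the tracer provider registered earlier. This is a sketch using the standard OpenInference instrumentor API; provider is the TracerProvider from the previous step:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Instrument LangChain/LangGraph and the OpenAI client so their calls are
# captured as spans and exported through the Galileo span processor.
LangChainInstrumentor().instrument(tracer_provider=provider)
OpenAIInstrumentor().instrument(tracer_provider=provider)
```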