This guide explains how to integrate Galileo with OpenTelemetry and OpenInference for comprehensive observability and tracing of your AI/ML workflows using industry-standard tools.

OpenTelemetry

The first step is to configure OpenTelemetry.
Step 1: Installation

Add the OpenTelemetry packages to your project:
pip install opentelemetry-api opentelemetry-sdk \
            opentelemetry-exporter-otlp
The opentelemetry-api and opentelemetry-sdk packages provide the core OpenTelemetry functionality. The opentelemetry-exporter-otlp package enables sending traces to Galileo’s OTLP endpoint.
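To confirm the packages installed correctly, you can run a quick import check. This is a minimal sketch; it only verifies that the modules provided by the packages above are importable:
# Quick import check for the packages installed above.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

print("OpenTelemetry core, SDK, and OTLP exporter are available")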
Step 2: Create environment variables for your Galileo settings

Set environment variables for your Galileo settings, for example in a .env file. These environment variables are consumed by the GalileoSpanProcessor to authenticate and route traces to the correct Galileo Project and Log stream:
# Your Galileo API key
GALILEO_API_KEY="your-galileo-api-key"

# Your Galileo project name
GALILEO_PROJECT="your-galileo-project-name"

# The name of the Log stream you want to use for logging
GALILEO_LOG_STREAM="your-galileo-log-stream"

# Provide the console URL below if you are using a custom deployment
# rather than the free tier at app.galileo.ai. This will look
# something like "console.galileo.yourcompany.com".
# GALILEO_CONSOLE_URL="your-galileo-console-url"
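If you keep these settings in a .env file, one way to load them at startup is with python-dotenv. This is a minimal sketch under that assumption; python-dotenv is not installed by the earlier steps (pip install python-dotenv):
import os

from dotenv import load_dotenv

# Load the .env file so the Galileo settings are available as environment
# variables before the span processor is created.
load_dotenv()

assert "GALILEO_API_KEY" in os.environ  # sanity check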
Step 3: Set the OTel endpoint (self-hosted deployments only)

Skip this step if you are using Galileo Cloud.
The OTel endpoint is different from Galileo’s regular API endpoint and is specifically designed to receive telemetry data in the OTLP format. If you are using:
  • Galileo Cloud at app.galileo.ai, then you don’t need to provide a custom OTel endpoint. The default endpoint https://api.galileo.ai/otel/traces will be used automatically.
  • A self-hosted Galileo deployment, replace the https://api.galileo.ai/otel/traces endpoint with your deployment URL. The format of this URL is based on your console URL, replacing console with api and appending /otel/traces.
For example:
  • if your console URL is https://console.galileo.example.com, the OTel endpoint would be https://api.galileo.example.com/otel/traces
  • if your console URL is https://console-galileo.apps.mycompany.com, the OTel endpoint would be https://api-galileo.apps.mycompany.com/otel/traces
The convention is to store this in the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable. For example:
import os

os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = \
    "https://api.galileo.example.com/otel/traces"
Step 4: Create the Galileo span processor

The GalileoSpanProcessor automatically configures authentication and metadata using your environment variables. It also:
  • Auto-builds OTLP headers using your Galileo credentials
  • Configures the correct OTLP trace endpoint
  • Registers a batch span processor that exports traces to Galileo
from galileo import otel

# GalileoSpanProcessor needs no manual OTLP configuration: it reads the
# Galileo API key, Project, and Log stream from the environment variables
# set above, so make sure they are set first.
galileo_span_processor = otel.GalileoSpanProcessor(
    # Optional parameters; if omitted, the environment variables are used:
    # project=os.environ["GALILEO_PROJECT"],
    # logstream=os.environ.get("GALILEO_LOG_STREAM"),
)
Step 5: Register the span processor

The span processor can now be registered with an OTel trace provider.
from opentelemetry.sdk import trace as trace_sdk

tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(galileo_span_processor)
You can now use this tracer provider either with a framework that supports OTel directly, or via OpenInference.
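Before wiring up a framework, you can sanity-check the pipeline with a manually created span. This is a minimal sketch; the tracer and span names are arbitrary examples, not anything required by Galileo:
from opentelemetry import trace

# Register the provider globally and emit a test span.
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer("galileo-otel-check")

with tracer.start_as_current_span("connectivity-check") as span:
    span.set_attribute("example.note", "hello from OpenTelemetry")

# Batch span processors export asynchronously; flush before a short script exits.
tracer_provider.force_flush()
If the setup is correct, the span should appear in the Log stream you configured.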

OpenInference

Now you can enable automatic tracing for your framework and LLM operations using OpenInference instrumentors, which add AI-specific semantic conventions to your traces. For example, to instrument LangGraph and OpenAI, start by adding the relevant OpenInference packages:
pip install openinference-instrumentation-langgraph \
            openinference-instrumentation-openai
Now you can add the instrumentation to your code, using the OTel trace provider.
from openinference.instrumentation.langgraph import (
    LangGraphInstrumentor
)
from openinference.instrumentation.openai import (
    OpenAIInstrumentor
)

LangGraphInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
OpenInference adds:
  • Automatic capture of LLM calls, token usage, and model performance metrics
  • AI-specific span attributes like gen_ai.request.model, gen_ai.response.content, and gen_ai.usage.*
  • Semantic conventions that make your traces more meaningful in Galileo’s dashboard
  • Framework-specific instrumentation for LangGraph workflows and OpenAI API calls
Once OpenTelemetry and OpenInference are set up, your application will automatically capture and send observability data to Galileo with every run, providing complete traces of your AI workflows, detailed LLM call breakdowns, and performance insights organized by Project and Log stream. For a detailed example of using OpenTelemetry and OpenInference with LangGraph, see the Log with OpenTelemetry, LangGraph, and OpenAI how-to guide.
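As a quick end-to-end check, the sketch below makes a single OpenAI chat completion with the instrumentation from the previous steps already applied. It assumes OPENAI_API_KEY is set; the model name and prompt are illustrative only:
from openai import OpenAI

# OpenAIInstrumentor().instrument(...) has already been called, so this call
# is traced automatically and exported to Galileo by the span processor.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)

# Flush pending spans before the process exits (batch export is asynchronous).
tracer_provider.force_flush()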

Next steps

Learn how to integrate with some popular frameworks using OpenTelemetry and OpenInference.