This guide explains how to integrate Galileo with OpenTelemetry and OpenInference for comprehensive observability and tracing of your AI/ML workflows using industry-standard tools.

OpenTelemetry

The first step is to configure OpenTelemetry.
1. Installation

Add the OpenTelemetry packages to your project:
pip install opentelemetry-api opentelemetry-sdk \
            opentelemetry-exporter-otlp
The opentelemetry-api and opentelemetry-sdk packages provide the core OpenTelemetry functionality. The opentelemetry-exporter-otlp package enables sending traces to Galileo’s OTLP endpoint.
2. Create environment variables for your Galileo settings

Set environment variables for your Galileo settings, for example in a .env file:
# Your Galileo API key
GALILEO_API_KEY="your-galileo-api-key"

# Your Galileo project name
GALILEO_PROJECT="your-galileo-project-name"

# The name of the Log stream you want to use for logging
GALILEO_LOG_STREAM="your-galileo-log-stream"
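If you keep these settings in a .env file, load them into the process environment before configuring the exporter. A minimal sketch, assuming the python-dotenv package is installed (it is not one of the packages installed above):
from dotenv import load_dotenv

# Load GALILEO_API_KEY, GALILEO_PROJECT, and GALILEO_LOG_STREAM from .env
load_dotenv()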
3. Get the authentication headers

The OTel headers form a dictionary containing your API key, project name, and Log stream name. The OTel convention is to store these headers as a single comma-separated string of key=value pairs in an environment variable; the example below uses the traces-specific OTEL_EXPORTER_OTLP_TRACES_HEADERS variable.
import os

# Create a dictionary of headers
headers = {
    "Galileo-API-Key": os.environ.get("GALILEO_API_KEY"),
    "project": os.environ.get("GALILEO_PROJECT"),
    "logstream": os.environ.get("GALILEO_LOG_STREAM", "default"),
}

# Set this as an environment variable
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = ",".join(
    [f"{k}={v}" for k, v in headers.items()]
)
4. Get your endpoint

The OTel endpoint is different from Galileo’s regular API endpoint and is specifically designed to receive telemetry data in the OTLP format. If you are using app.galileo.ai, then the OTel endpoint is https://api.galileo.ai/otel/traces. If you’re using a self-hosted Galileo deployment, replace the https://api.galileo.ai/otel/traces endpoint with your deployment URL. The format of this URL is based on your console URL, replacing console with api and appending /otel/traces. For example:
  • if your console URL is https://console.galileo.example.com, the OTel endpoint would be https://api.galileo.example.com/otel/traces
  • if your console URL is https://console-galileo.apps.mycompany.com, the OTel endpoint would be https://api-galileo.apps.mycompany.com/otel/traces
The convention is to store this in the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable. For example:
os.environ["OTEL_EXPORTER_OTLP_TRACES_ENDPOINT"] = \
    "https://api.galileo.ai/otel/traces"
5. Create a span processor

You can now create the span processor. This uses an OTLP span exporter, which reads the headers and endpoint from the environment variables you set above.
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter
)
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Create a span processor using the OTLP exporter
span_processor = BatchSpanProcessor(OTLPSpanExporter())
6. Register the span processor

The span processor can now be registered with an OTel trace provider.
from opentelemetry.sdk import trace as trace_sdk

tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(span_processor)
You can now use this tracer provider either with a framework that supports OTel directly, or via OpenInference.
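If your framework supports OTel directly, or you want to create spans yourself, register the provider globally and request a tracer from it. A minimal sketch (the span and attribute names are illustrative):
from opentelemetry import trace

# Register the provider globally so instrumented code picks it up
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer(__name__)

# Spans created here are exported to Galileo by the span processor
with tracer.start_as_current_span("my-workflow") as span:
    span.set_attribute("app.example", "value")
    # ... your application code here ...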

OpenInference

Now you can enable automatic tracing for your framework and LLM operations using OpenInference instrumentors. These add AI-specific semantic conventions to your traces. For example, to instrument LangChain and OpenAI, start by adding the relevant OpenInference packages:
pip install openinference-instrumentation-langchain \
            openinference-instrumentation-openai
Now you can add the instrumentors to your code, using the OTel trace provider.
from openinference.instrumentation.langchain import (
    LangChainInstrumentor
)
from openinference.instrumentation.openai import (
    OpenAIInstrumentor
)

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
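With the instrumentors registered, ordinary LLM calls are traced automatically with no further code changes. A minimal sketch, assuming an OPENAI_API_KEY environment variable is set (the model name is illustrative):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is captured by OpenAIInstrumentor and exported to Galileo
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello to Galileo"}],
)
print(response.choices[0].message.content)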
OpenInference adds:
  • Automatic capture of LLM calls, token usage, and model performance metrics
  • AI-specific span attributes such as llm.model_name, llm.input_messages/llm.output_messages, and llm.token_count.* usage counts
  • Semantic conventions that make your traces more meaningful in Galileo’s dashboard
  • Framework-specific instrumentation for LangChain (including LangGraph) workflows and OpenAI API calls
Once OpenTelemetry and OpenInference are set up, your application will automatically capture and send observability data to Galileo with every run, providing complete traces of your AI workflows, detailed LLM call breakdowns, and performance insights organized by project and Log stream.

Next steps

Learn how to integrate with some popular frameworks using OpenTelemetry and OpenInference.