
Overview

This guide walks you through running a LangGraph app with OpenAI, OpenTelemetry (OTel), and OpenInference. You’ll instrument workflows to capture your AI traces and LLM calls. OpenTelemetry provides industry-standard observability, while OpenInference extends it with AI-specific semantic conventions. Together, they give you complete visibility into your LangGraph workflows, LLM calls, and performance metrics.

In this guide you will

  • Set up a LangGraph app with OpenTelemetry and OpenInference
  • Configure the Galileo OTel endpoint and authentication headers
  • Apply OpenInference instrumentation to automatically capture LLM calls
  • View your traces in the Galileo dashboard

Before you start

You’ll need a Galileo API key, a Galileo project with a Log stream, and an OpenAI API key. The code in this guide reads these values from environment variables.
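As a minimal sketch, you can load them from a local .env file with python-dotenv (installed in the next step); the variable names below are the ones the rest of this guide assumes.
import os

from dotenv import load_dotenv

# Load GALILEO_API_KEY, GALILEO_PROJECT, GALILEO_LOG_STREAM, and OPENAI_API_KEY
# from a local .env file into the process environment
load_dotenv()

missing = [
    name
    for name in ("GALILEO_API_KEY", "GALILEO_PROJECT", "OPENAI_API_KEY")
    if not os.environ.get(name)
]
if missing:
    raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")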

Set up your LangGraph app with OpenTelemetry

For this how-to guide, we’ll assume some familiarity with LangGraph and with basic observability principles. If you’d like to see a full walkthrough of this LangGraph application, check out the full cookbook. Below, you’ll find instructions on the key parts that come into play when using OpenTelemetry for observability, as well as how to integrate existing OTel-compatible applications with Galileo.
1. Install required dependencies

First, install the required OpenTelemetry and OpenInference dependencies alongside your application dependencies.
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp \
  openinference-instrumentation-langchain openinference-instrumentation-openai \
  langgraph openai python-dotenv
2. Configure OTel endpoints

For OTel traces to reach Galileo, you’ll need to configure the correct OTel endpoint. The OTel endpoint is different from Galileo’s regular API endpoint and is specifically designed to receive telemetry data in the OTLP format. The OTLPSpanExporter is the component responsible for taking your traces, converting them to OTLP, and sending them to Galileo; spans are batched by the BatchSpanProcessor to optimize performance.
If you’re using a self-hosted or other custom Galileo deployment, be sure to replace app.galileo.ai with your deployment URL.
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Galileo's OpenTelemetry endpoint
endpoint = "https://app.galileo.ai/api/galileo/otel/traces"

# Configure the OTLP exporter and batch the spans it sends
exporter = OTLPSpanExporter(endpoint=endpoint)
span_processor = BatchSpanProcessor(exporter)
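The BatchSpanProcessor defaults are usually fine, but it does expose tuning knobs if you need to control how often and how much is exported. The values below are illustrative, not recommendations.
# Optional: tune batching behavior (illustrative values, not recommendations)
span_processor = BatchSpanProcessor(
    exporter,
    max_queue_size=2048,          # spans buffered before new ones are dropped
    schedule_delay_millis=5000,   # how often the batch is flushed
    max_export_batch_size=512,    # spans per export request
)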
3. Create headers for OTel

Before any data is sent to Galileo, the application must ensure it’s headed to the right place and is authenticated. OpenTelemetry has specific requirements for how this authentication and metadata is formatted: headers must be set as a comma-separated string in the OTEL_EXPORTER_OTLP_TRACES_HEADERS environment variable.
import os

# Standard dictionary format (what you might expect)
headers = {
    "Galileo-API-Key": os.environ.get("GALILEO_API_KEY"),
    "project": os.environ.get("GALILEO_PROJECT"),
    "logstream": os.environ.get("GALILEO_LOG_STREAM", "default"),
}

# OpenTelemetry requires headers in this specific format
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = ",".join(
    [f"{k}={v}" for k, v in headers.items()]
)
print(f"OTEL Headers: {os.environ['OTEL_EXPORTER_OTLP_TRACES_HEADERS']}")
4. Configure OpenTelemetry tracing

We’ll now assemble the complete OTel pipeline: create a tracer provider, configure it with your service metadata, and set up a processor that batches and exports traces to the OTel endpoint. Without this, OpenTelemetry has no way of knowing where to send your traces or how to identify your service.
import os

from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Set up headers (as shown above)
headers = {
    "Galileo-API-Key": os.environ.get("GALILEO_API_KEY"),
    "project": os.environ.get("GALILEO_PROJECT"),
    "logstream": os.environ.get("GALILEO_LOG_STREAM", "default"),
}
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = ",".join(
    [f"{k}={v}" for k, v in headers.items()]
)

# Configure OpenTelemetry
endpoint = "https://app.galileo.ai/api/galileo/otel/traces"
resource = Resource.create({
    "service.name": "your-service-name",
    "service.version": "1.0.0",
})

# Create tracer provider
tracer_provider = trace_sdk.TracerProvider(resource=resource)

# Add span processor for Galileo
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint))
)
trace_api.set_tracer_provider(tracer_provider=tracer_provider)
5. Apply OpenInference instrumentation

Enable automatic AI observability by applying OpenInference instrumentors. These automatically capture LLM calls, token usage, and model performance without requiring changes to your existing code.
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
What this enables automatically:
  • LangGraph operations and OpenAI API calls are traced
  • Token usage and model information is captured
  • Performance metrics and errors are recorded
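As a concrete illustration, here’s a minimal sketch of a LangGraph app that these two instrumentors would trace end to end with no tracing code of its own. The graph shape, node name, and model are assumptions for the example, not part of the guide’s cookbook.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from openai import OpenAI

client = OpenAI()

class State(TypedDict):
    question: str
    answer: str

def answer_node(state: State) -> dict:
    # The OpenAI call below is captured by OpenAIInstrumentor; the surrounding
    # graph execution is captured by LangChainInstrumentor
    response = client.chat.completions.create(
        model="gpt-4o",  # example model
        messages=[{"role": "user", "content": state["question"]}],
    )
    return {"answer": response.choices[0].message.content}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()

result = app.invoke({"question": "What is OpenTelemetry?"})
print(result["answer"])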
6. Viewing your traces in Galileo

Once your application is running with OpenTelemetry configured, you can view your traces in the Galileo dashboard. Navigate to your project and Log stream to see the complete trace graph showing your LangGraph workflow execution.

[Image: Galileo dashboard showing the OpenTelemetry trace graph view with LangGraph workflow spans]

The trace graph displays:
  • Workflow spans showing the execution flow through your LangGraph nodes
  • LLM call details with token usage and model information
  • Performance metrics including timing and resource utilization
  • Error tracking if any issues occur during execution

Run your application with OpenTelemetry

With OpenTelemetry correctly configured, your application will now automatically capture and send observability data to Galileo with every run. You’ll see complete traces of your LangGraph workflows, detailed LLM call breakdowns with token counts, and performance insights organized by project and Log stream in your Galileo dashboard. This provides consistent, well-structured logging across all your AI applications without requiring additional code changes, enabling effective monitoring, debugging, and optimization at scale.

[Image: Galileo dashboard showing the OpenTelemetry conversation view with detailed LLM call breakdowns and token usage]
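One practical note for short-lived scripts: because spans are exported in batches on a background thread, it’s worth flushing the tracer provider before the process exits so no traces are dropped. A minimal sketch:
# Flush any spans still buffered in the BatchSpanProcessor before exiting,
# then shut the provider down cleanly
tracer_provider.force_flush()
tracer_provider.shutdown()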

OpenInference semantic conventions for LangGraph

When running your LangGraph app with OpenInference, Galileo automatically applies semantic conventions to your traces, capturing model information, token usage, and performance metrics without any additional code. For advanced use cases, you can also manually add custom attributes to enhance your traces with domain-specific information:
1. Span attributes

from opentelemetry import trace

# Get the currently active span (for example, one created by the OpenInference
# instrumentation or by your own tracer); user_prompt and ai_response below are
# placeholders for your own values
span = trace.get_current_span()

# Model information
span.set_attribute("gen_ai.system", "openai")
span.set_attribute("gen_ai.request.model", "gpt-4")
span.set_attribute("gen_ai.request.prompt", user_prompt)

# Response information
span.set_attribute("gen_ai.response.model", "gpt-4")
span.set_attribute("gen_ai.response.content", ai_response)

# Token usage
span.set_attribute("gen_ai.usage.prompt_tokens", 150)
span.set_attribute("gen_ai.usage.completion_tokens", 75)
span.set_attribute("gen_ai.usage.total_tokens", 225)
2. Events

# Add events to spans for additional context
span.add_event("model.loaded", {
    "model.name": "gpt-4",
    "model.size": "1.7T",
    "load.time_ms": 2500
})

span.add_event("inference.started", {
    "batch.size": 1,
    "max.tokens": 1000
})

span.add_event("inference.completed", {
    "duration.ms": 1250,
    "tokens.generated": 75
})
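Beyond the gen_ai.* conventions above, you can also attach your own domain-specific attributes to spans you create. A minimal sketch, where the span name and attribute keys are illustrative and not part of any convention:
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

# Hypothetical domain-specific attributes for a support-triage workflow
with tracer.start_as_current_span("support.ticket.triage") as span:
    span.set_attribute("support.ticket.id", "TICKET-1234")
    span.set_attribute("support.ticket.priority", "high")
    span.set_attribute("support.workflow.node", "triage")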

Troubleshooting your LangGraph app

Here are some common troubleshooting steps when using OpenTelemetry and OpenInference.

Headers not formatted correctly

Not seeing your OTel traces in Galileo? Double-check your header formatting. OpenTelemetry requires headers in a specific comma-separated string format, not as a dictionary.
# ❌ Wrong - dictionary format won't work with OTel
headers = {"Galileo-API-Key": "your-key", "project": "your-project"}

# ✅ Correct - must be comma-separated string format
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = \
    "Galileo-API-Key=your-key,project=your-project,logstream=default"

Wrong endpoint

Make sure you’re sending traces to Galileo’s OTel endpoint, not the native SDK endpoint.
# ❌ Wrong - this is the native SDK endpoint
endpoint = "https://api.galileo.ai/v2/otlp"

# ✅ Correct - this is the OTel endpoint
endpoint = "https://app.galileo.ai/api/galileo/otel/traces"

Console URL incorrect

For custom Galileo deployments, replace app.galileo.ai with your deployment URL.
# ❌ Wrong - using default URL for custom deployment
endpoint = "https://app.galileo.ai/api/galileo/otel/traces"

# ✅ Correct - using your custom deployment URL
endpoint = "https://your-custom-domain.com/api/galileo/otel/traces"

Missing LangGraph instrumentation

Not seeing your LangGraph workflow traces? Ensure you’re instrumenting both LangGraph and the underlying LLM providers. LangGraph workflows require instrumentation at multiple levels to capture the complete execution flow.
# ❌ Wrong - only instrumenting OpenAI, missing LangGraph workflow tracing
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

# ✅ Correct - instrument both LangGraph workflows (via the LangChain
# instrumentor, which also traces LangGraph) and the LLM provider
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

Next steps
