Overview
This guide walks you through running a LangGraph app with OpenAI, OpenTelemetry (OTel), and OpenInference. You'll instrument your workflows to capture AI traces and LLM calls. OpenTelemetry provides industry-standard observability, while OpenInference extends it with AI-specific semantic conventions. Together, they give you complete visibility into your LangGraph workflows, LLM calls, and performance metrics.
Before you start
To follow this guide, you'll need:
- Python 3.9+ installed
- A free Galileo account and API key
- An OpenAI API key
- Basic understanding of LangGraph concepts
- Familiarity with OpenTelemetry basics
Set up your LangGraph app with OpenTelemetry
For this how-to guide, we'll assume some familiarity with LangGraph as well as with basic observability principles. If you'd like to see a full walkthrough of this LangGraph application, check out the full cookbook. Below, you'll find instructions on the key parts that come into play when using OpenTelemetry for observability, as well as how to integrate existing OTel-compatible applications with Galileo.
1. Install required dependencies
First, you'll need to install the additional OTel-related dependencies for your application.
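A typical install might look like the following — the package list here is an assumption based on the OpenTelemetry SDK, the OTLP HTTP exporter, and the OpenInference instrumentors for LangChain/LangGraph and OpenAI, so adjust it to match your setup:

```bash
pip install langgraph langchain-openai \
    opentelemetry-sdk opentelemetry-exporter-otlp-proto-http \
    openinference-instrumentation-langchain openinference-instrumentation-openai
```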
2. Configure OTel endpoints
In order for OTel logs to reach Galileo, you'll need to configure the proper OTel endpoint. The OTel endpoint is different from Galileo's regular API endpoint and is specifically designed to receive telemetry data in the OTLP format. This is handled by the OTLPSpanExporter, the component responsible for converting your traces to OTLP and sending them to Galileo. Spans are batched before export to optimize performance. If you're using a self-hosted Galileo deployment, or any deployment other than app.galileo.ai, be sure to replace app.galileo.ai with your custom deployment URL.
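As a minimal sketch, the exporter might be configured like this — the exact endpoint path shown is an assumption, so confirm the OTel ingestion URL for your deployment in the Galileo docs or console:

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Galileo's OTel ingestion endpoint (assumed path; replace app.galileo.ai
# with your deployment URL if you are self-hosted).
otlp_exporter = OTLPSpanExporter(
    endpoint="https://app.galileo.ai/api/galileo/otel/traces",
)
```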
3. Create headers for OTel
Before any data is sent to Galileo, the application must ensure that the data is headed to the right place, and in a secure manner. OpenTelemetry has specific requirements for how authentication and metadata need to be formatted: headers must be set in the OTEL_EXPORTER_OTLP_TRACES_HEADERS environment variable.
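For example, the headers can be set before the exporter is created. The header keys shown here (Galileo-API-Key, project, logstream) are assumptions for illustration, so check the header names your Galileo deployment expects:

```python
import os

# OTLP headers must be a single comma-separated string of key=value pairs,
# not a Python dict. Header names and values here are illustrative.
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = (
    f"Galileo-API-Key={os.environ['GALILEO_API_KEY']},"
    "project=my-langgraph-project,"
    "logstream=default"
)
```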
4. Configure OpenTelemetry tracing
Now assemble the complete OTel system: create a tracer provider, configure it with your service metadata, and set up a pipeline that batches and exports traces to the OTel endpoint. Without this, OpenTelemetry has no way of knowing where to send your traces or how to identify your service.
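A minimal wiring of these pieces might look like the following sketch; the service name and variable names are illustrative:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Identify your service so its traces are attributable in Galileo.
resource = Resource.create({"service.name": "langgraph-demo"})

# Create the provider, attach a batching pipeline around the exporter
# from the previous step, and register the provider globally.
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(BatchSpanProcessor(otlp_exporter))
trace.set_tracer_provider(tracer_provider)
```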
5. Apply OpenInference instrumentation
Enable automatic AI observability by applying the OpenInference instrumentors. These automatically capture LLM calls, token usage, and model performance without requiring changes to your existing code (see the sketch after this list). What this enables automatically:
- LangGraph operations and OpenAI API calls are traced
- Token usage and model information is captured
- Performance metrics and errors are recorded
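A sketch of applying the instrumentors, assuming the OpenInference packages from the install step; LangGraph calls are covered by the LangChain instrumentor:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Instrument LangChain/LangGraph and the OpenAI client against the
# tracer provider configured above.
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```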
6. View your traces in Galileo
Once your application is running with OpenTelemetry configured, you can view your traces in the Galileo dashboard. Navigate to your project and Log stream to see the complete trace graph showing your LangGraph workflow execution.
The trace graph displays:

- Workflow spans showing the execution flow through your LangGraph nodes
- LLM call details with token usage and model information
- Performance metrics including timing and resource utilization
- Error tracking if any issues occur during execution
Run your application with OpenTelemetry
With OpenTelemetry correctly configured, your application will now automatically capture and send observability data to Galileo with every run. You’ll see complete traces of your LangGraph workflows, detailed LLM call breakdowns with token counts, and performance insights organized by project and Log stream in your Galileo dashboard. This provides consistent, well-structured logging across all your AI applications without requiring additional code changes, enabling effective monitoring, debugging, and optimization at scale.
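For illustration, here is a minimal LangGraph graph that would be traced automatically once the setup above is in place — the model name, state shape, and prompt are placeholders:

```python
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

llm = ChatOpenAI(model="gpt-4o-mini")

def answer_node(state: State) -> dict:
    # The LLM call is captured by the OpenInference instrumentors;
    # no tracing code is needed here.
    response = llm.invoke(state["question"])
    return {"answer": response.content}

graph = StateGraph(State)
graph.add_node("answer", answer_node)
graph.add_edge(START, "answer")
graph.add_edge("answer", END)
app = graph.compile()

result = app.invoke({"question": "What does OpenTelemetry do?"})
print(result["answer"])
```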
OpenInference semantic conventions for LangGraph
When running your LangGraph app with OpenInference, Galileo automatically applies semantic conventions to your traces, capturing model information, token usage, and performance metrics without any additional code. For advanced use cases, you can also manually add custom attributes to enhance your traces with domain-specific information:
1. Span attributes
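For example, you can attach custom attributes to the current span using the standard OTel API. The attribute keys below are illustrative; anything auto-captured (such as model name and token counts) does not need to be set by hand:

```python
from opentelemetry import trace

# Attach domain-specific attributes to whatever span is currently active.
span = trace.get_current_span()
span.set_attribute("customer.tier", "enterprise")
span.set_attribute("workflow.variant", "experiment-b")
```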
2. Events
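Similarly, a span event can mark a point-in-time occurrence inside a span; the event name and attributes here are illustrative:

```python
from opentelemetry import trace

span = trace.get_current_span()
span.add_event(
    "retrieval.completed",
    attributes={"retrieval.documents.count": 5},
)
```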
Troubleshooting your LangGraph app
Here are some common troubleshooting steps when using OpenTelemetry and OpenInference.
Headers not formatted correctly
Not seeing your OTel traces in Galileo? Double check your header formatting. OpenTelemetry requires headers in a specific comma-separated string format, not as a dictionary.
Wrong endpoint
Confirm that your exporter points at Galileo's OTel endpoint rather than the regular API endpoint (see step 2 above).
Console URL incorrect
For custom Galileo deployments, replace app.galileo.ai with your deployment URL.