Installation
Install the required OpenTelemetry and OpenInference dependencies. Tested with:
opentelemetry-api==1.23.0
opentelemetry-sdk==1.23.0
openinference-instrumentation-langchain==0.3.1
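A typical install pinned to these versions might look like the command below; the opentelemetry-exporter-otlp package is an assumption here, added because the OTLP exporter is used in the setup further down:

```bash
pip install "opentelemetry-api==1.23.0" "opentelemetry-sdk==1.23.0" \
  "opentelemetry-exporter-otlp" "openinference-instrumentation-langchain==0.3.1"
```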
Authentication
- Generate an API key from your Galileo account.
- Copy your project name and Log stream name from the Galileo Console.
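These values are passed to the exporter through the standard OTLP headers environment variable. A minimal sketch follows; the header key names (Galileo-API-Key, project, logstream) are assumptions, so confirm the exact names in the Galileo Console or documentation:

```python
import os

# Header key names below are assumptions -- verify them for your Galileo deployment.
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = (
    "Galileo-API-Key=<your-api-key>,"
    "project=<your-project-name>,"
    "logstream=<your-log-stream-name>"
)
```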
OpenTelemetry setup
Place this setup at the top of your script, before initializing your LLM or pipeline framework.

Note: Apply instrumentation before importing or initializing your framework (e.g., LangChain, OpenAI).
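A sketch of that setup, assuming the OTLP/HTTP exporter from opentelemetry-exporter-otlp and the LangChain instrumentor (swap in the instrumentor for your framework):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from openinference.instrumentation.langchain import LangChainInstrumentor

# Credentials are read from OTEL_EXPORTER_OTLP_TRACES_HEADERS (see Authentication above).
# Register a tracer provider that exports spans to Galileo's OTel endpoint.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="https://app.galileo.ai/api/galileo/otel/traces")
    )
)
trace.set_tracer_provider(provider)

# Apply the instrumentor before importing or initializing your framework.
LangChainInstrumentor().instrument(tracer_provider=provider)
```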
Full working example
Check out a complete working example on Colab: 🔗 OpenTelemetry + Galileo Integration Notebook

Summary
To enable OTel logging with Galileo:
- Set the OTel headers using your API key, project name, and Log stream name.
- Set the exporter endpoint: https://app.galileo.ai/api/galileo/otel/traces
- Register your tracer provider and processors.
- Import and apply the OpenInference instrumentor(s) before initializing your application.
Supported instrumentors
These can be used individually or in combination, depending on your framework (see the sketch after this list):
- OpenAI
- LlamaIndex
- DSPy
- Bedrock
- LangChain
- Mistral
- Guardrails
- Vertex AI
- CrewAI
- Haystack
- LiteLLM
- Groq
- Instructor
- Anthropic
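For instance, a LangChain app that also calls OpenAI directly could apply both instrumentors together. A sketch, assuming the openinference-instrumentation-openai package is installed alongside the LangChain one:

```python
from openinference.instrumentation.langchain import LangChainInstrumentor
from openinference.instrumentation.openai import OpenAIInstrumentor

# Apply every instrumentor you need before initializing the frameworks themselves.
LangChainInstrumentor().instrument()
OpenAIInstrumentor().instrument()
```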
FAQ
How do I create a project?
Log in to the Galileo Console and create a new logging project. After creation, copy the project name and Log stream name from the interface.

Which instrumentors should I import?
Choose the instrumentor(s) matching your framework. Instructor example with client:
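A sketch of that Instructor example, assuming the openinference-instrumentation-instructor package and its InstructorInstrumentor class, plus the openai client library:

```python
import instructor
from openai import OpenAI
from openinference.instrumentation.instructor import InstructorInstrumentor

# Instrument Instructor before creating the patched client.
InstructorInstrumentor().instrument()

# Wrap the OpenAI client with Instructor; calls made through it are traced.
client = instructor.from_openai(OpenAI())
```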
What is the correct OTel endpoint?
Use the Galileo API base URL with the /otel/traces path. For example, if your Galileo Console is at https://console.your_cluster_url.com, the endpoint becomes http://api.your_cluster_url.com/otel/traces.
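Whichever base URL applies to your deployment, it is passed to the OTLP exporter as the endpoint (sketch):

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Hosted endpoint shown; swap in your cluster's API URL + /otel/traces if self-hosted.
exporter = OTLPSpanExporter(endpoint="https://app.galileo.ai/api/galileo/otel/traces")
```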
How do I verify it’s working?
- Use ConsoleSpanExporter to see local trace output.
- Check the Galileo dashboard for incoming traces.
- Add a test span like this:
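A sketch of such a test span; the tracer and span names are arbitrary:

```python
from opentelemetry import trace

tracer = trace.get_tracer("galileo-otel-smoke-test")

# If the pipeline is wired up correctly, this span should appear in your Log stream.
with tracer.start_as_current_span("test-span"):
    print("emitting a test span")
```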
Troubleshooting
- ✅ Confirm your API key, project name, and Log stream name are correct.
- ✅ Make sure the OTEL_EXPORTER_OTLP_TRACES_HEADERS environment variable is correctly formatted.
- ✅ Ensure the endpoint is accessible from your environment.
- ✅ Instrument your framework before using it.
- ✅ Restart environment after changing environment variables.
- ✅ Use print(os.environ['OTEL_EXPORTER_OTLP_TRACES_HEADERS']) to debug your header string.
- ✅ Make sure to use BatchSpanProcessor(OTLPSpanExporter(endpoint)), not SimpleSpanProcessor.