The Galileo OpenAI wrapper currently only supports the synchronous chat completions API.
- Python Galileo OpenAI SDK reference
- TypeScript Galileo OpenAI SDK reference
Installation
First, make sure you have the Galileo SDK installed. If you are using Python, ensure you install the OpenAI optional dependency.

Basic usage
If you are using Python, import the `galileo.openai` module instead of the OpenAI `openai` module and use it to create your client. If you are using TypeScript, wrap your OpenAI client with the wrapper. Every call made through the wrapped client is then automatically logged, including:
- The input prompt
- The model used
- The response
- Timing information
- Token usage
- Other relevant metadata
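A minimal sketch of the Python pattern described above. The drop-in `galileo.openai` module is assumed to mirror the standard OpenAI client API; the package extra name in the comment and the model name are illustrative:

```python
# pip install "galileo[openai]"   (install command assumed; see Installation above)
from galileo.openai import openai  # drop-in replacement for the openai module

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is logged automatically: prompt, model, response, timing, tokens.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```

No other code changes are needed; the rest of your OpenAI usage stays the same.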
Sessions and traces
If you use the OpenAI wrapper by itself, it automatically creates a session and starts a new trace for you, adding the call as an LLM span. Each subsequent call is added as an LLM span on a new trace in the same session. The session is given an autogenerated name based on its content.


Streaming support
The OpenAI wrapper also supports streaming responses. When streaming, the wrapper logs the response as it streams in.

Combining with the log decorator
You can combine the OpenAI wrapper with the `log` decorator to create more complex traces:
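A sketch of the combined pattern, assuming `log` is importable from the top-level `galileo` package and wraps the function call in a parent span that nests the LLM span from the wrapped client:

```python
from galileo import log                 # decorator (import path assumed)
from galileo.openai import openai       # wrapped OpenAI module

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

@log  # the decorated function becomes a parent span in the trace
def make_joke(topic: str) -> str:
    # This LLM call is logged as a child span under make_joke's span.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Tell a joke about {topic}."}],
    )
    return response.choices[0].message.content

print(make_joke("compilers"))
```

This lets one trace capture your application logic alongside the LLM calls it makes.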
Benefits of using the OpenAI integration
- Zero-config logging: No need to add logging code throughout your application
- Complete visibility: All prompts and responses are automatically captured
- Minimal code changes: Change your import statement in Python, or create a wrapper in TypeScript. No other code changes are required.
- Automatic tracing: Creates spans and traces without manual setup
- Streaming support: Works with both regular and streaming responses
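To illustrate the streaming case listed above, a hedged sketch: with `stream=True` the wrapped client yields chunks as usual, and the wrapper is assumed to assemble and log the full response once the stream completes:

```python
from galileo.openai import openai  # wrapped OpenAI module

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Count to five."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # print tokens as they arrive
```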
Asynchronous OpenAI calls with Galileo
Galileo's Python SDK includes an OpenAI wrapper that currently supports only synchronous calls through the OpenAI client. It does not include built-in support for the `AsyncOpenAI` class from the official OpenAI Python library. As a result, asynchronous calls made via the `galileo.openai` wrapper won't automatically generate LLM spans or upload telemetry to Galileo.

You can still track async interactions by using the low-level `GalileoLogger` API directly. This requires importing and awaiting the OpenAI `AsyncOpenAI` client, wrapping each call with a call that adds an LLM span, and flushing the logger to send your traces.
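The steps above can be sketched as follows. The `GalileoLogger` method names (`start_trace`, `add_llm_span`, `conclude`, `flush`) and keyword arguments are assumptions based on the low-level API; the project and log stream names are placeholders:

```python
import asyncio

from galileo import GalileoLogger      # low-level logger (method names assumed)
from openai import AsyncOpenAI         # official OpenAI async client


async def main() -> None:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    logger = GalileoLogger(project="my-project", log_stream="my-log-stream")

    prompt = "Explain Newtonian physics in one sentence."
    logger.start_trace(input=prompt)

    # Await the async call yourself; nothing is logged automatically here.
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    output = response.choices[0].message.content

    # Manually record the call as an LLM span, then flush to upload the trace.
    logger.add_llm_span(
        input=prompt,
        output=output,
        model="gpt-4o-mini",
        num_input_tokens=response.usage.prompt_tokens,
        num_output_tokens=response.usage.completion_tokens,
    )
    logger.conclude(output=output)
    logger.flush()


asyncio.run(main())
```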