OpenAI Wrapper
The OpenAI wrapper provides a simple way to automatically log all OpenAI API calls to Galileo. It wraps the official OpenAI Node.js client and intercepts every call so that prompts and responses are logged without extra instrumentation code.
Installation
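Assuming the SDK is published on npm as `galileo`, install it alongside the official OpenAI client:

```bash
npm install galileo openai
```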
Usage
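A minimal sketch of the wrapper in use, assuming the SDK exports a `wrapOpenAI` helper and a `flush` function, and that Galileo credentials are configured via environment variables:

```typescript
import { OpenAI } from "openai";
import { wrapOpenAI, flush } from "galileo";

// Wrap the official client; every call made through it is logged to Galileo.
const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Say hello!" }],
});
console.log(response.choices[0].message.content);

// Upload any buffered traces before the process exits.
await flush();
```

Apart from the changed import and the `wrapOpenAI` call, the client is used exactly like the plain OpenAI client.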
Advanced Usage
With Log Function Wrapper
You can use the OpenAI wrapper with the `log` function wrapper to create a workflow span with nested LLM calls:
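A sketch of this pattern, assuming `log` takes span options and a function and returns a wrapped version of it; the option names (`spanType`, `name`) are assumptions, so check them against the SDK reference:

```typescript
import { OpenAI } from "openai";
import { log, wrapOpenAI, flush } from "galileo";

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

// `log` records the function as a workflow span; the wrapped OpenAI call
// inside it is logged as a nested LLM span.
const answerQuestion = log(
  { spanType: "workflow", name: "answer-question" },
  async (question: string) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: question }],
    });
    return response.choices[0].message.content;
  }
);

console.log(await answerQuestion("What is the capital of France?"));
await flush();
```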
Multiple LLM Calls in a Workflow
You can make multiple LLM calls within a workflow span:
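For example (same assumed API as above), each call through the wrapped client becomes its own LLM span nested under the single workflow span:

```typescript
import { OpenAI } from "openai";
import { log, wrapOpenAI, flush } from "galileo";

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const summarizeAndTranslate = log(
  { spanType: "workflow", name: "summarize-and-translate" },
  async (text: string) => {
    // First LLM span: summarize the input.
    const summary = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Summarize in one sentence: ${text}` }],
    });

    // Second LLM span: translate the summary.
    const translation = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [
        {
          role: "user",
          content: `Translate to French: ${summary.choices[0].message.content}`,
        },
      ],
    });
    return translation.choices[0].message.content;
  }
);

console.log(await summarizeAndTranslate("Galileo automatically logs LLM calls."));
await flush();
```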
Benefits of Using the Wrapper
- Zero-config logging: No need to add logging code throughout your application
- Complete visibility: All prompts and responses are automatically captured
- Minimal code changes: Simply change your import statement
- Automatic tracing: Creates spans and traces without manual setup
- Streaming support: Works with both regular and streaming responses
Asynchronous OpenAI Calls with Galileo
Galileo’s TypeScript SDK includes an OpenAI wrapper that currently supports only synchronous calls through the OpenAI client. It does not yet include built-in support for the `AsyncOpenAI` class from the official OpenAI library. As a result, asynchronous calls made via the `galileo.openai` wrapper won't automatically generate LLM spans or upload telemetry to Galileo.
You can still track async interactions by manually using the low-level `GalileoLogger` API. This requires awaiting the OpenAI client calls yourself, wrapping each call with `addLlmSpan` (or using `startTrace` / `conclude`), and flushing the logger to send your traces.
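A sketch of this manual approach, assuming the TypeScript `GalileoLogger` exposes `startTrace`, `addLlmSpan`, `conclude`, and `flush`; the object-style options and field names shown here are assumptions, so verify them against the SDK reference:

```typescript
import { OpenAI } from "openai";
import { GalileoLogger } from "galileo";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const logger = new GalileoLogger({
  projectName: "my-project",       // assumed option names
  logStreamName: "my-log-stream",
});

async function ask(question: string): Promise<string> {
  logger.startTrace({ input: question });

  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: question }],
  });
  const output = response.choices[0].message.content ?? "";

  // Record the awaited call as an LLM span on the current trace.
  logger.addLlmSpan({
    input: [{ role: "user", content: question }],
    output,
    model: "gpt-4o",
    durationNs: (Date.now() - start) * 1_000_000,
  });

  // Close the trace, then flush to send it to Galileo.
  logger.conclude({ output });
  await logger.flush();
  return output;
}

console.log(await ask("What is the capital of France?"));
```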