Wrappers
OpenAI Wrapper
The OpenAI wrapper automatically logs all OpenAI API calls to Galileo. It wraps the official OpenAI Node.js client and intercepts each call, recording it as an llm span.
Installation
npm install galileo openai
# or
yarn add galileo openai
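The examples below assume your credentials are available as environment variables — typically GALILEO_API_KEY for Galileo and OPENAI_API_KEY for OpenAI; see the Getting Started guide for the full set of configuration options.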
Usage
import { OpenAI } from "openai";
import { init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

// Create a wrapped OpenAI client
const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

async function callOpenAI() {
  // Use the wrapped client as you normally would
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say hello world!" }],
  });

  console.log(response.choices[0].message.content);

  // Flush logs before exiting
  await flush();

  return response;
}

// Call the function
callOpenAI();
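The wrapped client exposes the same surface as the underlying OpenAI client, so streaming calls go through unchanged. Whether the streamed output is fully captured in the logged span depends on the wrapper's streaming support, so treat the following as a sketch; it reuses the openai client and flush from the example above.

// A sketch of streaming through the wrapped client, assuming the wrapper
// passes `stream: true` through to the underlying OpenAI client
async function streamOpenAI() {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say hello world, slowly!" }],
    stream: true,
  });

  // Print each token delta as it arrives
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }

  // Flush logs before exiting
  await flush();
}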
Advanced Usage
With Log Function Wrapper
You can use the OpenAI wrapper with the log function wrapper to create a workflow span with nested LLM calls:
import { OpenAI } from "openai";
import { log, init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

// This will automatically create an llm span since we're using the `wrapOpenAI` wrapper
const callOpenAI = async (input: string) => {
  const result = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ content: input, role: "user" }],
  });
  return result;
};

async function runWorkflow() {
  // `log` returns the wrapped function; invoke it to run the workflow
  const wrappedFunc = log({ name: "capitals workflow" }, async () => {
    const franceResult = await callOpenAI("What is the capital of France?");
    const germanyResult = await callOpenAI("What is the capital of Germany?");
    return {
      france: franceResult.choices[0].message.content,
      germany: germanyResult.choices[0].message.content,
    };
  });

  const result = await wrappedFunc();
  console.log(result);

  // Flush logs before exiting
  await flush();

  return result;
}

// Run the workflow
runWorkflow();
Multiple LLM Calls in a Workflow
You can make multiple LLM calls within a workflow span:
import { OpenAI } from "openai";
import { log, init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const getCapital = async (country: string) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: `What is the capital of ${country}?` }],
  });
  return response.choices[0].message.content;
};

async function runWorkflow() {
  const getCapitals = log(
    { name: "getCapitals", spanType: "workflow" },
    async () => {
      const franceCapital = await getCapital("France");
      const germanyCapital = await getCapital("Germany");
      return { france: franceCapital, germany: germanyCapital };
    }
  );

  const result = await getCapitals();
  console.log(result);

  // Flush logs before exiting
  await flush();

  return result;
}

// Run the workflow
runWorkflow();
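Because every call through the wrapped client produces its own llm span, you can also nest those calls under other span types. Below is a minimal sketch, building on the setup from the previous example, and assuming the log wrapper accepts a spanType of "tool"; the names lookupWeather and summarizeWeather are illustrative, so check the SDK reference for the span types your version supports.

// A sketch of mixing a tool span with a nested LLM call.
// `spanType: "tool"` and the function names here are assumptions for illustration.
const lookupWeather = log({ name: "lookupWeather", spanType: "tool" }, async () => {
  // Hypothetical tool logic; replace with a real lookup
  return "Sunny, 22°C in Paris";
});

const summarizeWeather = log(
  { name: "summarizeWeather", spanType: "workflow" },
  async () => {
    const weather = await lookupWeather();
    // Nested llm span, logged automatically by the wrapped client
    const response = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Summarize this forecast: ${weather}` }],
    });
    return response.choices[0].message.content;
  }
);

summarizeWeather().then(async (summary) => {
  console.log(summary);
  await flush();
});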