The OpenAI wrapper provides a simple way to automatically log all OpenAI API calls to Galileo. It wraps the official OpenAI Node.js client and intercepts each call, capturing requests and responses without any extra logging code.

Installation

npm install galileo openai
# or
yarn add galileo openai

Usage

import { OpenAI } from "openai";
import { init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

// Create a wrapped OpenAI client
const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

async function callOpenAI() {
  // Use the wrapped client as you normally would
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say hello world!" }],
  });

  console.log(response.choices[0].message.content);

  // Flush logs before exiting
  await flush();

  return response;
}

// Call the function
callOpenAI();

Advanced Usage

With Log Function Wrapper

You can use the OpenAI wrapper with the log function wrapper to create a workflow span with nested LLM calls:

import { OpenAI } from "openai";
import { log, init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

// This will automatically create an LLM span since we're using the `wrapOpenAI` wrapper
const callOpenAI = async (input: string) => {
  const result = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: input }],
  });
  return result;
};

async function runWorkflow() {
  const wrappedFunc = log({ name: "capitals workflow" }, async () => {
    const franceResult = await callOpenAI("What is the capital of France?");
    const germanyResult = await callOpenAI("What is the capital of Germany?");

    return {
      france: franceResult.choices[0].message.content,
      germany: germanyResult.choices[0].message.content,
    };
  });

  const result = await wrappedFunc();
  console.log(result);

  // Flush logs before exiting
  await flush();

  return result;
}

// Run the workflow
runWorkflow();

Multiple LLM Calls in a Workflow

You can make multiple LLM calls within a workflow span:

import { OpenAI } from "openai";
import { log, init, flush, wrapOpenAI } from "galileo";

// Initialize Galileo
init({
  projectName: "my-project",
  logStreamName: "development",
});

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

const getCapital = async (country: string) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: `What is the capital of ${country}?` }],
  });
  return response.choices[0].message.content;
};

async function runWorkflow() {
  const getCapitals = log({ name: "getCapitals", spanType: "workflow" }, async () => {
    const franceCapital = await getCapital("France");
    const germanyCapital = await getCapital("Germany");
    return { france: franceCapital, germany: germanyCapital };
  });

  const result = await getCapitals();
  console.log(result);

  // Flush logs before exiting
  await flush();

  return result;
}

// Run the workflow
runWorkflow();

Benefits of Using the Wrapper

  • Zero-config logging: No need to add logging code throughout your application
  • Complete visibility: All prompts and responses are automatically captured
  • Minimal code changes: Simply change your import statement
  • Automatic tracing: Creates spans and traces without manual setup
  • Streaming support: Works with both regular and streaming responses (see the sketch below)
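
To illustrate the streaming case, here is a minimal sketch that reuses the wrapped client from the Usage section; per the wrapper's streaming support noted above, the completed response is logged once the stream is consumed:

import { OpenAI } from "openai";
import { init, flush, wrapOpenAI } from "galileo";

init({
  projectName: "my-project",
  logStreamName: "development",
});

const openai = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));

async function streamOpenAI() {
  // Request a streaming completion through the wrapped client
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say hello world!" }],
    stream: true,
  });

  // Consume the stream; the wrapper can log the full response once the stream completes
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }

  // Flush logs before exiting
  await flush();
}

streamOpenAI();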

Asynchronous OpenAI Calls with Galileo

Galileo's TypeScript SDK includes an OpenAI wrapper that currently supports only synchronous calls through the OpenAI client. It does not yet include built-in support for the AsyncOpenAI class from the official OpenAI library. As a result, asynchronous calls made via the galileo.openai wrapper won't automatically generate LLM spans or upload telemetry to Galileo.

You can still track async interactions by using the low-level GalileoLogger API manually. This requires importing the OpenAI client and awaiting your calls as usual, wrapping each call with addLlmSpan (or using startTrace / conclude), and flushing the logger to send your traces.
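
Below is a minimal sketch of this manual approach. It assumes the camelCase GalileoLogger methods from the TypeScript SDK (startTrace, addLlmSpan, conclude, flush) with object-style parameters; exact signatures may vary between SDK versions, so check your installed version.

import { OpenAI } from "openai";
import { GalileoLogger } from "galileo";

// Plain (unwrapped) client: calls are awaited and logged manually
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const logger = new GalileoLogger({
  projectName: "my-project",
  logStreamName: "development",
});

async function askAsync(question: string) {
  // Open a trace for this interaction (parameter shape is an assumption)
  logger.startTrace({ input: question });

  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: question }],
  });
  const output = response.choices[0].message.content ?? "";

  // Record the awaited call as an LLM span on the open trace
  logger.addLlmSpan({
    input: [{ role: "user", content: question }],
    output,
    model: "gpt-4o",
    durationNs: (Date.now() - start) * 1_000_000,
  });

  // Conclude the trace and flush so the telemetry reaches Galileo
  logger.conclude({ output });
  await logger.flush();

  return output;
}

askAsync("What is the capital of France?").then(console.log);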