The OpenAI wrapper is the simplest way to integrate Galileo logging into your application. By importing Galileo's OpenAI wrapper instead of the OpenAI library directly, you can automatically log all prompts, responses, and usage statistics without writing any additional logging code.

Installation

First, make sure you have the Galileo SDK installed:

pip install galileo

Setup

Create or update a .env file with your Galileo API key and other optional settings:

# Scoped to an Organization
GALILEO_API_KEY=...

# Optional, set a default Project
GALILEO_PROJECT=...
# Optional, set a default Log Stream
GALILEO_LOG_STREAM=... 
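
The Galileo SDK reads these values from the environment. If your application does not load the .env file automatically, one option is the third-party python-dotenv package (an assumption here; any mechanism that populates the environment works):

# Load the variables defined in .env into os.environ
from dotenv import load_dotenv

load_dotenv()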

Basic Usage

Instead of importing OpenAI directly, import it from Galileo:

import os
from galileo.openai import openai

# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def call_openai():
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o"
    )
    return chat_completion.choices[0].message.content

# This will create a single span trace with the OpenAI call
response = call_openai()
print(response)

This example automatically produces a single-span trace in the Log Stream view of the Galileo UI. The wrapper handles all the logging for you, capturing the following (a rough sketch follows the list):

  • The input prompt
  • The model used
  • The response
  • Timing information
  • Token usage
  • Other relevant metadata
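
As an illustration, the captured data for a single call resembles the sketch below. The field names and values here are hypothetical, chosen for readability; they are not the actual Galileo schema.

# Hypothetical sketch of the captured data -- not the real Galileo schema
span = {
    "input": [{"role": "user", "content": "Say this is a test"}],
    "model": "gpt-4o",
    "output": "This is a test.",
    "duration_ms": 412,                                       # timing information
    "usage": {"prompt_tokens": 12, "completion_tokens": 5},   # token usage
}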

Using with Context Manager

For more control over when traces are flushed to Galileo, you can use the galileo_context context manager:

import os
from galileo import galileo_context
from galileo.openai import openai

# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# This will log to the specified project and log stream
with galileo_context(project="my-project", log_stream="my-log-stream"):
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o"
    )
    print(chat_completion.choices[0].message.content)

This ensures that traces are flushed when the context manager exits, which is particularly useful in long-running applications, such as Streamlit apps, where the process never terminates on its own.
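
As a sketch of that pattern, a Streamlit app might wrap each interaction in the context manager so the trace is flushed as soon as the block exits (the Streamlit specifics here are illustrative, not required by the SDK):

import os
import streamlit as st
from galileo import galileo_context
from galileo.openai import openai

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

prompt = st.text_input("Ask a question")
if prompt:
    # The trace for this interaction is flushed when the block exits
    with galileo_context(project="my-project", log_stream="my-log-stream"):
        chat_completion = client.chat.completions.create(
            messages=[{"role": "user", "content": prompt}],
            model="gpt-4o",
        )
        st.write(chat_completion.choices[0].message.content)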

Streaming Support

The OpenAI wrapper also supports streaming responses. When streaming, the wrapper will log the response as it streams in:

import os
from galileo.openai import openai

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}],
    model="gpt-4o",
    stream=True,
)

# This will create a single span trace with the OpenAI call
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
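
If your application also needs the complete text after streaming, you can accumulate the deltas yourself; this is plain Python and does not change what the wrapper logs:

# Alternative loop: build up the full response while printing each chunk
full_response = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    full_response += delta
    print(delta, end="")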

Explicit Flushing

In some cases (like long-running processes), it may be necessary to explicitly flush the trace to upload it to Galileo:

import os
from galileo.openai import openai
from galileo import galileo_context

galileo_context.init(project="your-project-name", log_stream="your-log-stream-name")

# Initialize the Galileo wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def call_openai():
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o"
    )
    return chat_completion.choices[0].message.content

# This will create a single span trace with the OpenAI call
call_openai()

# This will upload the trace to Galileo
galileo_context.flush()
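
For processes that may exit through several code paths, one option (an assumption, not an SDK requirement) is to register the flush as an exit handler with Python's standard atexit module:

import atexit

# Flush any buffered traces when the interpreter shuts down
atexit.register(galileo_context.flush)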

Advanced Usage

The OpenAI wrapper is intended to support all the same functionality as the original OpenAI library, including:

  • Chat completions
  • Text completions
  • Embeddings
  • Image generation
  • Audio transcription and translation

For each of these, the wrapper will automatically log the relevant information to Galileo, making it easy to track and analyze your AI application’s performance.
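
For example, an embeddings request goes through the same wrapped client. The model name below is a standard OpenAI embedding model; this sketch assumes the wrapper proxies the embeddings endpoint as described above:

import os
from galileo.openai import openai

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# The wrapped client logs this embeddings call like any other request
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox jumps over the lazy dog",
)
print(len(response.data[0].embedding))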

Combining with the @log Decorator

You can combine the OpenAI wrapper with the @log decorator to create more complex traces:

import os
from galileo.openai import openai
from galileo import log

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def call_openai(prompt):
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4o"
    )
    return chat_completion.choices[0].message.content

@log
def make_nested_call():
    first_result = call_openai("Tell me about the Roman Empire")
    second_result = call_openai(f"Summarize this: {first_result}")
    return second_result

# This will create a trace with a workflow span and two nested LLM spans
response = make_nested_call()
print(response)

Benefits of Using the Wrapper

  • Zero-config logging: No need to add logging code throughout your application
  • Complete visibility: All prompts and responses are automatically captured
  • Minimal code changes: Simply change your import statement
  • Automatic tracing: Creates spans and traces without manual setup
  • Streaming support: Works with both regular and streaming responses