Galileo wrappers automatically capture prompts, responses, and performance metrics without requiring you to add explicit logging code throughout your application.

Just import the wrapper in place of the original library (for example, openai); your existing calls stay the same while Galileo captures the traffic.
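
For instance, with the OpenAI client the swap is a one-line import change (a minimal sketch; the commented-out line stands in for whatever import your code used before):

# Before: the stock OpenAI library
# import openai

# After: the Galileo-wrapped drop-in; the rest of your code is unchanged
from galileo.openai import openai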

Available Wrappers

Galileo currently supports the following wrappers:

  • OpenAI Wrapper - A drop-in replacement for the OpenAI library that automatically logs all prompts, responses, and statistics.
  • LangChain Integration - A callback-based integration for LangChain that logs all LLM interactions within your LangChain workflows.

Basic Usage

OpenAI Wrapper

import os
from galileo.openai import openai

# Initialize the Galileo-wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

def call_openai():
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o"
    )
    return chat_completion.choices[0].message.content

# This will create a single span trace with the OpenAI call
response = call_openai()
print(response)
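
Note that the wrapper also needs Galileo credentials in addition to your OpenAI key. A minimal sketch, assuming the GALILEO_API_KEY, GALILEO_PROJECT, and GALILEO_LOG_STREAM environment variables; verify the exact names against your SDK version:

import os

# Assumed variable names -- confirm against your installed Galileo SDK
os.environ["GALILEO_API_KEY"] = "your-galileo-api-key"  # authenticates the logger
os.environ["GALILEO_PROJECT"] = "my-project"            # optional default project
os.environ["GALILEO_LOG_STREAM"] = "my-log-stream"      # optional default log stream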

LangChain Integration

from galileo.handlers.langchain import GalileoCallback
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Create a callback handler
callback = GalileoCallback()

# Initialize the LLM with the callback
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7, callbacks=[callback])

# Create a message with the user's query
messages = [HumanMessage(content="What is LangChain and how is it used with OpenAI?")]

# Make the API call
response = llm.invoke(messages)

print(response.content) 
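
If you prefer not to attach the handler to the model itself, LangChain also accepts callbacks per call. Continuing the example above, pass the handler through invoke's standard config argument:

# Equivalent: scope the handler to a single call instead of the whole model
response = llm.invoke(messages, config={"callbacks": [callback]})
print(response.content)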

Alternative Methods of Logging

If you’re using an LLM library that doesn’t have a dedicated Galileo wrapper, you can still log your application using:

  1. The @log Decorator - Add the @log decorator to functions that call LLMs to automatically capture inputs and outputs (see the first sketch after this list).
  2. Direct Use of the GalileoLogger Class - For more control, you can use the base logger class directly (see the second sketch).
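
A minimal sketch of the decorator approach. The function name and body here are illustrative; the log decorator itself is importable from the top-level galileo package:

from galileo import log

@log
def ask_llm(question: str) -> str:
    # Call any LLM client here; the decorator records the function's
    # input and output as a span on the current trace.
    return "stub response for illustration"

answer = ask_llm("Say this is a test")

And a condensed sketch of the direct logger. The method and parameter names follow the SDK's documented surface, but verify them against your installed version:

from galileo import GalileoLogger

logger = GalileoLogger(project="my-project", log_stream="my-log-stream")
trace = logger.start_trace("Say this is a test")

# Record one LLM call by hand, including token and latency statistics
logger.add_llm_span(
    input="Say this is a test",
    output="Hello, this is a test",
    model="gpt-4o",
    num_input_tokens=10,
    num_output_tokens=3,
    total_tokens=13,
    duration_ns=1000,
)

logger.conclude(output="Hello, this is a test", duration_ns=1000)
logger.flush()  # upload the trace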

For detailed information on these alternative logging methods, see the Python SDK Overview.

Using with Context Manager

All wrappers work seamlessly with the galileo_context context manager for more control over trace management:

import os
from galileo import galileo_context
from galileo.openai import openai

# Initialize the Galileo-wrapped OpenAI client
client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# This will log to the specified project and log stream
with galileo_context(project="my-project", log_stream="my-log-stream"):
    chat_completion = client.chat.completions.create(
        messages=[{"role": "user", "content": "Say this is a test"}],
        model="gpt-4o"
    )
    print(chat_completion.choices[0].message.content)
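
The same applies to the LangChain integration. A sketch reusing the callback from earlier (the model choice is illustrative):

from galileo import galileo_context
from galileo.handlers.langchain import GalileoCallback
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", callbacks=[GalileoCallback()])

# Callback traces inside the block go to the same project and log stream
with galileo_context(project="my-project", log_stream="my-log-stream"):
    response = llm.invoke([HumanMessage(content="Say this is a test")])
    print(response.content)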