Overview
When working with OpenAI’s API, it’s important to set up your environment and client correctly to ensure secure and efficient API calls. This guide shows you how to create a basic integration using Galileo’s OpenAI client wrapper.
In this guide you will:
- Set up a project with Galileo
- Create a chat client using the Galileo OpenAI wrapper
The Galileo OpenAI wrapper currently only supports the synchronous chat completions API.
Before you start
To complete this how-to, you will need:
- A Galileo account and API key
- An OpenAI API key
- Python installed, with your preferred tool for managing virtual environments
Install dependencies
To use Galileo, you need to install some package dependencies and configure environment variables.
Install Required Dependencies
Install the required dependencies for your app. If you are using Python, create a virtual environment using your preferred method, then install the dependencies inside that environment:

```shell
pip install "galileo[openai]" python-dotenv
```
Create a .env file, and add the following values:

```env
# Your Galileo API key
GALILEO_API_KEY="your-galileo-api-key"

# Your Galileo project name
GALILEO_PROJECT="your-galileo-project-name"

# The name of the Log stream you want to use for logging
GALILEO_LOG_STREAM="your-galileo-log-stream"

# Provide the console URL below if you are using a custom
# deployment, rather than the free tier at app.galileo.ai.
# This will look something like "console.galileo.yourcompany.com".
# GALILEO_CONSOLE_URL="your-galileo-console-url"

# OpenAI properties
OPENAI_API_KEY="your-openai-api-key"

# Optional. The base URL of your OpenAI deployment.
# Leave this commented out if you are using the default OpenAI API.
# OPENAI_BASE_URL="your-openai-base-url-here"

# Optional. Your OpenAI organization.
# OPENAI_ORGANIZATION="your-openai-organization-here"
```
This assumes you are using a free Galileo account. If you are using a custom deployment, you will also need to uncomment and set the URL of your Galileo Console:

```env
GALILEO_CONSOLE_URL="your-galileo-console-url"
```
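Before running the app, you can sanity-check that the required variables are actually set. This is a minimal sketch using only the standard library; `missing_vars` is a hypothetical helper, and the variable names match the .env file above:

```python
import os

# The variables the Galileo wrapper and OpenAI client expect to find
REQUIRED = ["GALILEO_API_KEY", "GALILEO_PROJECT", "GALILEO_LOG_STREAM", "OPENAI_API_KEY"]

def missing_vars(env=os.environ):
    """Return the names of any required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        raise SystemExit("Missing environment variables: " + ", ".join(missing))
```

Run this after `load_dotenv()` (or after exporting the variables in your shell) to fail fast with a clear message instead of an authentication error deep inside an API call.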
Create a chat client using the Galileo OpenAI wrapper
Create a file for your application called app.py.
Add code to call OpenAI
Add the following code to your application file:

```python
from galileo.openai import openai
from dotenv import load_dotenv

# Load the Galileo and OpenAI environment variables
load_dotenv()

# Create the Galileo-wrapped OpenAI client
client = openai.OpenAI()

# Define a prompt
prompt = "Explain the following topic succinctly: Newton's First Law"

# Get a response from OpenAI
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# Print the response
print(response.choices[0].message.content.strip())
```
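The `messages` parameter accepts a full conversation, not just a single user turn. The sketch below shows one way to assemble a multi-turn list you could pass to `client.chat.completions.create`; `build_messages` is a hypothetical helper, not part of the Galileo or OpenAI libraries:

```python
def build_messages(system_prompt, history, user_input):
    """Assemble a messages list for the chat completions API.

    history is a list of (user_text, assistant_text) pairs from earlier turns.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in history:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_messages(
    "You are a concise physics tutor.",
    [("What is inertia?", "Inertia is an object's resistance to changes in its motion.")],
    "How does that relate to Newton's First Law?",
)
```

Passing the returned list as the `messages` argument works the same way as the single-message call above.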
Run the app
Run the app from your terminal with `python app.py`. When the app runs, the span will be logged automatically, with the input as the prompt and the output as the returned response. The duration and number of tokens used will also be logged.
View the logged trace
From the Galileo Console, open the Log stream for your project. You will see a trace with a single span containing the logged function call.
Your logging is now set up! You are ready to configure metrics for your project.
See also