Overview

This guide shows you how to log spans to Galileo using the @log decorator in Python, or the log wrapper in TypeScript.

For this example, you will be using the async OpenAI SDK, which the Galileo OpenAI wrapper does not support. Instead, you will use the @log decorator or log wrapper to log function calls as LLM spans.

Before you start

To complete this how-to, you will need:

- A Galileo account, along with an API key, a project, and a log stream
- An OpenAI API key
- Python or Node.js installed, depending on your language

Install dependencies

To use Galileo, you need to install some package dependencies and configure environment variables.

1. Install Required Dependencies

Install the required dependencies for your app. If you are using Python, create a virtual environment using your preferred method, then install dependencies inside that environment:

pip install "galileo[openai]" python-dotenv
2. Create a `.env` file and add the following values

.env
GALILEO_API_KEY=your_galileo_api_key
GALILEO_PROJECT=your_project_name
GALILEO_LOG_STREAM=your_log_stream

OPENAI_API_KEY=your_openai_api_key
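Before running the app, it can help to fail fast if one of these variables is missing. The following is a minimal, optional sketch using only the standard library; the variable names match the `.env` file above, and `missing_env_vars` is a hypothetical helper, not part of the Galileo SDK:

```python
import os

# The environment variables this guide's .env file defines
REQUIRED_VARS = [
    "GALILEO_API_KEY",
    "GALILEO_PROJECT",
    "GALILEO_LOG_STREAM",
    "OPENAI_API_KEY",
]

def missing_env_vars(required: list[str]) -> list[str]:
    """Return the names of any required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]
```

You could call `missing_env_vars(REQUIRED_VARS)` at startup (after `load_dotenv()`) and raise an error if the result is non-empty, so a typo in `.env` surfaces immediately rather than as a failed API call later.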

Create the basic app to call OpenAI

1. Create a file for your application called `app.py` or `app.ts`.

2. Add the following code to call OpenAI to ask a question:

import os
import asyncio
import openai
from dotenv import load_dotenv

load_dotenv()

client = openai.AsyncOpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

async def prompt_open_ai(prompt: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # message.content is typed as optional, so guard against None before stripping
    return (response.choices[0].message.content or "").strip()

async def main():
    prompt = "Explain the following topic succinctly: Newton's First Law"
    response = await prompt_open_ai(prompt)
    print(response)

if __name__ == "__main__":
    asyncio.run(main())

If you are using TypeScript, you will also need to configure your code to use ESM. Add the following to your package.json file:

package.json
{
  "type": "module",
  ... // Existing contents
}
3. Run the app to ensure everything is working:

python app.py

You should see a description of Newton’s first law.

(.venv) ➜  python app.py   
Newton's First Law, also known as the Law of Inertia, states that an object
at rest will stay at rest and an object in motion will stay in motion with
the same speed and in the same direction, unless acted upon by an
unbalanced force. In simpler terms, it means that an object will keep doing
what it's currently doing until a force makes it do something different.
This law is fundamental to understanding motion and forces as it pertains
to physics.

Add simple logging with the Galileo log decorator or wrapper

Galileo has a @log decorator in Python, and a log wrapper in TypeScript, that logs function calls as spans. If a decorated or wrapped function is called while there is an active trace, its span is added to that trace. If there is no active trace, a new one is created for the span.

In this guide, you will be adding the decorator or wrapper to log the function that calls OpenAI.

1. Import the log decorator

At the top of your file, add an import for the log decorator:

from galileo import log
2. Decorate or wrap the function

Update the function definition to include the decorator or wrapper:

@log(span_type="llm", name="OpenAI GPT-4o-mini")
async def prompt_open_ai(prompt: str) -> str:

This will log the function as an LLM span using the span type parameter. You can read more about these in our span types documentation.

This also sets the name of the span to “OpenAI GPT-4o-mini”.

3. Run the Python app:

python app.py

When the app runs, the span is logged automatically, with the input set to the prompt and the output set to the returned response. The duration is also logged.

4. View the logged trace

From the Galileo Console, open the log stream for your project. You will see a trace with a single span containing the logged function call.

Select the trace to see a detailed view:

Select the OpenAI span to see the latency.

Your logging is now set up! You are ready to configure metrics for your project.

See also