Overview

This tutorial guides you through creating and using a Session in Galileo, with a simple LLM-driven example that you can expand to multiple agents and data sources. By the end of this guide, you will know how to:

  1. Initialize a logging session
  2. Add events to your session
  3. Inspect the Session in the Galileo Console to see all related Traces and Spans

Starting and flushing the session context differs slightly depending on whether you manage sessions automatically or manually. We’ll cover both approaches below.

Prerequisites

  • Galileo Account: Ensure you have signed up for a Galileo account. This should provide you with the following values:
    • GALILEO_API_KEY: Your API key
    • GALILEO_PROJECT: The name of your Galileo Project
    • GALILEO_LOG_STREAM: The log stream where you will save your sessions
  • OpenAI API Key: This example will use OpenAI as the underlying LLM, so you will need an API key from them.

In addition, this tutorial assumes you are familiar with:

  • Simple LLM Apps, and making simple OpenAI completion calls using Python or TypeScript
  • The GalileoLogger class from the Python or TypeScript SDK

Project setup

Let’s take a moment to prepare the development environment. If you already have a project set up with Galileo, LangChain, and LangGraph, you can skip right to Manage a Session. If not, here’s an abbreviated quickstart:

1. Install dependencies

We’ll need the Galileo Python or TypeScript SDK, LangChain, LangGraph, OpenAI, and dotenv to pull in variables from your .env file. Let’s start by installing them:

pip install "galileo[openai]" langchain langchain-openai langgraph python-dotenv 

2. Create a .env file

Next, create a .env file and add in the following variables:

.env
# Galileo properties
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name

# OpenAI properties
OPENAI_API_KEY=your-openai-api-key

3. Create your application logic file

Finally, create a main script file (e.g. main.py or main.ts) where you’ll add and run your application logic.

Now we can dive in.

Manage a Session

Recall our objectives from earlier? We’ll build a simple application and use it to work through each step. If you’re in a hurry, you can jump to the full code sample here, then return to see how it was put together.

Steps

1. Create a simple agent

In your main script, import the following dependencies. Let’s begin by creating a very simple agent using LangGraph and OpenAI:

from time import time
from dotenv import load_dotenv

# Galileo dependencies
from galileo import GalileoLogger
from galileo.handlers.langchain import GalileoCallback

# LangChain and LangGraph dependencies
from langgraph.prebuilt import create_react_agent
from langchain.schema.runnable.config import RunnableConfig

# Load `.env` variables
load_dotenv(override=True)

# Create a simple assistant for our test (or import one). You can also provide
# your agent with tools: the session will log their usage
simple_agent = create_react_agent(
    name="simple_agent",
    model="openai:o3-mini",  # you can choose any OpenAI model here
    prompt="You are a friendly assistant that answers the user's questions",
    tools=[],  # (OPTIONAL) provide tools to your agent
)

We’ll see GalileoCallback and RunnableConfig in action later. For now, let’s move on to the next step.

2. Create a Logger Instance

We’ll be using the GalileoLogger to manage our logging session. Let’s create one next:

# Create a GalileoLogger instance for our session
logger = GalileoLogger()

3. Start a logging session

Our simple application will have a main function where everything happens. The first thing we will do in this function is start up a logging session. This will prepare the logger to group all captured events under a single session.

Below, we give the session a unique name and external id. The name helps us find the session more easily in the Galileo Console. The external id links this session to an identifier from outside Galileo: for example, a conversation ID created inside your chatbot app.

def main():
    """Main application logic"""

    # start a logging session
    external_id = f"custom_id-{int(time())}"
    logger.start_session(name="Logger Session Tutorial", external_id=external_id)

Treat logger.start_session like a lifecycle event, and call it before any code you want to monitor. The name and external id arguments are optional but recommended.
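If you need unique external ids, a timestamp works well for sequential runs, while a UUID avoids collisions when sessions start concurrently. A minimal sketch, independent of the Galileo SDK (`make_external_id` is a hypothetical helper, not part of the SDK):

```python
from time import time
from uuid import uuid4

def make_external_id(prefix: str = "custom_id") -> str:
    # Combine a prefix with a random UUID so concurrent sessions never collide
    return f"{prefix}-{uuid4().hex}"

# The tutorial's simpler, timestamp-based variant:
timestamped_id = f"custom_id-{int(time())}"
```

Either value can be passed as `external_id` to `logger.start_session`.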

4. Add your LLM logic

Now you can interact with your LLM. Our simple application will invoke the LLM with two questions; each question/answer exchange generates a Trace with child spans in our session. We will also pass a callback handler, which LangChain will call after each LLM invocation.

Here’s our full main function: you can make this part as complex as you like!

def main():
    """Main application logic"""

    # start a logging session
    external_id = f"custom_id-{int(time())}"
    logger.start_session(name="Logger Session Tutorial", external_id=external_id)

    # Here's what we will ask the LLM:
    prompts = [
        "Hello! How many minutes are in a year?",
        "Hello! How far is an Astronomical Unit in kilometers?",
    ]

    # Create a LangChain Runnable config object with a LangGraph callback handler:
    # We will supply the logger instance to ensure that it generates traces in the
    # correct session
    agent_config = RunnableConfig(callbacks=[GalileoCallback(galileo_logger=logger)])

    for prompt in prompts:
        # Invoke the LLM with our question:
        response = simple_agent.invoke(
            input={"messages": [{"role": "user", "content": prompt}]},
            config=agent_config,  # pass the RunnableConfig here
        )
        # Print out the LLM's response to confirm that this code block ran:
        print("Model response:", response["messages"][-1].content.strip())
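The `response["messages"][-1].content` access above assumes the agent returns a dict containing a non-empty `messages` list. A defensive helper can guard against an empty response; this sketch uses a stand-in message class so it runs without LangGraph (`DemoMessage` and `last_message_text` are illustrative, not SDK names):

```python
from dataclasses import dataclass

@dataclass
class DemoMessage:  # stand-in for a LangChain message object
    content: str

def last_message_text(response: dict) -> str:
    # Return the final message's content, stripped; "" when no messages exist
    messages = response.get("messages") or []
    return messages[-1].content.strip() if messages else ""

demo = {"messages": [DemoMessage("  There are 525,600 minutes in a year.  ")]}
```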

The GalileoCallback handler

GalileoCallback is a callback handler built specifically for LangChain. LangChain invokes it behind the scenes after each LLM call, and the handler sends the most recently captured traces to the Galileo Console; your LLM logic determines which traces are generated and captured.
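Conceptually, the callback pattern looks like the toy handler below: hooks fire around each LLM call and the handler records what happened. This is illustrative only, not the real GalileoCallback internals, which forward captured data to a GalileoLogger instead of a list:

```python
# Illustrative only: a toy handler showing the callback pattern LangChain uses.
class RecordingHandler:
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str):
        # Fired before the LLM call; record the incoming prompt
        self.events.append(("start", prompt))

    def on_llm_end(self, output: str):
        # Fired after the LLM call; record the model's output
        self.events.append(("end", output))

handler = RecordingHandler()
handler.on_llm_start("Hello! How many minutes are in a year?")
handler.on_llm_end("There are 525,600 minutes in a year.")
```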

Full Code Sample

Here’s everything we have done so far:
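Assembled from the steps above into a single main.py (the `if __name__ == "__main__"` guard is added so the script runs when invoked directly):

```python
from time import time
from dotenv import load_dotenv

# Galileo dependencies
from galileo import GalileoLogger
from galileo.handlers.langchain import GalileoCallback

# LangChain and LangGraph dependencies
from langgraph.prebuilt import create_react_agent
from langchain.schema.runnable.config import RunnableConfig

# Load `.env` variables
load_dotenv(override=True)

# Create a simple assistant for our test (or import one)
simple_agent = create_react_agent(
    name="simple_agent",
    model="openai:o3-mini",  # you can choose any OpenAI model here
    prompt="You are a friendly assistant that answers the user's questions",
    tools=[],  # (OPTIONAL) provide tools to your agent
)

# Create a GalileoLogger instance for our session
logger = GalileoLogger()

def main():
    """Main application logic"""

    # Start a logging session
    external_id = f"custom_id-{int(time())}"
    logger.start_session(name="Logger Session Tutorial", external_id=external_id)

    # Here's what we will ask the LLM:
    prompts = [
        "Hello! How many minutes are in a year?",
        "Hello! How far is an Astronomical Unit in kilometers?",
    ]

    # Supply the logger instance so traces land in the correct session
    agent_config = RunnableConfig(callbacks=[GalileoCallback(galileo_logger=logger)])

    for prompt in prompts:
        # Invoke the LLM with our question:
        response = simple_agent.invoke(
            input={"messages": [{"role": "user", "content": prompt}]},
            config=agent_config,  # pass the RunnableConfig here
        )
        # Print out the LLM's response to confirm that this code block ran:
        print("Model response:", response["messages"][-1].content.strip())

if __name__ == "__main__":
    main()
```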

Run your script

That’s all the code: we call logger.start_session before starting an LLM chat session, and supply GalileoCallback to ensure our traces get sent to the Galileo Console.

Now let’s run the script:

python main.py 

You should see the LLM’s response in your terminal! You can also head to the Galileo Console to view the newly created session (shown below).

View your session

Now that you’ve logged a session, it’s time to view results.

1. Log in to the Galileo Console and select your Log Stream

Head over to the Galileo Console and log in.

On your dashboard, select the Log Stream where you were sending your session logs. If you didn’t specify a unique or new log stream name, you will find the logs in your default Log Stream.

2. Select your session

Selecting the log stream will bring you to its event records. All logs will be grouped by Session, though you can use the control near the top-left of your screen to change the log stream’s event grouping:

Your session should be visible in the table below the controls, especially if you gave it a recognizable name. Select it to view the traces.

3. View your session

Once you select your session, you can see the Traces you captured from your test run as a flowchart. Any tools that were used will also show up as individual Spans.

Select the nodes of the flowchart to see their inputs and outputs on the right edge of your screen.

4. Optional: View individual Spans

Each message from the user and response from the LLM forms a single trace; you can view the contents here in a familiar format, along with other details like tool calls. Just select the Messages tab (shown in the image below) to see a list of traces in the session, along with their child spans. You can select a span to see metrics and other details on the right edge of the screen (not pictured).

Additional Considerations

Remember to always use the same GalileoLogger instance across your project. This ensures that all captured events are placed in the same session. You can achieve this in a few ways:

  1. Export your logger instance from a separate module, so that your application uses a singleton instance.

  2. Use the TypeScript SDK’s getLogger function, or the Python SDK’s galileo_context context manager for a consistent reference:

    from galileo import galileo_context

    # Create a new session (optional arguments include name and external_id)
    galileo_context.start_session()

    # Application logic follows

    # Flush the session (not needed if you use GalileoCallback or
    # `with galileo_context():`)
    galileo_context.flush()
    
  3. You can also add Traces wherever you see fit. A Trace might represent a question asked to your LLM, and the response generated for it — as well as any tools used! Galileo will generate traces for you, but you can also create new ones by using your logger instance:

    question = "What is the meaning of plenipotentiary?"
    logger.start_trace(input=question)
    logger.add_llm_span(
        input=question,
        output="Plenipotentiary means 'Invested with full power'",
    )
    logger.conclude()  # end the trace
    
    You can learn more about traces and how to use them here.
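The singleton approach in option 1 can be sketched with a memoized factory. In this minimal sketch a stand-in object replaces GalileoLogger so the pattern runs on its own; in your project, the factory would return `GalileoLogger()` instead:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_logger():
    # In your project: from galileo import GalileoLogger; return GalileoLogger()
    return object()  # stand-in so the pattern is runnable on its own

# Every caller receives the same instance:
a, b = get_logger(), get_logger()
```

Because `lru_cache(maxsize=1)` memoizes the first call, every module that imports and calls `get_logger` shares one instance, so all captured events land in the same session.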

Conclusion

In this tutorial, you learned how to:

  1. Create a logging session with the GalileoLogger class
  2. Manually start your own session with the logger.start_session() method
  3. View your sessions in the Galileo Console.

Next Steps

For a more detailed walkthrough of a multi-agent application, take a look at Monitoring LangChain Agents with Galileo. You can also learn more about using Galileo’s metrics to gain more insight about your AI application.