Overview

What you’ll build:

A simple LangChain-powered AI Agent that uses OpenAI’s language models and a custom tool, with all agent activities logged and monitored in Galileo.

What you’ll learn:

  • How to configure a LangChain Agent
  • How to integrate Galileo for observability and monitoring
  • How to structure tools and environment for scalable development

👀 Check out the full SDK Examples repository on GitHub!

Requirements

Environment Setup

Ingredients:

  • git
  • Python 3 (with the built-in venv module)
  • Package manager (pip or uv)

Steps:

  1. Clone the repository:

    git clone https://github.com/rungalileo/sdk-examples.git
    cd sdk-examples/python/agent/langchain-agent
    
  2. Create a virtual environment:

    On Windows:

    python -m venv venv
    venv\Scripts\activate

    On macOS/Linux, using the standard venv module:

    python -m venv venv
    source venv/bin/activate

    Or using uv (faster):

    uv venv venv
    source venv/bin/activate
  3. Install dependencies:

    Using pip:

    pip install -r requirements.txt

    Or using uv:

    uv pip install -r requirements.txt
    
  4. Set up your environment variables: copy the existing .env.example file and rename it to .env in your project directory, then fill in your keys:

    OPENAI_API_KEY=your-openai-api-key
    GALILEO_API_KEY=your-galileo-api-key
    
  • Replace your-openai-api-key and your-galileo-api-key with your actual keys.
  • This keeps your credentials secure and out of your code.
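
At startup the script reads these variables into the process environment. As a stdlib-only sketch of what that loading amounts to (the sample project more likely calls python-dotenv's load_dotenv(), but this shows the effect):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: apply each KEY=value line to os.environ
    unless the variable is already set. Illustrative only; the real
    script likely uses python-dotenv's load_dotenv() for this step."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines, comments, and lines without an '='
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

load_env_file()
```

After this runs, os.getenv("OPENAI_API_KEY") and os.getenv("GALILEO_API_KEY") return the values from your .env file.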

Understanding the Agent Architecture

🧠 Agent Core (main.py)

A single script defines:

  • Loading of secrets
  • Tool declaration
  • Agent instantiation
  • Galileo observability

🛠️ Tools

Simple @tool functions that the agent can call, such as:

@tool
def greet(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}! 👋"

🔍 Instrumentation (galileo_context + GalileoCallback)

The galileo_context tags all logs under a project and stream.

The GalileoCallback automatically traces agent behavior in Galileo.

Main Agent Workflow

Key Ingredients:

  • LangChain agent
  • OpenAI model
  • Galileo integration

How it works:

  1. Load .env variables.
  2. Declare tools.
  3. Wrap agent execution in galileo_context.
  4. Use GalileoCallback to trace the run.
  5. Print the agent’s response.
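
Put together, the five steps above amount to a script along these lines. This is a sketch, not the repository's exact main.py: the Galileo import paths and the create_react_agent constructor reflect common current LangChain/Galileo SDK conventions and may differ in your installed versions.

```python
from dotenv import load_dotenv                     # step 1: load .env secrets
from galileo import galileo_context               # import paths may vary by SDK version
from galileo.handlers.langchain import GalileoCallback
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent  # one common way to build an agent

load_dotenv()

@tool
def greet(name: str) -> str:                       # step 2: declare tools
    """Say hello to someone."""
    return f"Hello, {name}! 👋"

# step 3: wrap agent execution so all traces land in one project/stream
with galileo_context(project="langchain-docs", log_stream="my_log_stream"):
    llm = ChatOpenAI(model="gpt-4")
    agent = create_react_agent(llm, [greet])

    # step 4: pass GalileoCallback so the run is traced in Galileo
    result = agent.invoke(
        {"messages": [("user", "Greet Erin for me.")]},
        config={"callbacks": [GalileoCallback()]},
    )

    # step 5: print the agent's response
    print(result["messages"][-1].content)
```

Running this requires valid OPENAI_API_KEY and GALILEO_API_KEY values in your .env file.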

Running the Agent

Run your script using:

python main.py

Expected Output:

Agent Response:
Hello, Erin! 👋

Viewing Traces in Galileo

Steps:

  1. Log into Galileo.

  2. Open the langchain-docs project and the my_log_stream log stream.

  3. Inspect:

    • Prompts
    • Reasoning steps
    • Tool invocations
    • Outputs

Extending the Agent

Add New Tools

Define more @tool-decorated functions and include them in the agent.

Change Models

Swap out gpt-4 for another supported OpenAI model in ChatOpenAI.
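
Swapping models is a one-line change wherever the chat model is constructed. For instance (the replacement model name here is just an example; check OpenAI's current model list):

```python
from langchain_openai import ChatOpenAI

# Before
llm = ChatOpenAI(model="gpt-4")

# After: any other supported OpenAI chat model works the same way
llm = ChatOpenAI(model="gpt-4o-mini")
```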

Update Context

Change the project and log_stream in galileo_context for better trace organization.
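
For example, to route traces from a different agent into their own project and stream (names here are your choice; the import path may vary by SDK version):

```python
from galileo import galileo_context

with galileo_context(project="my-support-bot", log_stream="production"):
    ...  # agent execution here is traced under this project/stream
```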

Conclusion

Key Takeaways:

  • LangChain + Galileo makes AI agents traceable and observable
  • Using tools and context managers helps modularize and organize agent behavior
  • Monitoring enables better debugging and optimization

Happy building! 🚀

Common Issues and Solutions

API Key Issues

Problem: “Invalid API key” errors

Solution:

  • Double-check your .env file: the variables must be named exactly OPENAI_API_KEY and GALILEO_API_KEY, with no surrounding quotes or stray whitespace
  • Confirm the .env file is in the directory you run the script from

Galileo Connection Issues

Problem: Traces aren’t showing up in Galileo

Solution:

  • Confirm your API key is valid
  • Check internet connectivity
  • Ensure flush() is being called at the end of execution
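
If traces are buffered but never sent, flushing explicitly before the process exits usually resolves it. In recent Galileo Python SDKs this looks roughly like the following (the exact method and import path may differ in your version):

```python
from galileo import galileo_context

# ... agent execution ...

galileo_context.flush()  # send any buffered traces before the process exits
```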