LangChain Integration
The Galileo LangChain integration allows you to automatically log all LangChain interactions with LLMs, including prompts, responses, and performance metrics. This integration works through LangChain’s callbacks API, making it easy to add logging to your existing LangChain applications with minimal code changes.
Installation
First, make sure you have the Galileo SDK and LangChain installed:
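For example, with pip (assuming the SDK is published on PyPI as `galileo`; the OpenAI package is only needed if you use the examples below):

```bash
pip install galileo langchain langchain-openai
```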
Setup
Create or update a `.env` file with your Galileo API key and other optional settings:
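For example (the variable names below are an assumption based on common Galileo SDK conventions; check your Galileo console for the exact values):

```
GALILEO_API_KEY=your-api-key-here
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=my_log_stream
```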
Basic Usage
The integration is based on the `GalileoCallback` class, which implements LangChain’s callback interface. To use it, simply create an instance of the callback and pass it to your LangChain components:
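A minimal sketch, assuming the callback is importable from `galileo.handlers.langchain` and that you are using the `langchain-openai` chat model:

```python
from galileo.handlers.langchain import GalileoCallback
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Create the Galileo callback; it logs to the project and log stream
# configured in your environment.
callback = GalileoCallback()

# Attach the callback to the chat model so every call is traced.
llm = ChatOpenAI(model="gpt-4o", callbacks=[callback])

response = llm.invoke([HumanMessage(content="What is LangChain?")])
print(response.content)
```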
When initializing the `GalileoCallback`, you can optionally specify a Galileo logger instance or control trace behavior:
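For instance (the parameter names below are an assumption; verify them against the current SDK reference):

```python
from galileo import GalileoLogger
from galileo.handlers.langchain import GalileoCallback

# Explicit logger targeting a specific project and log stream.
logger = GalileoLogger(project="my-project", log_stream="my-log-stream")

callback = GalileoCallback(
    galileo_logger=logger,    # use this logger instead of the default
    start_new_trace=True,     # start a fresh trace for each root run
    flush_on_chain_end=True,  # flush traces to Galileo when a chain finishes
)
```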
Using with LangChain Chains
You can also use the callback with LangChain chains. Make sure to pass the callback to both the LLM and the chain:
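A sketch using the LCEL pipe syntax (the prompt and model are illustrative; the Galileo import path is an assumption):

```python
from galileo.handlers.langchain import GalileoCallback
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

callback = GalileoCallback()

prompt = ChatPromptTemplate.from_template("Explain {topic} in one paragraph.")
llm = ChatOpenAI(model="gpt-4o", callbacks=[callback])

chain = prompt | llm

# Pass the callback at invocation time as well, so the chain run itself
# (not just the LLM call inside it) is captured in the trace.
result = chain.invoke({"topic": "callbacks"}, config={"callbacks": [callback]})
print(result.content)
```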
Advanced Usage
The `GalileoCallback` captures various LangChain events, including:
- LLM starts and completions
- Chat model interactions
- Chain executions
- Tool calls
- Retriever operations
- Agent actions
For each of these events, the callback logs relevant information to Galileo, such as:
- Input prompts and messages
- Output responses
- Model information
- Timing data
- Token usage
- Error information (if any)
Adding Metadata
You can add custom metadata to your logs by including it in the `metadata` parameter when invoking a chain or LLM:
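Continuing the chain example above (the metadata keys are arbitrary; use whatever fields make sense for your application):

```python
result = chain.invoke(
    {"topic": "callbacks"},
    config={
        "callbacks": [callback],
        # Custom metadata is attached to the resulting trace in Galileo.
        "metadata": {"user_id": "user-123", "session_id": "session-456"},
    },
)
```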
This metadata will be attached to the logs in Galileo, making it easier to filter and analyze your data.
Nested Chains and Agents
The `GalileoCallback` automatically handles nested chains and agents, creating a hierarchical trace that reflects the structure of your LangChain application:
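For example, a chain that feeds one LLM step into another is logged as a parent trace with nested spans for each step; a rough sketch:

```python
from galileo.handlers.langchain import GalileoCallback
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

callback = GalileoCallback()
llm = ChatOpenAI(model="gpt-4o", callbacks=[callback])

outline = ChatPromptTemplate.from_template("Outline a post about {topic}.")
draft = ChatPromptTemplate.from_template(
    "Write a short post from this outline:\n{outline}"
)

# Two chained steps: the outer run appears as the parent trace, and each
# prompt/LLM step appears as a nested span beneath it.
chain = (
    outline | llm | StrOutputParser()
    | (lambda text: {"outline": text})
    | draft | llm | StrOutputParser()
)

result = chain.invoke({"topic": "tracing"}, config={"callbacks": [callback]})
```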
Best Practices
- Pass callbacks consistently: Make sure to pass the callback to all LangChain components (LLMs, chains, agents, etc.) to ensure complete logging.
- Include meaningful metadata: Add relevant metadata to your invocations to make it easier to filter and analyze your logs.
- Use with `galileo_context`: You can combine the LangChain integration with `galileo_context` for more control over trace management:
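A sketch, assuming `galileo_context` is importable from the top-level `galileo` package and accepts `project` and `log_stream` arguments:

```python
from galileo import galileo_context
from galileo.handlers.langchain import GalileoCallback
from langchain_openai import ChatOpenAI

# Scope all traces from this block to a specific project and log stream.
with galileo_context(project="my-project", log_stream="langchain-runs"):
    callback = GalileoCallback()
    llm = ChatOpenAI(model="gpt-4o", callbacks=[callback])
    llm.invoke("Hello from inside galileo_context!")
# Traces are flushed when the context exits.
```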
Related Resources
- OpenAI Wrapper - For automatic logging of OpenAI calls
- @log Decorator - For decorating functions with logging
- galileo_context - For managing trace context and automatic flushing