Wrappers
Overview
Galileo wrappers automatically capture prompts, responses, and performance metrics without requiring you to add explicit logging code throughout your application.
Just import the wrapper anywhere you were using the original library (for example, openai).
Available Wrappers
Galileo currently supports the following wrappers:
- OpenAI Wrapper - A drop-in replacement for the OpenAI library that automatically logs all prompts, responses, and statistics.
- LangChain Integration - A callback-based integration for LangChain that logs all LLM interactions within your LangChain workflows.
Basic Usage
OpenAI Wrapper
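A minimal sketch of the drop-in pattern, assuming the wrapped client is exposed at galileo.openai (check your SDK version for the exact import path) and that OPENAI_API_KEY and your Galileo credentials are set in the environment:

```python
# Import the wrapped client from Galileo instead of importing the
# openai package directly; prompts, responses, and statistics from
# every call are then logged automatically.
from galileo.openai import openai  # assumed import path

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```

The rest of your code is unchanged; only the import line differs from a plain OpenAI integration.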
LangChain Integration
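A sketch of the callback-based LangChain integration, assuming the handler lives at galileo.handlers.langchain (verify the path against your installed SDK):

```python
# Attach Galileo's callback handler so every LLM interaction in the
# chain is logged; the handler import path is an assumption.
from galileo.handlers.langchain import GalileoCallback  # assumed path
from langchain_openai import ChatOpenAI

# Passing the callback at construction time logs all invocations
# made through this model instance.
llm = ChatOpenAI(model="gpt-4o", callbacks=[GalileoCallback()])
print(llm.invoke("Say hello").content)
```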
Alternative Methods of Logging
If you’re using an LLM library that doesn’t have a dedicated Galileo wrapper, you can still log your application using:
- The @log Decorator - Add the @log decorator to functions that call LLMs to automatically capture inputs and outputs.
- Direct Use of the GalileoLogger Class - For more control, you can use the base logger class directly.
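Both approaches above can be sketched as follows; the import paths, logger method names, and the project and log-stream names are assumptions to check against your SDK version:

```python
from galileo import log, GalileoLogger  # assumed import paths

# Option 1: decorate any function that calls an LLM; its inputs and
# outputs are captured automatically when it runs.
@log
def ask(question: str) -> str:
    # call your LLM client here
    return "stub answer"

# Option 2: drive the base logger directly for full control over
# traces and spans (method names assumed from common SDK patterns).
logger = GalileoLogger(project="my-project", log_stream="dev")
logger.start_trace(input="Say hello")
logger.add_llm_span(input="Say hello", output="Hi!", model="gpt-4o")
logger.conclude(output="Hi!")
logger.flush()  # send the buffered trace to Galileo
```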
For detailed information on these alternative logging methods, see the Python SDK Overview.
Using with Context Manager
All wrappers work seamlessly with the galileo_context context manager for more control over trace management:
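For instance, a wrapped OpenAI call can be grouped under an explicit project and log stream; this is a sketch assuming galileo_context accepts project and log_stream keyword arguments (names here are hypothetical):

```python
from galileo import galileo_context  # assumed import path
from galileo.openai import openai    # wrapped client, assumed path

# All wrapped calls made inside the block are grouped into one trace
# and flushed automatically when the block exits.
with galileo_context(project="my-project", log_stream="dev"):
    client = openai.OpenAI()
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hello"}],
    )
```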
Related Resources
- @log Decorator - For decorating functions with logging
- GalileoLogger - For more manual control over logging
- galileo_context - For managing trace context and automatic flushing