You can use runtime protection inside your LangChain and LangGraph apps to intercept any inputs or outputs that trigger rulesets and stop the chain.

Basic usage

There are two main components to adding runtime protection:
  • ProtectTool - a LangChain tool that is configured with a stage, and optionally with prioritized rulesets for local stages (a configuration sketch follows the basic example below)
  • ProtectParser - a parser that checks the result of the ProtectTool, and runs the next step in the chain if no rulesets were triggered
You can then chain these together to create a runnable protected chain.
from galileo.handlers.langchain.tool import ProtectTool, ProtectParser

from langchain_openai import ChatOpenAI

# Create a ProtectTool instance
protect_tool = ProtectTool(
    stage_name="My stage"
)

# Create a LangChain LLM instance
llm = ChatOpenAI(model="gpt-4o")

# Create a ProtectParser instance, passing the LLM as the chain to be invoked
protect_parser = ProtectParser(chain=llm, echo_output=True)

# Define the chain with Protect.
protected_chain = protect_tool | protect_parser.parser

# Define an example input to check
query = "What is the capital of France?"

# Invoke the protected chain
response = protected_chain.invoke({"input": query})
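
As noted above, the ProtectTool can also be configured with prioritized rulesets for local stages. The following is a minimal sketch; the prioritized_rulesets parameter and the rule and action field names are assumptions based on the component description above, so verify them against your version of the SDK.

from galileo.handlers.langchain.tool import ProtectTool

# A minimal sketch of a local stage with prioritized rulesets.
# The prioritized_rulesets parameter and the rule/action field names below
# are assumptions - verify them against your SDK version.
protect_tool = ProtectTool(
    stage_name="My local stage",
    prioritized_rulesets=[
        {
            "rules": [
                {
                    # Trigger when the input contains PII of type SSN
                    "metric": "pii",
                    "operator": "contains",
                    "target_value": "ssn",
                }
            ],
            "action": {
                # Override the response with one of the configured choices
                "type": "OVERRIDE",
                "choices": [
                    "Sorry, I can't process requests containing SSNs.",
                ],
            },
        }
    ],
)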
If the rulesets are triggered, the response depends on the action assigned to the triggered ruleset. A Passthrough action returns the original input for the calling application to process; an Override action returns a randomly selected choice from the action's configured choices. If no rulesets are triggered, the chain runs and LangChain returns the relevant message type.
# Invoke the chain
response = protected_chain.invoke({"input": query})

# Check the response type
if isinstance(response, str):
    # This indicates the ProtectTool intervened and returned a string directly
    # such as the random choice from an Override action
    print(f"🛡️ Intercepted/Modified - Protect Response: {response}")
else:
    # This means the LLM part of the chain was executed
    print(f"✅ Allowed - LLM Response: {response.content}")

Integrate with logging

You can pass a RunnableConfig when invoking the protected chain, allowing you to add a GalileoCallback callback handler to log the run to Galileo.
from galileo.handlers.langchain import GalileoCallback
from langchain_core.runnables.config import RunnableConfig

# Create a callback handler
galileo_callback = GalileoCallback()

# Create the config with the callback
config = RunnableConfig(callbacks=[galileo_callback])

# Invoke the chain with the callback
response = protected_chain.invoke({"input": query}, config=config)
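
If you already log to a specific project and log stream, you can construct the callback from an existing logger. This is a minimal sketch, assuming GalileoCallback accepts a galileo_logger argument; the project and log stream names are placeholders.

from galileo import GalileoLogger
from galileo.handlers.langchain import GalileoCallback

# Assumption: GalileoCallback accepts an existing GalileoLogger via
# galileo_logger - verify against your SDK version. The project and
# log stream names below are placeholders.
logger = GalileoLogger(project="my-project", log_stream="my-log-stream")
galileo_callback = GalileoCallback(galileo_logger=logger)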

Next steps

Basic protection components