Overview
LangGraph provides two types for advanced graph control flow:
Command — Combines a state update with routing in a single return value. Instead of using conditional edges, a node can return Command(update={...}, goto="next_node") to update state and control which node runs next.
Send — Enables map-reduce (fan-out) patterns by dispatching work to a node with custom state. Returning a list of Send objects from an edge function runs the target node once per Send, in parallel.
Both types are useful for building dynamic, multi-path agents in LangGraph.
Galileo support
Galileo’s GalileoCallback and GalileoAsyncCallback handle Command and Send automatically. No additional configuration is needed — just pass the callback as usual and all control flow data is captured.
Using Command
Command lets a node update state and route to the next node in one step, replacing the need for separate conditional edges.
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Command
from galileo.handlers.langchain.handler import GalileoCallback


class State(TypedDict):
    query: str
    category: str
    response: str


def classify(state: State) -> Command:
    """Classify the query and route to the appropriate handler."""
    query = state["query"]
    # In practice, you might use an LLM to classify the query
    if "refund" in query.lower():
        return Command(update={"category": "billing"}, goto="billing_handler")
    return Command(update={"category": "general"}, goto="general_handler")


def billing_handler(state: State) -> dict:
    return {"response": f"Billing support for: {state['query']}"}


def general_handler(state: State) -> dict:
    return {"response": f"General support for: {state['query']}"}


# Build the graph
graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_node("billing_handler", billing_handler)
graph.add_node("general_handler", general_handler)
graph.add_edge(START, "classify")
graph.add_edge("billing_handler", END)
graph.add_edge("general_handler", END)
app = graph.compile()

# Run with Galileo logging — Command routing is captured automatically
galileo_callback = GalileoCallback(project="langgraph-command-example")
result = app.invoke(
    {"query": "I need a refund for my last order"},
    config={"callbacks": [galileo_callback]},
)
print(result["response"])
```
In this example, the classify node returns a Command that sets the category field and routes to either billing_handler or general_handler. Galileo captures the full Command return value, including the routing decision.
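Conceptually, the runtime applies a Command in two steps: first the state update, then the routing decision. The loop below is a stdlib-only toy stand-in for that behavior (not LangGraph's actual scheduler), reusing the node names from the example above:

```python
from dataclasses import dataclass, field
from typing import Optional


# Toy stand-in for langgraph.types.Command (illustration only)
@dataclass
class Command:
    update: dict = field(default_factory=dict)
    goto: Optional[str] = None


def classify(state):
    if "refund" in state["query"].lower():
        return Command(update={"category": "billing"}, goto="billing_handler")
    return Command(update={"category": "general"}, goto="general_handler")


def billing_handler(state):
    return Command(update={"response": f"Billing support for: {state['query']}"})


def general_handler(state):
    return Command(update={"response": f"General support for: {state['query']}"})


nodes = {
    "classify": classify,
    "billing_handler": billing_handler,
    "general_handler": general_handler,
}


def run(state, entry="classify"):
    node = entry
    while node is not None:
        cmd = nodes[node](state)
        state.update(cmd.update)  # apply the state update first...
        node = cmd.goto           # ...then follow the routing decision
    return state


state = run({"query": "I need a refund"})
```

Because both pieces travel in one return value, a trace of the Command shows the update and the routing decision together, which is what Galileo records.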
Using Send
Send enables fan-out patterns where multiple instances of a node run in parallel, each with different input state.
```python
import operator
from typing import Annotated, TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send
from galileo.handlers.langchain.handler import GalileoCallback


class OverallState(TypedDict):
    topics: list[str]
    summaries: Annotated[list[str], operator.add]


class SummaryState(TypedDict):
    topic: str
    summaries: Annotated[list[str], operator.add]


def fan_out(state: OverallState) -> list[Send]:
    """Dispatch each topic to a summarize node in parallel."""
    return [Send("summarize", {"topic": t}) for t in state["topics"]]


def summarize(state: SummaryState) -> dict:
    """Summarize a single topic."""
    # In practice, you would call an LLM here
    return {"summaries": [f"Summary of {state['topic']}"]}


# Build the graph
graph = StateGraph(OverallState)
graph.add_node("summarize", summarize)
graph.add_conditional_edges(START, fan_out)
graph.add_edge("summarize", END)
app = graph.compile()

# Run with Galileo logging — each Send is captured automatically
galileo_callback = GalileoCallback(project="langgraph-send-example")
result = app.invoke(
    {"topics": ["AI safety", "Quantum computing", "Climate change"]},
    config={"callbacks": [galileo_callback]},
)
print(result["summaries"])
```
Here, fan_out returns a list of Send objects — one per topic. LangGraph runs the summarize node for each topic in parallel, and the results are aggregated via the operator.add reducer. Galileo logs each parallel execution.
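Stripped of the graph machinery, the aggregation step is just a fold: each parallel summarize run returns a one-element list for the summaries key, and the operator.add reducer concatenates them. A minimal sketch of that reduce step:

```python
import operator
from functools import reduce

# Each parallel branch returns a one-element list for the reduced key
branch_outputs = [
    ["Summary of AI safety"],
    ["Summary of Quantum computing"],
    ["Summary of Climate change"],
]

# LangGraph applies the reducer as branches complete; the end result
# is equivalent to folding the outputs with operator.add
summaries = reduce(operator.add, branch_outputs, [])
```

This is why both state schemas annotate summaries with the same reducer: the parallel writes would otherwise conflict, and the reducer tells LangGraph how to merge them.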
Tools that return Command
When a tool returns a Command, it can include a ToolMessage in the state update. Galileo automatically extracts the ToolMessage for proper tool-call logging.
```python
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langgraph.types import Command


@tool
def lookup_order(order_id: str) -> Command:
    """Look up an order by ID and return the result as a Command."""
    # Simulate an order lookup
    order_data = {"order_id": order_id, "status": "shipped", "total": 49.99}
    # Return a Command that includes a ToolMessage in the state update.
    # Galileo automatically extracts the ToolMessage for proper logging.
    return Command(
        update={
            "messages": [
                ToolMessage(
                    content=(
                        f"Order {order_id}: status={order_data['status']}, "
                        f"total=${order_data['total']}"
                    ),
                    tool_call_id="placeholder",  # LangGraph fills this automatically
                )
            ]
        }
    )
```
This pattern is useful when a tool needs to both return a result to the LLM (via ToolMessage) and update other parts of the graph state at the same time.
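To make the dual-update idea concrete, here is a stdlib-only sketch of how a Command update carrying both a message and another state key might merge into graph state. The order_status key is hypothetical, and the append-vs-replace logic mimics an add_messages-style reducer rather than LangGraph's actual one:

```python
def merge_update(state: dict, update: dict) -> dict:
    """Merge a Command-style update into state: append to 'messages',
    overwrite everything else (toy reducer, illustration only)."""
    merged = dict(state)
    for key, value in update.items():
        if key == "messages":
            merged[key] = merged.get(key, []) + value
        else:
            merged[key] = value
    return merged


# A tool's Command.update carrying both a ToolMessage-like entry
# and an extra state key (order_status is hypothetical)
update = {
    "messages": [{"role": "tool", "content": "Order 123: shipped"}],
    "order_status": "shipped",
}

state = merge_update(
    {"messages": [{"role": "user", "content": "Where is order 123?"}]},
    update,
)
```

The message lands in the conversation history for the LLM, while the extra key is available to downstream nodes without another round trip.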
What gets logged
Galileo serializes Command and Send objects and captures their fields in the trace:
Command fields
| Field | Description |
| --- | --- |
| update | The state update dictionary applied by the command |
| goto | The target node(s) the command routes to |
| graph | The subgraph to send the command to (if applicable) |
| resume | The resume value for interrupt-based workflows |
Send fields
| Field | Description |
| --- | --- |
| node | The target node to dispatch work to |
| arg | The custom state passed to the target node |
All fields are serialized recursively, including nested messages and complex objects.
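As an illustration of what recursive serialization means here (this is a generic sketch, not Galileo's actual serializer), nested objects are walked down to JSON-friendly primitives:

```python
def serialize(obj):
    """Recursively convert nested objects into JSON-friendly values
    (generic illustration, not Galileo's implementation)."""
    if isinstance(obj, dict):
        return {str(k): serialize(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [serialize(v) for v in obj]
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return obj
    if hasattr(obj, "__dict__"):  # e.g. a Command or Send instance
        return {"type": type(obj).__name__, **serialize(vars(obj))}
    return repr(obj)


class Send:  # minimal stand-in for langgraph.types.Send
    def __init__(self, node, arg):
        self.node, self.arg = node, arg


trace = serialize([Send("summarize", {"topic": "AI safety"})])
```

The result is plain dicts and lists, so nested messages and custom state survive intact in the logged trace.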
Next steps