Get started with the multi-agent banking chatbot sample project powered by LangGraph, with RAG using Pinecone as a vector database
Summary

The supervisor agent exhibits inconsistent behavior that undermines the multi-agent system’s effectiveness. In a credit score inquiry, the supervisor correctly identified the query type and transferred it to the credit-score-agent, which successfully retrieved the user’s credit score (550) and provided helpful context about the score’s meaning. However, when control returned to the supervisor, it responded with ‘I don’t know’ despite the specialist having successfully completed the task. This creates a frustrating user experience: the system retrieves the requested information but then claims ignorance, potentially making users think the system is broken or unreliable.

Suggestions

Ensure the supervisor agent properly processes and relays the results from specialist agents instead of defaulting to ‘I don’t know’ responses.

To see how you can use these insights to improve the app, get the code and try some different agent prompts.
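One common fix for this failure mode is to make the "relay the specialist's result" expectation explicit in the supervisor's system prompt. The snippet below is only an illustrative sketch, not the prompt shipped with the sample; the prompt wording and the `build_final_answer` helper are hypothetical:

```python
# Hypothetical supervisor prompt that tells the supervisor to relay
# specialist results instead of answering "I don't know".
SUPERVISOR_PROMPT = (
    "You are a supervisor routing banking questions to specialist agents. "
    "When a specialist agent returns a result, summarize that result for "
    "the user in your final answer. Never reply 'I don't know' if a "
    "specialist has already provided the requested information."
)

def build_final_answer(specialist_result: str) -> str:
    """Sketch: fold the specialist's result into the supervisor's reply."""
    return f"Here is what I found: {specialist_result}"
```

With a prompt like this, the credit score example above would end with the supervisor repeating the specialist's answer (the 550 score and its context) rather than claiming ignorance.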
Open the integrations page
Add an integration
Clone the SDK examples repo
Navigate to the relevant project folder
If you want to learn more about adding logging with Galileo to a LangGraph app, check out the add evaluations to a multi-agent LangGraph application cookbook.

Configure environment variables
This project has a `.env.example` file. Rename this file to `.env` and populate the `PINECONE_API_KEY` value. You can leave the other values for now, as you will populate them later.

Upload the documents
This script is in the `scripts` folder. Run this script to create a new index in Pinecone and upload the documents.

Install required dependencies
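The repo's script handles index creation and uploading; conceptually, such a script splits each document into overlapping chunks before embedding them and upserting the vectors into Pinecone. The helper below is a hypothetical sketch of just the chunking step (the function name and default sizes are assumptions, not taken from the sample):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks, a common preparation step
    before embedding text and upserting the vectors into a Pinecone index.

    Overlap between consecutive chunks helps preserve context that would
    otherwise be cut at a chunk boundary.
    """
    chunks = []
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each chunk would then be embedded and stored with metadata (such as the source document name) so the RAG retriever can cite where an answer came from.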
Configure environment variables
In the `.env` file, populate the Galileo values:

Environment Variable | Value |
---|---|
`GALILEO_API_KEY` | Your API key |
`GALILEO_PROJECT` | The name of your Galileo project. This is preset to Multi-Agent Banking Chatbot. |
`GALILEO_LOG_STREAM` | The name of your Log stream. This is preset to Default Log Stream. |
`GALILEO_CONSOLE_URL` | Optional. The URL of your Galileo console for custom deployments. For the free tier, you don’t need to set this. |
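Once populated, the Galileo section of your `.env` file might look like the following sketch (the API key is a placeholder; the project and Log stream names are the preset values from the table above):

```shell
# Galileo configuration - replace the API key placeholder with your own.
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT="Multi-Agent Banking Chatbot"
GALILEO_LOG_STREAM="Default Log Stream"
# GALILEO_CONSOLE_URL is only needed for custom deployments, so it stays
# commented out on the free tier.
```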
Environment Variable | Value |
---|---|
`OPENAI_API_KEY` | Your OpenAI API key. If you are using Ollama, set this to `ollama`. If you are using another OpenAI-compatible API, set this to the relevant API key. |
`OPENAI_BASE_URL` | Optional. The base URL of your OpenAI deployment. Leave this commented out if you are using the default OpenAI API. If you are using Ollama, set this to `http://localhost:11434/v1`. If you are using another OpenAI-compatible API, set this to the relevant URL. |
`MODEL_NAME` | The name of the model you are using. |
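Together, these variables let the app target OpenAI, Ollama, or any other OpenAI-compatible endpoint without code changes. A minimal sketch of how an app might resolve them (the `resolve_llm_config` function is hypothetical, not part of the sample):

```python
import os

def resolve_llm_config() -> dict:
    """Build LLM client settings from the environment variables above.

    base_url is None when OPENAI_BASE_URL is unset, which an
    OpenAI-compatible client typically treats as "use the default
    OpenAI endpoint".
    """
    return {
        "api_key": os.environ["OPENAI_API_KEY"],
        "base_url": os.environ.get("OPENAI_BASE_URL"),  # None => default OpenAI API
        "model": os.environ["MODEL_NAME"],
    }
```

For example, pointing the app at a local Ollama server only requires setting `OPENAI_API_KEY=ollama`, `OPENAI_BASE_URL=http://localhost:11434/v1`, and `MODEL_NAME` to a model you have pulled locally.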
Run the project
Run the unit test
Evaluate the experiment
Try different supervisor agent prompts
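When experimenting, it can help to keep the candidate supervisor prompts in one place so each experiment run uses a clearly named variant. The variant names and wording below are a hypothetical sketch, not the prompts from the sample:

```python
# Hypothetical supervisor prompt variants to compare across experiment runs.
PROMPT_VARIANTS = {
    "baseline": (
        "You are a supervisor that routes banking questions to specialist agents."
    ),
    "relay-results": (
        "You are a supervisor that routes banking questions to specialist agents. "
        "Always include the specialist's answer in your final response."
    ),
}

def select_prompt(variant: str) -> str:
    """Pick a prompt variant by name for an experiment run."""
    return PROMPT_VARIANTS[variant]
```

Running the same experiment once per variant makes the comparison in the next step straightforward, since each run in Galileo is labeled by the prompt it used.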
Compare experiments