Getting Started with Galileo
Welcome to Galileo! This quickstart guide will walk you through setting up your first AI evaluation in minutes. You’ll learn how to identify and fix common issues in AI responses using Galileo’s powerful metrics and insights.
What You’ll Learn
- Set up and run an AI evaluation with Galileo in less than 5 minutes
- Interpret key metrics to identify response quality issues
- Apply prompt engineering techniques to fix common AI response problems
- Understand how Galileo helps you build more reliable AI applications
Install Dependencies
Install the Galileo package:
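For the Python SDK, a minimal install might look like the following. The package names are assumptions based on the Python ecosystem (Galileo SDK, OpenAI client, and python-dotenv for loading credentials); swap in the TypeScript SDK if that's your stack:

```shell
pip install galileo openai python-dotenv
```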
Set Up Environment Variables
Create a .env file in the project directory and add the following credentials:
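A sketch of the file is below. Treat the exact variable names as assumptions and copy the canonical keys and values from your Galileo console:

```
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name
OPENAI_API_KEY=your-openai-api-key
```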
Create a Project Directory
Create a project directory and add the following files:
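A minimal layout, with hypothetical file names (app.py for the application code, .env for credentials), can be created like this:

```shell
# Create the project directory and the two files the quickstart uses
mkdir galileo-quickstart && cd galileo-quickstart
touch app.py   # application code goes here
touch .env     # credentials go here
```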
Application Code
Run the Application
To run the application, use the following command:
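Assuming the application code was saved as app.py:

```shell
python app.py
```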
Analyze the Results
Check your terminal for the output or head over to the Galileo console to review the run trace and metrics.
Newton's First Law, often referred to as the Law of Inertia, states that an object will remain at rest, or in uniform motion in a straight line, unless acted upon by a net external force. This means that if an object is not influenced by any external forces, it will maintain its current state of motion. Essentially, this law emphasizes the concept of inertia, which is the natural tendency of objects to resist changes in their motion. It forms the foundation for classical mechanics, outlining the behavior of objects when forces are not in play.
Fixing prompt issues
If you examine the results from our first run, you'll see that the model's response is not exactly what we asked for. We're using the instruction adherence metric to check how well our model follows directions.
What Happened?
- We asked for a succinct explanation.
- The model gave a detailed answer instead. 😢
- Our instruction adherence metric was 0.6667, meaning we need to tweak our prompt.
To understand why our instruction adherence metric was so low, we can look at the metric explanation, which appears when you hover over the LLM span in your trace.
The instruction provided was to 'Explain the following topic succinctly: Newton's first law'. The response begins by defining Newton's First Law and provides a clear explanation of the concept of inertia. However, the response is lengthy and provides more detail than the word 'succinctly' implies. While it does effectively cover the essence of the topic, it could be more concise to align better with the instruction. Thus, while informative, the response does not fully adhere to the request for a succinct explanation.
This explanation correctly points out that the answer we got wasn’t exactly succinct. So, let’s modify our prompt to fix this. We’ll make sure to explain what succinctness means for us:
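One way to do that is to turn "succinctly" into a concrete constraint. The wording below is a hypothetical revision, not prescribed by Galileo:

```python
# Hypothetical revision: define "succinctly" as a concrete length limit
# so the model (and the instruction adherence judge) have an unambiguous target.
prompt = (
    "Explain the following topic succinctly, in no more than two sentences: "
    "Newton's first law"
)
```

The rest of the application stays the same; only the instruction changes.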
Run the application again and the response will be much more concise.
Now, our instruction adherence metric jumps to 1! 🎉
What’s Next
Now that you’ve completed your first evaluation, explore these resources to build better AI applications:
- SDKs: Integrate Galileo with Python or TypeScript
- Application Guides: Optimize Conversational AI, RAG Systems, or AI Agents
- Advanced Features: Run Experiments, create Custom Metrics, and detect Failure Modes
Continue your journey with our comprehensive How-to Guides.