When a model ignores instructions, it generates responses that deviate from the provided prompt structure, missing key details or adding extraneous information. This can lead to misleading, unhelpful, or non-compliant outputs.

Setup

In this example, we’ll use the gpt-4o model to generate a response to a simple prompt. We’ll enable the instruction_adherence metric to monitor the model’s adherence to the prompt.
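Before running the script, you’ll need credentials for both OpenAI and Galileo. A minimal .env sketch is shown below; the variable names follow the Galileo SDK’s documented convention, and every value is a placeholder you should replace with your own:

# .env -- placeholder values, replace with your own credentials
OPENAI_API_KEY=sk-...
GALILEO_API_KEY=your-galileo-api-key
GALILEO_PROJECT=your-project-name
GALILEO_LOG_STREAM=your-log-stream-name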

import os
from galileo.openai import openai  # The Galileo OpenAI client wrapper is all you need!
from dotenv import load_dotenv

# Pull OPENAI_API_KEY and the Galileo credentials in from .env
load_dotenv()

client = openai.OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

# No placeholders here, so a plain string (not an f-string) is correct
prompt = "Explain the following topic succinctly: Newton's First Law"
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": prompt}],
)
print(response.choices[0].message.content.strip())
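One practical note: a short-lived script can exit before its trace reaches Galileo. Assuming your SDK version exposes the galileo_context helper (check the Galileo docs for your release), you can flush pending traces explicitly before the process exits:

from galileo import galileo_context

# Make sure any buffered traces are uploaded before the process exits
galileo_context.flush()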

What Showed Up in Metrics

When we examine the Instruction Adherence metric, we see a score of 0.6667.

That means the model didn’t fully follow the instructions in the prompt. To understand why Galileo flagged this, we can review the explanation the metric generates alongside the score:

Metric Explanation

The instruction provided was to 'Explain the following topic succinctly: Newton's first law'. The response begins by defining Newton's First Law and provides a clear explanation of the concept of inertia. However, the response is lengthy and provides more detail than the word 'succinctly' implies. While it does effectively cover the essence of the topic, it could be more concise to align better with the instruction. Thus, while informative, the response does not fully adhere to the request for a succinct explanation.

What Went Wrong?

As the explanation indicates, the model failed to follow the instructions because the prompt is too vague: it never specifies what counts as a “succinct” explanation, so the model fell back on its own, much looser interpretation.

The Solution

To fix this, we can rewrite the prompt with more specific instructions:

prompt = """
1. Explain Newton's First Law in one sentence of no more than fifteen (15) words.
2. Do not add any additional sentences, examples, parentheses, bullet points, or further clarifications.
3. Your answer must be exactly one sentence and must not exceed 15 words.
"""

In this updated prompt, we gave the model concrete criteria for what counts as a “succinct” explanation, and we stated the length constraint in three different ways to eliminate any ambiguity.
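For completeness, the second run is identical to the first except for the prompt; here’s a minimal sketch that reuses the client from the setup above:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": prompt}],
)
print(response.choices[0].message.content.strip())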

If we run the application again, we see that the model now follows the instructions more closely:

An object remains at rest or in uniform motion unless acted upon by a net force.
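You don’t have to wait for the dashboard to sanity-check the length constraint; a few lines of plain Python (not part of the Galileo SDK) can count the words in the response:

# Quick local sanity check against the 15-word limit in the prompt
answer = response.choices[0].message.content.strip()
word_count = len(answer.split())
print(f"Word count: {word_count}")
if word_count > 15:
    print("Warning: response exceeds the 15-word limit")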

Now, our instruction adherence metric jumps to a perfect 1.0 (100.00%)!

As the metrics dashboard confirms, the optimized prompt with specific, unambiguous instructions resulted in perfect adherence to our requirements. The model now produces exactly what we asked for: a concise, single-sentence explanation within the specified word limit.

By monitoring instruction adherence and iteratively improving your prompts based on the metrics, you can ensure your AI models consistently follow instructions as intended.