Semantic Conventions
Galileo supports two complementary semantic convention standards:
- OpenTelemetry GenAI Semantic Conventions — Semantic Conventions for GenAI agent and framework spans
- OpenInference — OpenInference Semantic Conventions
Minimum Requirements for Valid Spans
For a span to be considered valid, it must include certain required attributes depending on the span type.
Agent Spans
| Attribute | Requirement Level | Description | Example |
|---|---|---|---|
| gen_ai.operation.name | Required | The name of the operation being performed | invoke_agent, create_agent |
| gen_ai.provider.name | Required | The Generative AI provider | openai, anthropic, gcp.vertex_ai |
| Span name | Required | Should follow format: invoke_agent {gen_ai.agent.name} or invoke_agent | invoke_agent Math Tutor |
| Span kind | Required | Should be CLIENT for remote agents or INTERNAL for in-process agents | CLIENT, INTERNAL |
| Input | Required | Input messages or data. Use gen_ai.input.messages (OpenTelemetry) or input.value (OpenInference) | [{"role": "user", "content": "..."}] |
| Output | Required | Output messages or data. Use gen_ai.output.messages (OpenTelemetry) or output.value (OpenInference) | [{"role": "assistant", "content": "..."}] |
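For example, a minimal agent span carrying these attributes can be created by hand with the OpenTelemetry Python SDK. This is a sketch only; the tracer name, agent name, and message payloads are placeholders:

```python
# Minimal sketch of a valid agent span using the OpenTelemetry Python SDK.
# The tracer name, agent name, and messages are illustrative placeholders.
import json
from opentelemetry import trace
from opentelemetry.trace import SpanKind

tracer = trace.get_tracer("my-agent-app")

with tracer.start_as_current_span(
    "invoke_agent Math Tutor",
    kind=SpanKind.INTERNAL,  # use SpanKind.CLIENT for remote agents
) as span:
    span.set_attribute("gen_ai.operation.name", "invoke_agent")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.agent.name", "Math Tutor")
    span.set_attribute(
        "gen_ai.input.messages",
        json.dumps([{"role": "user", "content": "What is 2 + 2?"}]),
    )
    # ... run the agent, then record its output ...
    span.set_attribute(
        "gen_ai.output.messages",
        json.dumps([{"role": "assistant", "content": "2 + 2 = 4"}]),
    )
```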
LLM Spans
| Attribute | Requirement Level | Description | Example |
|---|---|---|---|
| gen_ai.operation.name | Required | The name of the operation | chat, text_completion, embeddings |
| gen_ai.provider.name | Required | The Generative AI provider | openai, anthropic |
| gen_ai.request.model | Conditionally Required | The name of the GenAI model | gpt-4, claude-3-opus |
| Input | Required | Input messages or prompts. Use gen_ai.input.messages (OpenTelemetry) or llm.input_messages (OpenInference) | [{"role": "user", "content": "..."}] |
| Output | Required | Output messages or completions. Use gen_ai.output.messages (OpenTelemetry) or llm.output_messages (OpenInference) | [{"role": "assistant", "content": "..."}] |
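For example, a chat-completion span with these attributes could be recorded as follows (a sketch assuming the OpenTelemetry Python SDK; the model name and messages are placeholders):

```python
# Minimal sketch of a valid LLM span using the OpenTelemetry Python SDK.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")

with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.operation.name", "chat")
    span.set_attribute("gen_ai.provider.name", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute(
        "gen_ai.input.messages",
        json.dumps([{"role": "user", "content": "Summarize this article."}]),
    )
    # ... call the model, then record the completion ...
    span.set_attribute(
        "gen_ai.output.messages",
        json.dumps([{"role": "assistant", "content": "The article argues that..."}]),
    )
```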
Tool Execution Spans
| Attribute | Requirement Level | Description | Example |
|---|---|---|---|
| gen_ai.operation.name | Required | Should be execute_tool | execute_tool |
| tool.name (OpenInference) | Required | Name of the tool being used | get_weather, calculate |
| Input | Required | Tool call arguments. Use gen_ai.tool.call.arguments (OpenTelemetry) or input.value (OpenInference) | {"location": "NYC", "unit": "fahrenheit"} |
| Output | Required | Tool call result. Use gen_ai.tool.call.result (OpenTelemetry) or output.value (OpenInference) | {"temperature": 72, "condition": "sunny"} |
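For example (a sketch assuming the OpenTelemetry Python SDK; gen_ai.tool.name is used here as the OpenTelemetry-style counterpart of the OpenInference tool.name attribute, and the tool, arguments, and result are placeholders):

```python
# Minimal sketch of a valid tool execution span.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-agent-app")

with tracer.start_as_current_span("execute_tool get_weather") as span:
    span.set_attribute("gen_ai.operation.name", "execute_tool")
    span.set_attribute("gen_ai.tool.name", "get_weather")
    span.set_attribute(
        "gen_ai.tool.call.arguments",
        json.dumps({"location": "NYC", "unit": "fahrenheit"}),
    )
    # ... invoke the tool, then record its result ...
    span.set_attribute(
        "gen_ai.tool.call.result",
        json.dumps({"temperature": 72, "condition": "sunny"}),
    )
```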
Retriever Spans
| Attribute | Requirement Level | Description | Example |
|---|---|---|---|
| db.operation | Required | Database operation type. Should be query or search | query, search |
| openinference.span.kind (OpenInference) | Conditionally Required | Should be retriever when using OpenInference | retriever |
| Input | Required | Query string or search input. Use gen_ai.input.messages (OpenTelemetry) or input.value (OpenInference) | "What is machine learning?" |
| Output | Required | Retrieved documents. Use gen_ai.output.messages with document list (OpenTelemetry) or retrieval.documents (OpenInference) | [{"id": "doc1", "content": "..."}, {"id": "doc2", "content": "..."}] |
Retriever spans are typically detected automatically when db.operation is set to query or search. The output should be a list of documents, which will be formatted appropriately by Galileo's OTLP provider.
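For example, a retriever span following the OpenTelemetry-style attributes above might look like this (a sketch; the query and documents are placeholders):

```python
# Minimal sketch of a retriever span; setting db.operation to "search"
# allows the span to be detected as a retriever span.
import json
from opentelemetry import trace

tracer = trace.get_tracer("my-rag-app")

with tracer.start_as_current_span("search knowledge-base") as span:
    span.set_attribute("db.operation", "search")
    span.set_attribute("gen_ai.input.messages", "What is machine learning?")
    # ... query the vector store, then record the retrieved documents ...
    span.set_attribute(
        "gen_ai.output.messages",
        json.dumps([
            {"id": "doc1", "content": "Machine learning is..."},
            {"id": "doc2", "content": "A subfield of AI that..."},
        ]),
    )
```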
Error Handling
For spans that end in an error, you must include:
- error.type — Describes the class of error (e.g., timeout, 500, exception name)
- Span status — Should be set to ERROR with an appropriate error description
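For example, with the OpenTelemetry Python SDK (a sketch; the span name and the simulated timeout are placeholders):

```python
# Minimal sketch of recording a failed span: set error.type and mark the
# span status as ERROR with a description.
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("my-agent-app")

with tracer.start_as_current_span("execute_tool get_weather") as span:
    try:
        raise TimeoutError("weather service did not respond")  # simulated failure
    except TimeoutError as exc:
        span.set_attribute("error.type", type(exc).__name__)  # e.g. "TimeoutError"
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))
```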
Direct POST Calls to the OTLP Endpoint
You can make direct POST calls to the Galileo OTLP endpoint to send OTLP packets. This is useful for custom integrations or when you need to send pre-generated OTLP data.
Endpoint
Headers
The following headers are required:
| Header | Description |
|---|---|
| Galileo-API-Key | Your Galileo API key |
| project or projectid | Project name or project ID (at least one required) |
| logstream or logstreamid | Log stream name or Log stream ID (at least one required) |
| Content-Type | Must be application/x-protobuf |
Request Body
The request body should contain OTLP packets in protobuf format (binary). The payload should be an ExportTraceServiceRequest message as defined in the OpenTelemetry Protocol specification.
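As a sketch, assuming the requests and opentelemetry-proto packages and a placeholder URL standing in for the actual Galileo OTLP endpoint, a direct export could look like this:

```python
# Sketch of a direct POST to the Galileo OTLP endpoint with a serialized
# ExportTraceServiceRequest. The URL below is a placeholder; use the value
# from the Endpoint section above.
import os
import requests
from opentelemetry.proto.collector.trace.v1.trace_service_pb2 import (
    ExportTraceServiceRequest,
)

OTLP_ENDPOINT = "https://<your-galileo-host>/otlp/v1/traces"  # placeholder

# Build (or load) the protobuf payload. In practice this usually comes from
# an OTLP exporter; an empty request is used here only to keep the sketch runnable.
request_message = ExportTraceServiceRequest()
payload = request_message.SerializeToString()

response = requests.post(
    OTLP_ENDPOINT,
    data=payload,
    headers={
        "Galileo-API-Key": os.environ["GALILEO_API_KEY"],
        "project": "my-project",       # or the project ID, per the headers table
        "logstream": "my-log-stream",  # or the Log stream ID
        "Content-Type": "application/x-protobuf",
    },
)
print(response.status_code, response.text)
```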
Response Codes
200 OK
The request was successfully processed. The response body contains an ExportTraceServiceResponse in JSON format. If some spans were rejected, the response includes partial success information. Even when the HTTP status is 200, check for partialSuccess in the response to determine if all spans were successfully processed.
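For example, a small helper (assuming the standard OTLP JSON field names partialSuccess, rejectedSpans, and errorMessage) can surface partially rejected exports:

```python
# Sketch: raise if a 200 response still reports rejected spans.
# Field names follow the OTLP JSON mapping of ExportTraceServiceResponse.
def check_partial_success(response_json: dict) -> None:
    partial = response_json.get("partialSuccess") or {}
    rejected = int(partial.get("rejectedSpans", 0))
    if rejected:
        raise RuntimeError(
            f"{rejected} spans rejected: {partial.get('errorMessage', 'unknown error')}"
        )

# Usage with the requests response from the previous sketch:
# check_partial_success(response.json())
```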
401 Unauthorized
The Galileo-API-Key header is missing or invalid.
404 Not Found
The specified project was not found and could not be created.
415 Unsupported Media Type
The Content-Type header is not application/x-protobuf.
422 Unprocessable Entity
The request could not be processed. Common reasons include no spans found, trace processing failure, or missing Log stream ID.
Additional Resources
- OpenTelemetry GenAI Conventions — Official OpenTelemetry semantic conventions for GenAI agent spans.
- OpenInference Conventions — OpenInference semantic conventions reference.
- OpenTelemetry Integration — Set up OpenTelemetry and OpenInference with Galileo.
- Log with OTel and LangGraph — Step-by-step guide using OpenTelemetry with LangGraph and OpenAI.