ℹ️ These docs are for the v2.0 version of Galileo. Documentation for the v1.0 version can be found here.
```bash
curl --request POST \
  --url https://api.galileo.ai/v2/scorers/{scorer_id}/version/code \
  --header 'Content-Type: multipart/form-data' \
  --header 'Galileo-API-Key: <api-key>' \
  --form file='@example-file' \
  --form 'validation_result=<string>'
```
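The same request can be made from Python. This is a minimal sketch using the `requests` library; the scorer ID, API key, and file path are placeholders, not values from this page:

```python
import requests

# Placeholder values; substitute your own scorer ID and API key.
scorer_id = "your-scorer-id"
api_key = "your-galileo-api-key"

url = f"https://api.galileo.ai/v2/scorers/{scorer_id}/version/code"

with open("example-file", "rb") as f:
    resp = requests.post(
        url,
        headers={"Galileo-API-Key": api_key},  # requests sets the multipart boundary itself
        files={"file": f},                       # multipart/form-data file part
        data={"validation_result": "<string>"},  # extra form field from the curl example
    )

resp.raise_for_status()
version = resp.json()  # the scorer version record shown below
print(version["id"], version["version"])
```

A successful call returns the new scorer version record: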
"id": "<string>",
"version": 123,
"scorer_id": "<string>",
"created_at": "2023-11-07T05:31:56Z",
"updated_at": "2023-11-07T05:31:56Z",
"generated_scorer": {
"id": "<string>",
"name": "<string>",
"chain_poll_template": {
"template": "<string>",
"metric_system_prompt": "<string>",
"metric_description": "<string>",
"value_field_name": "rating",
"explanation_field_name": "explanation",
"metric_few_shot_examples": [
{
"generation_prompt_and_response": "<string>",
"evaluating_response": "<string>"
}
],
"response_schema": {}
},
"created_by": "<string>",
"created_at": "2023-11-07T05:31:56Z",
"updated_at": "2023-11-07T05:31:56Z",
"scoreable_node_types": [
"chain"
],
"scorer_configuration": {
"model_alias": "gpt-4.1-mini",
"num_judges": 3,
"output_type": "boolean",
"scoreable_node_types": [
"<string>"
],
"cot_enabled": false,
"ground_truth": false
},
"instructions": "<string>",
"user_prompt": "<string>"
},
"registered_scorer": {
"id": "<string>",
"name": "<string>",
"score_type": "<string>",
"created_at": "2023-11-07T05:31:56Z",
"updated_at": "2023-11-07T05:31:56Z",
"created_by": "<string>",
"data_type": "unknown",
"scoreable_node_types": [
"<string>"
]
},
"finetuned_scorer": {
"id": "<string>",
"name": "<string>",
"lora_task_id": 123,
"prompt": "<string>",
"created_at": "2023-11-07T05:31:56Z",
"updated_at": "2023-11-07T05:31:56Z",
"created_by": "<string>",
"luna_input_type": "span",
"luna_output_type": "float",
"class_name_to_vocab_ix": {},
"executor": "action_completion_luna"
},
"model_name": "<string>",
"num_judges": 123,
"scoreable_node_types": [
"<string>"
],
"cot_enabled": true,
"output_type": "boolean",
"input_type": "basic",
"chain_poll_template": {
"template": "<string>",
"metric_system_prompt": "<string>",
"metric_description": "<string>",
"value_field_name": "rating",
"explanation_field_name": "explanation",
"metric_few_shot_examples": [
{
"generation_prompt_and_response": "<string>",
"evaluating_response": "<string>"
}
],
"response_schema": {}
},
"allowed_model": true
Successful Response
generated_scorer
- chain_poll_template: Template for a chainpoll metric prompt, containing all the information necessary to send a chainpoll prompt.
  - template: Chainpoll prompt template.
  - metric_system_prompt: System prompt for the metric.
  - metric_description: Description of what the metric should do.
  - value_field_name: Field name to look for in the chainpoll response, for the rating. Defaults to "rating".
  - explanation_field_name: Field name to look for in the chainpoll response, for the explanation. Defaults to "explanation".
  - response_schema: Response schema for the output.
- scoreable_node_types: One of chain, chat, llm, retriever, tool, agent, workflow, trace, session.
- scorer_configuration:
  - num_judges: Integer in the range 1 <= x <= 10.
  - output_type: Output type of the generated scorer. One of boolean, categorical, count, discrete, freeform, percentage, multilabel.
  - scoreable_node_types: Types of nodes that can be scored by this scorer.
  - cot_enabled: Whether chain of thought is enabled for this scorer.
  - ground_truth: Whether ground truth is enabled for this scorer.
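To keep a scorer_configuration payload within these constraints before sending it, a small client-side check can help. This is a sketch under the constraints listed above; the helper name is ours:

```python
VALID_OUTPUT_TYPES = {
    "boolean", "categorical", "count", "discrete",
    "freeform", "percentage", "multilabel",
}

def make_scorer_configuration(model_alias: str, num_judges: int,
                              output_type: str) -> dict:
    # num_judges is documented as 1 <= x <= 10.
    if not 1 <= num_judges <= 10:
        raise ValueError("num_judges must be between 1 and 10")
    if output_type not in VALID_OUTPUT_TYPES:
        raise ValueError(f"unknown output_type: {output_type}")
    return {
        "model_alias": model_alias,
        "num_judges": num_judges,
        "output_type": output_type,
        "cot_enabled": False,   # chain of thought off, as in the example payload
        "ground_truth": False,
    }
```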
registered_scorer
- data_type: One of unknown, text, label, floating_point, integer, timestamp, milli_seconds, boolean, uuid, percentage, dollars, array, template_label, thumb_rating_percentage, user_id, text_offsets, segments, hallucination_segments, thumb_rating, score_rating, star_rating, tags_rating, thumb_rating_aggregate, score_rating_aggregate, star_rating_aggregate, tags_rating_aggregate.
finetuned_scorer
- luna_input_type: One of span, trace_object, trace_input_output_only.
- luna_output_type: One of float, string, string_list.
- executor: Executor pipeline. Defaults to the finetuned scorer pipeline but can run custom Galileo score pipelines. One of action_completion_luna, action_advancement_luna, agentic_session_success, agentic_workflow_success, agent_efficiency, agent_flow, bleu, chunk_attribution_utilization_luna, chunk_attribution_utilization, completeness_luna, completeness, context_adherence, context_adherence_luna, context_relevance, context_relevance_luna, conversation_quality, correctness, ground_truth_adherence, input_pii, input_pii_gpt, input_sexist, input_sexist_luna, input_tone, input_tone_gpt, input_toxicity, input_toxicity_luna, instruction_adherence, output_pii, output_pii_gpt, output_sexist, output_sexist_luna, output_tone, output_tone_gpt, output_toxicity, output_toxicity_luna, prompt_injection, prompt_injection_luna, prompt_perplexity, rouge, tool_error_rate, tool_error_rate_luna, tool_selection_quality, tool_selection_quality_luna, uncertainty, user_intent_change.
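When writing typed client code against these enums, typing.Literal aliases are one lightweight option. A sketch; these aliases are ours, not part of any Galileo SDK:

```python
from typing import Literal

# Hypothetical aliases mirroring the enums above; a type checker such as
# mypy will flag any value outside these sets.
LunaInputType = Literal["span", "trace_object", "trace_input_output_only"]
LunaOutputType = Literal["float", "string", "string_list"]
OutputType = Literal[
    "boolean", "categorical", "count", "discrete",
    "freeform", "percentage", "multilabel",
]
```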
Top-level fields
- output_type: Enumeration of output types. One of boolean, categorical, count, discrete, freeform, percentage, multilabel.
- input_type: What type of input to use for model-based scorers (sessions_normalized, trace_io_only, etc.). One of basic, llm_spans, retriever_spans, sessions_normalized, sessions_trace_io_only, tool_spans, trace_input_only, trace_io_only, trace_normalized, trace_output_only, agent_spans, workflow_spans.
- chain_poll_template: Template for a chainpoll metric prompt, containing all the information necessary to send a chainpoll prompt.
  - template: Chainpoll prompt template.
  - metric_system_prompt: System prompt for the metric.
  - metric_description: Description of what the metric should do.
  - value_field_name: Field name to look for in the chainpoll response, for the rating. Defaults to "rating".
  - explanation_field_name: Field name to look for in the chainpoll response, for the explanation. Defaults to "explanation".
  - response_schema: Response schema for the output.
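Because value_field_name and explanation_field_name tell a client which keys to read out of a judge's JSON reply, parsing can stay generic. A minimal sketch, assuming the judge returns a flat JSON object; the function name is ours:

```python
import json

def parse_chainpoll_reply(reply: str, template: dict) -> tuple[object, str]:
    """Pull the rating and explanation out of a judge's JSON reply,
    using the field names declared in the chain_poll_template."""
    payload = json.loads(reply)
    value = payload[template.get("value_field_name", "rating")]
    explanation = payload[template.get("explanation_field_name", "explanation")]
    return value, explanation

# Example with the documented defaults:
value, why = parse_chainpoll_reply(
    '{"rating": true, "explanation": "The response follows the context."}',
    {"value_field_name": "rating", "explanation_field_name": "explanation"},
)
```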