ℹ️ These docs are for the v2.0 version of Galileo. Documentation for the v1.0 version can be found here.

Extends the dataset content.

Authorizations

Galileo-API-Key: your API key, sent in the Galileo-API-Key request header.

Body

application/json

prompt_settings: Request for a synthetic dataset run job. Only the model is used.
prompt
instructions
examples
source_dataset: Configuration for dataset examples in synthetic data generation.
data_types
count

Response

Successful Response
Response for synthetic dataset extension requests.

dataset_id

Example request
curl --request POST \
  --url https://api.galileo.ai/v2/datasets/extend \
  --header 'Content-Type: application/json' \
  --header 'Galileo-API-Key: <api-key>' \
  --data '{
    "prompt_settings": {
      "logprobs": true,
      "top_logprobs": 5,
      "echo": false,
      "n": 1,
      "reasoning_effort": "medium",
      "verbosity": "medium",
      "deployment_name": "<string>",
      "model_alias": "GPT-4o",
      "temperature": 1,
      "max_tokens": 1024,
      "stop_sequences": [
        "<string>"
      ],
      "top_p": 1,
      "top_k": 40,
      "frequency_penalty": 0,
      "presence_penalty": 0,
      "tools": [
        {}
      ],
      "tool_choice": "<string>",
      "response_format": {},
      "input": "<string>",
      "instructions": "<string>",
      "known_models": [
        {
          "name": "<string>",
          "alias": "<string>",
          "integration": "openai",
          "user_role": "<string>",
          "assistant_role": "<string>",
          "system_supported": false,
          "alternative_names": [
            "<string>"
          ],
          "input_token_limit": 123,
          "output_token_limit": 123,
          "token_limit": 123,
          "output_price": 0,
          "input_price": 0,
          "cost_by": "tokens",
          "is_chat": false,
          "provides_log_probs": false,
          "reasoning_supported": false,
          "formatting_tokens": 0,
          "response_prefix_tokens": 0,
          "api_version": "<string>",
          "params_map": {
            "model": "<string>",
            "temperature": "<string>",
            "max_tokens": "<string>",
            "stop_sequences": "<string>",
            "top_p": "<string>",
            "top_k": "<string>",
            "frequency_penalty": "<string>",
            "presence_penalty": "<string>",
            "echo": "<string>",
            "logprobs": "<string>",
            "top_logprobs": "<string>",
            "n": "<string>",
            "api_version": "<string>",
            "tools": "<string>",
            "tool_choice": "<string>",
            "response_format": "<string>",
            "reasoning_effort": "<string>",
            "verbosity": "<string>",
            "input": "<string>",
            "instructions": "<string>",
            "deployment_name": "<string>"
          },
          "output_map": {
            "response": "<string>",
            "token_count": "<string>",
            "input_token_count": "<string>",
            "output_token_count": "<string>",
            "completion_reason": "<string>"
          },
          "input_map": {
            "prompt": "<string>",
            "prefix": "",
            "suffix": ""
          }
        }
      ]
    },
    "prompt": "<string>",
    "instructions": "<string>",
    "examples": [
      "<string>"
    ],
    "source_dataset": {
      "dataset_id": "<string>",
      "dataset_version_index": 123,
      "row_ids": [
        "<string>"
      ]
    },
    "data_types": [
      "General Query"
    ],
    "count": 10
  }'
Example response

{
  "dataset_id": "<string>"
}
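The same request can be issued from Python. This is a minimal sketch using the `requests` library, not an official SDK call: the endpoint URL, headers, and field names come from the example above, while the choice of fields to omit and the `GALILEO_API_KEY` environment variable name are assumptions.

import os

import requests

# Minimal request body: prompt_settings is reduced to the model alias, since
# the schema notes that only the model is used from it. Which of the other
# fields are required is an assumption, not stated on this page.
payload = {
    "prompt_settings": {"model_alias": "GPT-4o"},
    "prompt": "Generate questions new users ask about account billing.",
    "instructions": "Keep each generated question under 25 words.",
    "examples": ["How do I update my payment method?"],
    "data_types": ["General Query"],
    "count": 10,
}

response = requests.post(
    "https://api.galileo.ai/v2/datasets/extend",
    headers={
        "Content-Type": "application/json",
        "Galileo-API-Key": os.environ["GALILEO_API_KEY"],  # hypothetical env var name
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()

# A successful response carries the id of the extended dataset.
print(response.json()["dataset_id"])

Passing json=payload lets requests serialize the body; the explicit Content-Type header is kept only to mirror the curl example.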
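To grow new rows from existing data rather than from scratch, the body's source_dataset object appears to identify an existing dataset, a version, and optionally specific rows whose contents seed the generation. A sketch under the same assumptions as above; the ids are placeholders, and treating the remaining body fields as optional is an assumption.

import os

import requests

# Sketch: extend selected rows of an existing dataset. dataset_id, the
# version index, and row_ids below are placeholder values, not real ids.
payload = {
    "prompt_settings": {"model_alias": "GPT-4o"},
    "source_dataset": {
        "dataset_id": "<existing-dataset-id>",
        "dataset_version_index": 1,
        "row_ids": ["<row-id-1>", "<row-id-2>"],
    },
    "data_types": ["General Query"],
    "count": 25,
}

response = requests.post(
    "https://api.galileo.ai/v2/datasets/extend",
    headers={
        "Content-Type": "application/json",
        "Galileo-API-Key": os.environ["GALILEO_API_KEY"],  # hypothetical env var name
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["dataset_id"])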