Metrics Objects

class Metrics(BaseClientModel)

create_custom_llm_metric

def create_custom_llm_metric(
    name: str,
    user_prompt: str,
    node_level: StepType = StepType.llm,
    cot_enabled: bool = True,
    model_name: str = "gpt-4.1-mini",
    num_judges: int = 3,
    description: str = "",
    tags: list[str] = [],
    output_type: OutputTypeEnum = OutputTypeEnum.BOOLEAN
) -> BaseScorerVersionResponse
Create a custom LLM metric. Arguments:
  • name (str): Name of the metric.
  • user_prompt (str): User prompt for the metric.
  • node_level (StepType): Node level for the metric.
  • cot_enabled (bool): Whether chain-of-thought is enabled.
  • model_name (str): Model name to use.
  • num_judges (int): Number of judges for the metric.
  • description (str): Description of the metric.
  • tags (list[str]): Tags associated with the metric.
  • output_type (OutputTypeEnum): Output type for the metric.
Returns: BaseScorerVersionResponse: Response containing the created metric details.
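To make the parameter shapes and defaults above concrete, here is a minimal sketch using stub stand-ins for the SDK's `StepType`, `OutputTypeEnum`, and the metric's keyword arguments. The stub classes and the `CustomMetricSpec` dataclass are illustrative assumptions, not part of the SDK; a real call would go through `Metrics.create_custom_llm_metric` with the same keywords.

```python
from dataclasses import dataclass, field
from enum import Enum

# Stub enums standing in for the SDK's StepType and OutputTypeEnum
# (assumed shapes, for illustration only).
class StepType(Enum):
    llm = "llm"

class OutputTypeEnum(Enum):
    BOOLEAN = "boolean"

# Hypothetical container mirroring create_custom_llm_metric's keyword
# arguments and their documented defaults.
@dataclass
class CustomMetricSpec:
    name: str
    user_prompt: str
    node_level: StepType = StepType.llm
    cot_enabled: bool = True
    model_name: str = "gpt-4.1-mini"
    num_judges: int = 3
    description: str = ""
    tags: list[str] = field(default_factory=list)
    output_type: OutputTypeEnum = OutputTypeEnum.BOOLEAN

# Only name and user_prompt are required; everything else falls back
# to the defaults shown in the signature above.
spec = CustomMetricSpec(
    name="helpfulness",
    user_prompt="Does the response fully answer the user's question?",
    num_judges=5,
)
```

Note that the real signature uses a mutable default (`tags: list[str] = []`); the sketch uses `field(default_factory=list)` because dataclasses require it, and callers should in any case pass a fresh list rather than mutate a shared default.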
