Datadog Labs/dd-llmo-eval-bootstrap — Agent Skills | officialskills.sh

dd-llmo-eval-bootstrap

Tags: community, testing

Analyzes production LLM traces from Datadog and generates ready-to-use evaluator code using the Datadog Evals SDK.

Setup & Installation

npx skills add https://github.com/datadog-labs/agent-skills --skill dd-llmo-eval-bootstrap
Or paste the link below and ask your coding assistant to install it:
https://github.com/datadog-labs/agent-skills/tree/main/dd-llmo/eval-bootstrap

What This Skill Does

The skill samples real production traffic from Datadog, identifies the quality dimensions worth measuring, and outputs BaseEvaluator subclasses or LLMJudge instances you can plug directly into LLM Experiments.

Writing evaluators from scratch means guessing which quality dimensions matter. This skill instead samples actual production traces and proposes evals grounded in evidence, so you start from real behavior rather than assumptions.
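To make the output shape concrete, here is a minimal sketch of what a generated evaluator might look like. Note the `BaseEvaluator` class below is a local stand-in, and the `evaluate` signature, field names, and the `CitationFormatEvaluator` example are all assumptions for illustration; the real Datadog Evals SDK interface may differ.

```python
import re

# Stand-in for the SDK base class; assumed interface, not the real API.
class BaseEvaluator:
    name: str = "base"

    def evaluate(self, input_text: str, output_text: str) -> dict:
        raise NotImplementedError


class CitationFormatEvaluator(BaseEvaluator):
    """Checks that a RAG answer cites at least one source, e.g. '[1]'.

    A format check like this is the kind of eval the skill might propose
    after sampling traces from a RAG app.
    """

    name = "citation_format"

    def evaluate(self, input_text: str, output_text: str) -> dict:
        has_citation = bool(re.search(r"\[\d+\]", output_text))
        return {
            "label": "pass" if has_citation else "fail",
            "score": 1.0 if has_citation else 0.0,
        }


evaluator = CitationFormatEvaluator()
result = evaluator.evaluate(
    "What is the refund policy?",
    "Refunds are issued within 30 days [1].",
)
print(result)  # → {'label': 'pass', 'score': 1.0}
```

The generated evaluator would subclass the SDK's real base class instead of the stub, so it can be registered with an LLM Experiments run like any hand-written evaluator.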

When to use it

  • Generating evaluators for a RAG app after noticing answer quality drift
  • Building a test suite from production traces of a customer support chatbot
  • Creating LLM judge prompts grounded in real failure patterns from an RCA report
  • Auditing an agent app for scope violations using trace-based safety evals
  • Bootstrapping format and correctness checks for a new ml_app with no existing coverage