Grow the Verifier, Not the Prompt: Run Production with 10 Golden Cases

Source: DEV Community
The moment you put an AI agent into a real workflow, two realities show up fast:

- Models and prompts wobble (updates, infra, tools, inputs)
- Most failures are "missing grounds," not "rule violations."

In the previous posts, we fixed the division of labor:

- LLM generates a proposal (a plan)
- Verifier deterministically returns ACCEPT / REJECT / DEGRADE, and may normalize the plan
- Executor runs Typed Actions only (dry-run → approval → production)

Now the real question: what do you "grow" so the system doesn't collapse in ops?

My answer is simple: don't start by tuning the prompt. Start by freezing 10 golden cases for the verifier.

0) The premise (re-stated)

LLMs are probabilistic. Output variance is not evil. What's evil is executing variance. Also: LLM-generated "explanations" (including chain-of-thought-like text) are not audit-grade grounds.

What you should pin is:

- input schema (what counts as admissible grounds)
- policy_id / policy_version
- deterministic rule-evaluation logs
- evidence / trac
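The pinning idea above can be sketched in a few lines. This is a minimal illustration, not the post's actual implementation: the names (`Plan`, `Verdict`, `verify`, `GOLDEN_CASES`), the refund action, and the amount-cap rule are all hypothetical. The point is the shape: a deterministic rule evaluation that returns a verdict plus an audit log pinned to a policy version, and a frozen set of golden cases the verifier must keep passing.

```python
# Hypothetical sketch of a deterministic verifier with frozen golden cases.
# Policy identifiers, the "refund" Typed Action, and the amount-cap rule are
# invented for illustration only.
from dataclasses import dataclass
from enum import Enum

POLICY_ID = "refund-policy"        # hypothetical policy identifier
POLICY_VERSION = "2024-06-01"      # pinned alongside every verdict

class Verdict(Enum):
    ACCEPT = "ACCEPT"
    REJECT = "REJECT"
    DEGRADE = "DEGRADE"  # normalize the plan down to a safer variant

@dataclass(frozen=True)
class Plan:
    action: str        # Typed Action name proposed by the LLM
    amount: float
    evidence: tuple    # admissible grounds attached to the proposal

def verify(plan: Plan) -> tuple[Verdict, dict]:
    """Deterministic rule evaluation: same plan in, same verdict and log out."""
    log = {"policy_id": POLICY_ID, "policy_version": POLICY_VERSION,
           "evidence": list(plan.evidence)}
    if not plan.evidence:  # the common failure: missing grounds, not a rule violation
        return Verdict.REJECT, {**log, "rule": "evidence-required"}
    if plan.action == "refund" and plan.amount > 100:
        return Verdict.DEGRADE, {**log, "rule": "amount-cap", "normalized_amount": 100}
    return Verdict.ACCEPT, {**log, "rule": "default-allow"}

# Golden cases: frozen (input, expected verdict) pairs. Grow this list,
# not the prompt, when behavior in production surprises you.
GOLDEN_CASES = [
    (Plan("refund", 30.0, ("ticket#123",)), Verdict.ACCEPT),
    (Plan("refund", 500.0, ("ticket#456",)), Verdict.DEGRADE),
    (Plan("refund", 30.0, ()), Verdict.REJECT),
]

for plan, expected in GOLDEN_CASES:
    verdict, log = verify(plan)
    assert verdict is expected, (plan, verdict, log)
```

Because `verify` is a pure function over a typed input, the golden cases double as a regression gate: any model or prompt change that alters a verdict fails loudly before the Executor ever runs.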