---
id: "action-document-edge-cases"
type: "action-item"
source_timestamps: ["11:25:00", "11:35:00"]
tags: ["prompt-engineering", "robustness"]
related: ["concept-methodology-body", "framework-skill-methodology", "claim-agents-lack-recovery"]
speakers: ["Nate B. Jones"]
outcome: "Prevents the agent from failing or hallucinating when it encounters scenarios outside the 'happy path'."
sources: ["s43-file-format-agreement"]
sourceVaultSlug: "s43-file-format-agreement"
originDay: 43
---
# Explicitly document edge cases

## Action

Within the skill methodology body, write down the **exceptions and nuances** that a human would otherwise handle via common sense.

## Why

Agents lack recovery loops (see [[claim-agents-lack-recovery]]). If you don't enumerate edge cases, the LLM will guess — often wrongly — and a downstream agent will compound the error.

## Outcome

Prevents the agent from failing or hallucinating when it encounters scenarios outside the *happy path*.

## How

- Maintain an **edge-case log** alongside the skill.
- Each time a real-world failure surfaces, codify it as a new edge case in the skill body.
- This is component #3 of the [[framework-skill-methodology]].
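The steps above can be sketched as a small log format kept next to the skill. The field names and the example scenario here are illustrative assumptions, not a prescribed schema:

```markdown
### Edge case: input file is empty
- Trigger: the source file exists but contains no content.
- Wrong guess to prevent: the agent invents placeholder content and continues.
- Correct behavior: stop and report the empty input instead of proceeding.
- Origin: real-world failure observed during a run; codified per this action item.
```

Each new real-world failure gets an entry like this, and the corrective rule is then folded into the skill body itself.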
