---
id: "action-delete-procedural-prompts"
type: "action-item"
source_timestamps: ["00:05:50", "00:06:20"]
tags: ["prompt-engineering", "optimization"]
related: ["concept-outcome-driven-prompting", "claim-procedural-prompting-degrades"]
speakers: ["Nate B. Jones"]
action: "Audit existing prompts and delete procedural instructions (the 'how')."
outcome: "Reduced token consumption and improved model performance."
sources: ["s44-claude-mythos"]
sourceVaultSlug: "s44-claude-mythos"
originDay: 44
---
# Audit and Delete Procedural Prompts

## Action

**Audit existing prompts and aggressively delete procedural instructions (the 'how').**

## Why

Per [[claim-procedural-prompting-degrades]] and the [[concept-bitter-lesson-llms|Bitter Lesson]], any prompt that dictates *how* a model should accomplish a task — *"First do X, then do Y"* — bottlenecks frontier-model reasoning.

## How to execute

1. **Inventory** every prompt and system instruction in your stack.
2. **Classify** each as either:
   - *Outcome / constraint* — keep
   - *Procedural / 'how'* — flag for deletion
3. **Replace** procedural sections with concise statements of:
   - The desired outcome
   - Strict constraints (policies, formats, edge cases)
   - Available tools
4. **A/B test** old vs. new prompts on quality, latency, and token cost.
5. **Iterate** — see [[concept-outcome-driven-prompting]] for paradigm details.
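The rewrite in step 3 and the comparison in step 4 can be sketched in miniature. The two prompts below are hypothetical examples (not from the source), and the whitespace word count is only a crude stand-in for real token counts; in practice, measure with your model's actual tokenizer.

```python
# Illustrative only: a procedural prompt vs. an outcome-driven rewrite.
# Both prompt strings are invented examples for this sketch.
procedural = (
    "First, read the ticket. Then extract the customer name. "
    "Next, check the order table. After that, draft a reply using "
    "the template. Finally, sign off with the support alias."
)

outcome = (
    "Resolve the ticket. Constraints: reply under 120 words, "
    "cite the order ID, sign as 'Support'."
)

def rough_tokens(text: str) -> int:
    """Crude proxy for token count (word count); swap in a real tokenizer."""
    return len(text.split())

saved = 1 - rough_tokens(outcome) / rough_tokens(procedural)
print(f"procedural: ~{rough_tokens(procedural)} tokens")
print(f"outcome:    ~{rough_tokens(outcome)} tokens")
print(f"reduction:  {saved:.0%}")
```

A real A/B test would pair this size comparison with quality and latency metrics on held-out tasks, since token savings alone don't establish the performance claim.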

## Expected outcome

- Reduced token consumption (often 50–80%)
- Improved model performance — model finds more efficient paths
- Lighter prompt-maintenance burden
- Readiness for [[concept-claude-mythos|Mythos]]-class models when they arrive

## Related

- Concept: [[concept-outcome-driven-prompting]]
- Framework step: [[framework-mythos-readiness]] step 2 ("Cut Complexity")
- Quote: [[quote-let-go]]
