---
id: "contrarian-linear-steps-fail"
type: "contrarian-insight"
source_timestamps: ["10:55:00", "11:05:00"]
tags: ["prompt-engineering"]
related: ["concept-methodology-body", "claim-linear-skills-brittle", "framework-skill-methodology"]
challenges: "The common advice to give LLMs highly specific, linear 'step 1, step 2' procedures. This actually limits the model's ability to reason through edge cases."
sources: ["s43-file-format-agreement"]
sourceVaultSlug: "s43-file-format-agreement"
originDay: 43
---
# Step-by-step instructions make skills brittle

## Contrarian Position

Writing LLM instructions as **linear step-by-step procedures** makes the resulting skill brittle, not robust.

## What It Challenges

The widespread *prompt engineering* advice that says: *"Give the LLM very specific numbered steps to follow."*

## Speaker's Argument

Linear procedures cover only the happy path. The moment the input deviates, the LLM has no framework to reason from and falls back on hallucination. Replacing rigid steps with **frameworks, principles, and quality criteria** gives the model something to generalize from. See [[claim-linear-skills-brittle]] and [[concept-methodology-body]].
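
A hypothetical contrast makes the argument concrete. The task, file, and wording below are invented for illustration; they are not from the talk:

```text
# Brittle: linear steps (happy path only)
1. Open the attached CSV.
2. Read the "amount" column.
3. Sum the values and report the total.

# Robust: framework, principles, quality criteria
Goal: report the total of the monetary amounts in the attached file.
Principles:
- Check that the file is well-formed before extracting; if it is not, say what is wrong.
- Treat missing, negative, or non-numeric values as cases to flag, never to guess.
Quality bar: the total must be reproducible from the rows you cite.
```

When the column is renamed or a row is malformed, the first version leaves the model with nothing to fall back on; the second gives it criteria to reason against.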

## Steelman of the Conventional View

For *narrow, deterministic* tasks, linear steps are predictable, auditable, and easy to debug: every case can be enumerated up front, so there is little for the model to reason through.

## Reconciliation

The two views reconcile into a single decision rule: for genuinely deterministic logic, use a **script** (see [[concept-hard-wiring-vs-skills]]); for genuinely judgment-heavy tasks, where you would otherwise have written linear steps, switch to a reasoning-first methodology per the [[framework-skill-methodology]].
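
A minimal sketch of the script half of that rule, assuming a CSV-validation task like the hypothetical contrast above (the file layout, the `amount` column, and the function name are all invented for illustration):

```python
import csv
import sys

def validate_amounts(path: str) -> list[str]:
    """Deterministic checks belong in code, not in a prompt.

    Returns human-readable problems; an empty list means the file passed.
    Hypothetical example: the CSV layout and 'amount' column are invented.
    """
    problems: list[str] = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames is None or "amount" not in reader.fieldnames:
            return [f"{path}: missing required 'amount' column"]
        for rownum, row in enumerate(reader, start=1):
            try:
                float(row["amount"])
            except (TypeError, ValueError):
                # Missing or non-numeric values are flagged, never guessed at.
                problems.append(f"{path} row {rownum}: bad amount {row['amount']!r}")
    return problems

if __name__ == "__main__":
    issues = validate_amounts(sys.argv[1])
    print("\n".join(issues) or "OK")
```

The model never sees these checks as instructions; it sees their output. The judgment-heavy remainder, deciding what the flagged rows mean and how to summarize them, stays with the model, guided by principles rather than steps.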
