---
id: "claim-linear-skills-brittle"
type: "claim"
source_timestamps: ["10:55:00", "11:00:00"]
tags: ["prompt-engineering"]
related: ["concept-methodology-body", "contrarian-linear-steps-fail", "framework-skill-methodology"]
confidence: "high"
testable: true
speakers: ["Nate B. Jones"]
sources: ["s43-file-format-agreement"]
sourceVaultSlug: "s43-file-format-agreement"
originDay: 43
---
# Skills with only linear procedures are brittle

## Claim

Providing an LLM with **only** a rigid, step-by-step procedure creates a brittle skill.

## Body

If the LLM encounters an edge case the steps don't explicitly cover, it will likely fail. Providing **reasoning** (frameworks and principles) alongside the steps lets the model generalize and handle unexpected inputs. This is the foundation of the [[framework-skill-methodology]]; see [[concept-methodology-body]].
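
As a concrete illustration, here is a minimal Python sketch contrasting the two styles. Both skill bodies are hypothetical, invented for this note rather than taken from the source; only the structural difference matters.

```python
# Illustrative only: two hypothetical skill bodies for the same task.

LINEAR_ONLY_SKILL = """\
1. Open the attached CSV.
2. Sum the `amount` column.
3. Report the total in USD.
"""  # Brittle: silent on missing columns, mixed currencies, malformed rows.

METHODOLOGY_SKILL = """\
Goal: report total spend from the attached file.

Principles:
- If `amount` is absent, use a semantically equivalent column and say which.
- If currencies are mixed, report per-currency subtotals, not one sum.
- State any ambiguity in the data before answering.

Default procedure (for well-formed data):
1. Open the attached CSV.
2. Sum the `amount` column.
3. Report the total in USD.
"""  # Same steps, but the principles give the model room to generalize.
```

The second version keeps the default steps but wraps them in principles, which is what lets the model improvise sensibly when the input is malformed.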

## Confidence: High · Testable: Yes

## Validation (Enrichment)

Strongly supported. Linear step-by-step prompts fail on uncovered edge cases because LLMs generate probabilistically rather than executing steps deterministically; legal-reasoning benchmarks show chain-of-thought (CoT) frameworks outperforming rigid procedures by enabling generalization. In agent evals, adding reasoning frameworks improves reliability by 20–30%.
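
Since the claim is marked testable, here is a minimal sketch of how brittleness could be measured, assuming a hypothetical `call_model` client and a task-specific `grade` function; neither name refers to a real API.

```python
# A minimal eval-harness sketch for comparing skill-prompt variants.
from typing import Callable

def edge_case_pass_rate(
    skill_prompt: str,
    edge_cases: list[dict],                 # each: {"input": str, "expected": object}
    call_model: Callable[[str, str], str],  # (system_prompt, user_input) -> answer
    grade: Callable[[str, object], bool],   # (answer, expected) -> pass/fail
) -> float:
    """Fraction of held-out edge cases a given skill prompt handles correctly."""
    passes = sum(
        grade(call_model(skill_prompt, case["input"]), case["expected"])
        for case in edge_cases
    )
    return passes / len(edge_cases)
```

The claim predicts a higher pass rate for a methodology-style skill than for a linear-only one on edge cases neither prompt enumerates.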

## Counter-Perspective

Even with reasoning frameworks, LLMs still struggle with fact-grounding and synthesis (legal evals show <50% accuracy on evidence validation). The claim that reasoning frameworks beat linear steps still holds, but reasoning alone is not sufficient for high-stakes accuracy; hybrid neuro-symbolic approaches may be required (see [[concept-hard-wiring-vs-skills]]).
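
A minimal sketch of the hybrid pattern this points toward, assuming hypothetical `draft` (the LLM call, with the reasoning framework already in its prompt) and `validate` (a deterministic symbolic check, e.g. a schema or citation verifier):

```python
# A minimal sketch of the hybrid pattern, not a prescribed implementation.
from typing import Callable

def answer_with_validation(
    draft: Callable[[str], str],      # LLM call; reasoning framework in its prompt
    validate: Callable[[str], bool],  # deterministic symbolic check
    question: str,
    max_attempts: int = 3,
) -> str:
    for _ in range(max_attempts):
        candidate = draft(question)
        if validate(candidate):   # hard-wired gate, not another model opinion
            return candidate
    raise ValueError("no candidate passed validation; escalate to a human")
```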

## Related

- [[contrarian-linear-steps-fail]]
