---
id: "action-unstructured-input"
type: "action-item"
source_timestamps: ["00:05:41", "00:06:50"]
tags: ["prompting", "efficiency"]
related: ["concept-contribution-badge", "concept-progressive-intent-discovery", "claim-premature-structure-fails"]
speakers: ["Nate B. Jones"]
action: "Feed raw, unstructured thoughts to modern LLMs instead of spending hours formatting a perfect specification document."
outcome: "Saves hours of human pre-work and leverages the LLM's ability to discover intent progressively."
sources: ["s25-builders-identity-shift"]
sourceVaultSlug: "s25-builders-identity-shift"
originDay: 25
---
# Feed Unstructured Input

## Action
Feed raw, unstructured thoughts to modern LLMs instead of spending hours formatting a perfect specification document.

## Why
Stop trying to do the AI's job before you even talk to it. Modern frontier models exhibit [[concept-progressive-intent-discovery]] — they parse messy input and iteratively discover what you actually want.

Clinging to elaborate pre-structuring is the [[concept-contribution-badge]] in action: a legacy behavior driven by ego, not productivity. See [[claim-premature-structure-fails]].

## Concrete Steps
1. **Suppress the urge to pre-format.** Notice when you're opening a Google Doc to draft a "proper spec" — that's the contribution badge talking.
2. **Bring raw context directly to the model.** Half-baked ideas, conflicting goals, partial information — all welcome.
3. **Let the model ask questions.** Use its progressive intent discovery to refine the problem interactively.
4. **Iterate.** The first exchange is a probe, not a spec.
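The loop above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: `ask_model` is a hypothetical stand-in for whatever chat-completion API you use (OpenAI, Anthropic, a local model), stubbed here so the sketch runs standalone.

```python
def ask_model(messages):
    """Hypothetical LLM call, stubbed for illustration: asks one
    clarifying question, then drafts once the gap is filled."""
    user_turns = [m for m in messages if m["role"] == "user"]
    if len(user_turns) == 1:
        return "CLARIFY: Who owns the schema after migration?"
    return "DRAFT: event-based billing spec; schema owned by " + user_turns[-1]["content"]

# Step 2: bring raw context directly to the model -- no pre-formatting.
raw_notes = (
    "half-baked idea: migrate billing to events? "
    "conflicting goal: ship fast vs. keep audit trail. "
    "missing: who owns the schema."
)
messages = [{"role": "user", "content": raw_notes}]

# Steps 3-4: let the model ask questions, supply context on demand, iterate.
reply = ask_model(messages)
while reply.startswith("CLARIFY:"):
    answer = "the platform team"  # context you provide only when asked
    messages += [
        {"role": "assistant", "content": reply},
        {"role": "user", "content": answer},
    ]
    reply = ask_model(messages)

print(reply)  # a usable draft emerges with zero upfront spec-writing
```

The point of the structure is that the first exchange is a probe: the model, not the human, decides which gaps matter enough to ask about.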

## When to Override This Default
The enrichment overlay flags an exception: for brittle production pipelines or weaker models, more upfront structure may still help. Treat unstructured input as the *default* with frontier models — not a universal law.

## Outcome
Saves hours of human pre-work and leverages the LLM's ability to discover intent progressively.
