---
id: "concept-intent-engineering"
type: "concept"
source_timestamps: ["00:00:48", "00:06:17", "00:15:00"]
tags: ["ai-alignment", "organizational-design", "core-thesis"]
related: ["concept-context-engineering", "concept-prompt-engineering", "concept-machine-readable-okrs", "concept-unified-context-infrastructure", "claim-klarna-intent-failure", "claim-copilot-intent-failure", "claim-intent-race", "framework-intent-gap-layers"]
definition: "The discipline of translating implicit organizational goals, values, and tradeoffs into machine-readable, actionable parameters for autonomous AI agents."
sources: ["s24-prompt-engineering-dead"]
sourceVaultSlug: "s24-prompt-engineering-dead"
originDay: 24
---
# Intent Engineering

## Definition

**Intent Engineering** is the discipline of making organizational purpose — goals, values, tradeoffs, and decision boundaries — *machine-readable and machine-actionable*. It is the central thesis of this source.

## The Three-Discipline Hierarchy

Nate B. Jones (see [[entity-nate-b-jones]]) positions Intent Engineering as the third and most strategic discipline in the evolution of human-to-AI interface design:

| Discipline | Tells the AI… | Era | Scope |
|---|---|---|---|
| [[concept-prompt-engineering]] | *How* to format an output | 2022–2024 | Individual, synchronous |
| [[concept-context-engineering]] | *What* information to base it on | 2024–2025 | Pipeline / data architecture |
| **Intent Engineering** | What to *want* | 2026+ | Organizational / strategic |

While prompting is a personal skill and context engineering is an infrastructure problem, Intent Engineering is fundamentally an **organizational design** problem.

## Why It Matters

Without explicit intent encoding, highly capable autonomous systems will optimize for *easily measurable but strategically incorrect* metrics — pure resolution speed, raw cost savings, ticket throughput — and miss the nuanced tradeoffs that human employees absorb implicitly through company culture. The flagship cautionary tale is [[claim-klarna-intent-failure]]: an AI customer service deployment that was a runaway *metric* success and a strategic failure.

## Core Mechanics

Intent Engineering replaces prose-in-a-system-prompt with **structured, actionable parameters** encoded directly into infrastructure:

- Explicit tradeoff hierarchies (e.g., *customer satisfaction* outranks *resolution time* in scenarios X and Y).
- [[concept-machine-readable-okrs]] — translated objectives that agents can act on.
- Delegation frameworks tied to autonomy levels (see [[framework-deepmind-autonomy-levels]]).
- Resolution rules for when policy and signal disagree.
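To make "structured, actionable parameters" concrete, the items above could in principle be encoded as plain data an agent consults at decision time. This is a minimal hypothetical sketch, not an implementation from the source — every name (`IntentSpec`, `tradeoff_hierarchy`, `resolve`) is illustrative:

```python
from dataclasses import dataclass

# Hypothetical sketch of intent-as-data: an explicit tradeoff hierarchy
# plus a resolution rule an autonomous agent can apply when signals conflict.
# All identifiers are illustrative, not drawn from the source.

@dataclass(frozen=True)
class IntentSpec:
    # Metrics listed from highest to lowest organizational priority.
    tradeoff_hierarchy: tuple = ("customer_satisfaction", "resolution_time", "cost")
    # Ceiling on autonomy: actions above this level require human escalation.
    max_autonomy_level: int = 2

    def rank(self, metric: str) -> int:
        """Lower rank means higher priority; unknown metrics rank last."""
        try:
            return self.tradeoff_hierarchy.index(metric)
        except ValueError:
            return len(self.tradeoff_hierarchy)

    def resolve(self, metric_a: str, metric_b: str) -> str:
        """When two metrics pull in opposite directions, favor the higher-priority one."""
        return metric_a if self.rank(metric_a) <= self.rank(metric_b) else metric_b

spec = IntentSpec()
# An agent tempted to optimize raw speed is overruled by encoded intent:
spec.resolve("resolution_time", "customer_satisfaction")  # → "customer_satisfaction"
```

The point of the sketch is only that the tradeoff lives in data rather than in prose buried in a system prompt, so it can be versioned, audited, and enforced by infrastructure.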

## Position in the Stack

Intent Engineering is Layer 3 of the [[framework-intent-gap-layers]], sitting on top of [[concept-unified-context-infrastructure]] (Layer 1) and the coherent AI worker toolkit (Layer 2). Skipping the lower layers makes Layer 3 impossible; skipping Layer 3 produces the Klarna and Copilot pathologies.

## Enrichment Note

The term **"Intent Engineering"** is not yet established in mainstream enterprise-AI literature (no canonical citations were found in adjacent research). Counter-perspectives argue the dominant root cause of AI pilot failure is data fragmentation, talent gaps, or change management — not an abstract intent gap. See [[claim-intent-race]] for the contested framing.

## Related Action Items

- [[action-translate-okrs]] — operational entry point for intent encoding.
- [[action-hire-workflow-architect]] — organizational owner for this layer.
- [[action-build-mcp-infrastructure]] — prerequisite plumbing.

## Related across days
- [[concept-spec-quality-bottleneck]]
- [[concept-specification-vs-execution]]
- [[claim-bottleneck-shift]]
- [[concept-can-it-carry]]
- [[concept-machine-readable-okrs]]
- [[arc-spec-and-intent-bottleneck]]
