- **Agent Discovery** — The missing infrastructure layer that would allow autonomous AI agents to find, vet, and transact with services across the internet. See [[concept-agent-discovery]].
- **Agent-Ready Business** — A business optimized for machine interaction (Fast, Easy, MCP-ready) rather than human marketing funnels. See [[concept-agent-ready-business]].
- **Agentic Economy** — Emerging paradigm where autonomous AI agents conduct transactions and workflows on behalf of humans. See [[concept-agentic-economy]].
- **AI Activity vs. AI Fluency** — Activity is high individual ChatGPT usage; fluency is shared, measurable organizational workflows. ~30% gains vs ~300% gains. See [[concept-ai-fluency-vs-activity]].
- **AI Workflow Architect** — Proposed organizational role spanning engineering, operations, and strategy; owns Layer 3 of the Intent Gap. See [[action-hire-workflow-architect]].
- **Archaeological Programming** — Reverse-engineering opaque AI-generated code; coined by [[entity-addy-osmani]]. See [[concept-archaeological-programming]].
- **Availability as a Quality Metric** — Uptime/reliability is a first-class quality dimension; an unreachable model is worthless. See [[concept-availability-as-quality]].
- **Build Layer Collapse** — Software production cost approaching zero as AI app builders commoditize creation. See [[concept-build-layer-collapse]].
- **Build Mode / Reflect Mode** — Discipline of separating high-velocity AI execution from analytical review. See [[concept-temporal-separation]].
- **Can It Carry?** — The replacement evaluation question for frontier AI: can the model sustain context, manage risk, and execute end-to-end? See [[concept-can-it-carry]].
- **Civil Engineering (in this context)** — Explicit, rule-based programming/prompting; the counterpart to QWAN.
- **Cognitive Architecture** — Systems thinking applied to orchestrating multiple AI agents; the new bottleneck per [[claim-bottleneck-shift]].
- **Confidently Incorrect** — Failure mode of agents producing wrong output that looks like success. See [[quote-managing-agents]].
- **Context Engineering** — Architecting the full information state an AI system operates within (vs. crafting individual prompts). See [[concept-context-engineering]].
- **Contribution Badge** — The legacy psychological need to over-structure prompts to feel ownership. See [[concept-contribution-badge]].
- **Curation Scarcity** — When supply is infinite, curation becomes the scarcest resource. See [[claim-curation-scarcest-resource]].
- **Dingo (Private Bench test)** — Executive-judgment test in [[framework-private-bench-suite]]: produce a 23-deliverable launch packet for a fictional startup.
- **Engineering Manager Mindset** — Operational identity shift from IC to manager of AI agent teams. See [[concept-engineering-manager-mindset]].
- **Experiential Debt** — Builder's loss of mental model of their own product when AI bypasses creation friction. See [[concept-experiential-debt]].
- **Field of Dreams Fallacy** — "If you build it, they will come" — refuted by [[contrarian-building-is-not-the-bottleneck]].
- **Fingertip Feel** — Intuitive ability to descend from architectural altitude to specific code when turbulence hits. Part of [[concept-strategic-deep-diving]].
- **Five Levels of Agent Autonomy** — Observer / Consultant / Collaborator / Approver / Operator. See [[framework-deepmind-autonomy-levels]].
- **Incompressible Experience** — Deep human intuition cannot be speedrun by AI; requires actual time and friction. See [[concept-incompressible-experience]].
- **Intent Engineering** — Translating organizational purpose into machine-readable parameters. See [[concept-intent-engineering]].
- **Intent Gap** — The three-layer gap (context infra → coherent toolkit → intent proper) organizations must close. See [[framework-intent-gap-layers]].
- **Intent Race** — Competitive advantage will go to companies with best intent encoding, not smartest models. See [[claim-intent-race]].
- **Liability Vertical** — The business of absorbing legal/financial risk for AI actions. See [[concept-vertical-liability]].
- **Machine-Readable OKRs** — Explicit, structured translation of OKRs that agents can act on. See [[concept-machine-readable-okrs]].
- **MCP (Model Context Protocol)** — Open protocol from Anthropic for connecting AI to organizational data sources. See [[entity-mcp]].
- **Moving the Floor** — Increase in baseline AI capability requiring less human hand-holding. See [[concept-moving-the-floor]].
- **Mythos** — Anthropic's reportedly held-back model; subject of [[question-mythos-release]].
- **Operator (autonomy level)** — The highest autonomy level: an agent acting fully independently; requires fully machine-readable intent.
- **Private Bench** — Proprietary suite of adversarial real-world tasks designed to fail frontier models. See [[concept-private-bench]] / [[framework-private-bench-suite]].
- **Production Trust** — No frontier model earns one-shot trust on production data; trust is built systemically via validation. See [[concept-production-trust]].
- **Progressive Intent Discovery** — Frontier LLMs deduce user intent from messy unstructured input. See [[concept-progressive-intent-discovery]].
- **Prompt Engineering** — Individual, session-based instruction crafting; the legacy discipline. See [[concept-prompt-engineering]].
- **Quality Without a Name (QWAN)** — Christopher Alexander's term for intuitive product "rightness" that resists explicit specification. See [[concept-quality-without-a-name]].
- **RAG (Retrieval-Augmented Generation)** — Standard technique for grounding LLM outputs in retrieved documents. See [[prereq-rag-pipelines]].
- **Reference-to-Code Workflow** — Three-step recipe (mockup → build → ship) using one model for taste and another for execution. See [[framework-reference-ui-workflow]].
- **Shadow Agents** — Unsanctioned team-built AI workflows; AI's equivalent of Shadow IT. See [[concept-shadow-agents]].
- **Splash Brothers (Private Bench test)** — Backend correctness/data-hygiene test: migrate 465 messy files into a clean database. See [[framework-private-bench-suite]].
- **Strategic Deep Diving** — Fluidly shifting between architectural altitude and line-by-line debugging. See [[concept-strategic-deep-diving]].
- **Strategic Litmus Test** — *"What do I own that still matters if AI gets 10× better?"* See [[framework-strategic-litmus-test]] / [[quote-strategic-litmus-test]].
- **Success-at-Wrong-Metric** — Optimization-aligned but objective-misaligned AI; the most dangerous failure mode. See [[contrarian-success-is-failure]].
- **System Around the Weights** — System-level frame: tooling, files, browser, memory, image gen, validation matter as much as model weights. See [[concept-system-matters]] / [[quote-system-around-weights]].
- **Taste Vertical** — Human editorial judgment as a durable moat in the agentic economy. See [[concept-vertical-taste]].
- **Temporal Separation** — Separating Build Mode from Reflect Mode. See [[concept-temporal-separation]].
- **Thin Wrappers** — Software products that are merely UI layers over foundation models; structurally vulnerable. See [[concept-thin-wrappers]].
- **Three Disciplines (Prompt → Context → Intent)** — Sequential evolution of human-AI interface design. See [[concept-intent-engineering]] for the table.
- **Trust Vertical** — Verification, safety signals, and routing in a flooded web. See [[concept-vertical-trust]].
- **Unified Context Infrastructure** — Composable, vendor-agnostic, centrally-governed context layer; Layer 1 of the Intent Gap. See [[concept-unified-context-infrastructure]].
- **Vibe Coding** — Generating and deploying AI code without understanding it. See [[concept-vibe-coding]].
- **Visual Taste vs. Information Density** — Tradeoff between Opus-style aesthetic composition and GPT-5.5-style information density. See [[concept-visual-taste-vs-density]].
- **10× Litmus** — Shorthand for [[framework-strategic-litmus-test]].
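The **Machine-Readable OKRs** entry above can be made concrete with a minimal sketch. The schema here (`objective`, `key_results`, `baseline`/`target`/`current` fields) is illustrative, not a published standard; the point is that once OKRs are explicit structured data, an agent can compute progress against them rather than parsing prose.

```python
# A hypothetical machine-readable OKR -- field names are assumptions for
# illustration, not part of any standard schema.
OKR = {
    "objective": "Reduce support ticket backlog",
    "owner": "support-ops",
    "key_results": [
        {"metric": "open_tickets", "baseline": 1200, "target": 400, "current": 950},
        {"metric": "median_response_hours", "baseline": 18.0, "target": 4.0, "current": 11.5},
    ],
}

def progress(kr: dict) -> float:
    """Fraction of the baseline-to-target distance covered so far, clamped to [0, 1]."""
    span = kr["baseline"] - kr["target"]
    if span == 0:
        return 1.0
    return max(0.0, min(1.0, (kr["baseline"] - kr["current"]) / span))

def okr_progress(okr: dict) -> float:
    """Average progress across key results -- a single number an agent can act on."""
    krs = okr["key_results"]
    return sum(progress(kr) for kr in krs) / len(krs)
```

Because every field is explicit, an Operator-level agent could query `okr_progress(OKR)` and decide which key result to work on next without any human interpretation step.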
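The **RAG** entry can likewise be sketched in a few lines. Real pipelines use embedding search over a vector store; this toy version substitutes keyword overlap for retrieval purely to show the shape (retrieve, then assemble a grounded prompt). The corpus strings are invented for the example.

```python
# Minimal RAG sketch: keyword-overlap retrieval over an in-memory corpus,
# then prompt assembly that grounds the model in the retrieved passage.
CORPUS = [
    "MCP is an open protocol for connecting AI assistants to data sources.",
    "Temporal separation means alternating Build Mode and Reflect Mode.",
    "Shadow agents are unsanctioned AI workflows built by individual teams.",
]

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swapping `retrieve` for an embedding-based search is the only structural change needed to reach a production pipeline; the grounding step in `build_prompt` stays the same.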

