# Unified Glossary

One-line definitions, deduped across all 10 days. Alphabetical.

- **5 Levels of Vibe Coding** — Dan Shapiro's six-stage taxonomy (Levels 0–5) of AI integration in software engineering. See [[framework-5-levels-vibe-coding]].
- **Adversarial Twin** — the inevitable malicious mirror of any legitimate AI capability.
- **Agent Discovery** — the missing infrastructure for autonomous agents to find, vet, and transact with each other.
- **Agent-Callable Primitive** — image generation (or any output) treated as a subroutine for autonomous agents.
- **Agent-Ready Business** — a business optimized for agent rather than human interaction; table stakes are being Fast, Easy, and MCP-ready.
- **Agentic Economy** — the emerging paradigm where autonomous agents transact on behalf of humans.
- **AI Activity vs. Fluency** — high individual usage with no compounding organizational return vs. coherent shared workflows that compound.
- **AI as Equalizer** — the strong claim that AI removes traditional execution gatekeepers (capital, networks, education) for high-agency individuals.
- **AI Task Cannibalization** — AI absorbing the routine entry-level work that previously trained juniors.
- **AI Wiki** — Karpathy's proposal: AI as proactive writer of a persistent markdown knowledge base.
- **Archaeological Programming** — Addy Osmani's term for the reverse-engineering required to understand opaque AI-generated codebases.
- **Availability as Quality** — uptime, compute caps, and routing latency are first-class quality dimensions.
- **Carry It / Can It Carry?** — the new evaluation question: not 'can the model answer?' but 'can it sustain context, manage risk, and deliver multi-step work?'
- **Civil Engineering (in builder context)** — explicit, rule-based prompting/coding; the counterpart to QWAN.
- **Coherent Frames** — multi-image generation with character/style continuity across panels.
- **Cognitive Architecture** — systems thinking applied to orchestrating multiple AI agents; the new bottleneck per S25.
- **Confidently Incorrect** — agents fail in ways that look like success.
- **Context Engineering** — architecting the entire information state an agent operates in; the middle discipline between prompt and intent engineering.
- **Context Graph** — intermediate relationship-mapping layer between raw data and compiled wikis.
- **Context Rot** — agent drift across sessions when memory is not persistent and structured.
- **Contribution Badge** — the legacy ego-driven habit of pre-structuring inputs to feel ownership; now counterproductive.
- **Creative Ops** — the new role responsible for engineering and maintaining brand-prompt templates.
- **Dark Factory** — Level 5 of the 5 Levels framework: specs in, working software out, no human review.
- **Database is Truth, Wiki is Presentation** — the governing principle of [[concept-hybrid-memory-architecture]].
- **Digital Twin Universe** — simulated clones of every external service used to test agents safely.
- **Editorial Function** — the human application of context, politics, and prioritization that AI cannot reliably automate.
- **Engineering Manager Mindset** — the operational identity of the AI-era builder: managing tireless agents instead of writing code.
- **Error Baking** — write-time AI synthesis errors permanently locked into a knowledge artifact.
- **Evidence Baseline Collapse** — the destruction of digital visual evidence's trust value because forgeries are now free.
- **Experiential Debt** — the creator's lack of mental model of their own AI-built product.
- **File Over App** — store knowledge in open, durable, user-controlled formats; not in proprietary SaaS.
- **Fingertip Feel** — the intuition to descend from architecture to specific code when an agent fails.
- **Five Durable Verticals** — Trust, Context, Distribution, Taste, Liability — the moats AI cannot replicate.
- **GPT Image 2 / GPT-5.5 / Claude Opus 4.7 / Mythos / Nano Banana 2** — speaker's terminology for forward-looking or hypothetical models; treat as unverified.
- **Goodhart's Law** — when a measure becomes a target, it ceases to be a good measure. The mechanism behind metric gaming.
- **Harness Engineering** — optimizing the scaffolding around an AI model (prompts, tools, routing, orchestration) rather than the weights.
- **High Agency** — internal locus of control + tight say/do ratio; explicitly *not* a feeling.
- **Hollowing Out of Junior Pipeline** — structural collapse of entry-level dev hiring as AI does the entry-level work.
- **Hybrid Memory Architecture** — DB-as-truth + disposable wiki presentation layer.
- **Illusion of Judgment** — pristine inputs make AI's interpretive connections feel trustworthy without making them trustworthy.
- **Images as Intermediate Data** — generated images consumed by other agents, not humans.
- **Incompressible Experience** — taste, intuition, and product judgment cannot be speedrun; require actual time and friction.
- **Information Routing** — the logistical, factual movement of organizational data; highly automatable.
- **Intent Engineering** — the discipline of translating organizational purpose into machine-readable parameters.
- **Interpretive Boundary** — the explicit UI distinction between facts the system knows and inferences it is making.
- **J-Curve of AI Productivity** — productivity dips before rising when AI is bolted onto unchanged workflows.
- **Jet Engine (AI as)** — AI as a force multiplier on existing agency; useless when you are standing still.
- **Karpathy Loop** — the constrained Analyze → Propose → Run → Evaluate → Commit/Revert self-improvement cycle.
- **Karpathy Triplet** — one editable surface, one objective metric, one time budget. The prerequisite for deploying the loop.
- **Lean Unicorns** — billion-dollar companies built with radically small teams.
- **Liability (cannot be automated)** — AI cannot go to jail, be sued, or absorb financial ruin; human accountability is structurally durable.
- **Librarian Metaphor** — the AI-as-librarian model: pristine raw retrieval on demand. The OpenBrain side.
- **Live Data Rendering** — image models executing live web search during generation.
- **Local Hard Takeoff** — rapid, compounding, autonomous improvement bounded to a specific business domain.
- **Locus of Control** — Rotter's psychological construct distinguishing internal vs. external attribution; foundational to high agency.
- **Machine-Readable OKRs** — explicit, structured translations of OKRs that autonomous agents can act on.
- **Magic in Constraints** — the principle that constraint, not scale, unlocks self-improvement.
- **Maintainer (vs. Oracle)** — AI as continuous background curator vs. reactive chatbot.
- **MCP / Model Context Protocol** — open protocol from Anthropic for connecting AI to organizational data; the speaker's canonical Layer 1 implementation (specific claims remain caveated as unverified).
- **Medical Residency Model** — the proposed replacement for the collapsing junior-developer apprenticeship.
- **Meta-Agent / Task Agent Split** — architectural separation of the agent doing work from the agent optimizing the scaffolding.
- **Metric Gaming** — Goodhart-style optimization where the agent exploits eval loopholes.
- **Middleware Squeeze** — existential pressure on SaaS design tools as foundation models absorb their features.
- **Model Empathy** — the finding that same-model meta-agent/task-agent pairings outperform cross-model ones (by ~15–20% on harness tasks).
- **Money is Honest** — Dorsey's principle that financial transactions are undeniable ground truth.
- **Moving the Floor** — a model upgrade that lifts the default no-extra-compute baseline rather than just adding tool calls.
- **OpenBrain** — Nate's database-first AI memory architecture; the librarian to Karpathy's tutor.
- **Oracle vs. Maintainer** — reactive chatbot vs. proactive curator; see **Maintainer (vs. Oracle)**.
- **Outcome Encoding** — logging not just actions but their results, to make the World Model compound.
- **Premature Structure Fails** — pre-structuring prompts is now counterproductive with frontier models.
- **Private Bench** — a proprietary adversarial evaluation suite designed to fail frontier models.
- **Production Trust** — the principle that no model earns one-shot trust on production databases without systemic validation.
- **Progressive Intent Discovery** — frontier LLMs deducing user intent from messy unstructured input.
- **Prompt Engineering** — the legacy individual instruction-crafting discipline; the warm-up act per S24.
- **QWAN / Quality Without a Name** — Alexander's term for intuitive product rightness that human taste produces.
- **Race Conditions (AI multi-agent)** — concurrent multi-agent writes corrupting unstructured files.
- **Reasoning Stack Integration** — placing an LLM reasoning phase upstream of pixel generation.
- **Reflect Mode** — meditative analytical time scheduled away from AI execution.
- **Say/Do Ratio** — the time/distance between stating an intention and executing it; lower is better.
- **Scenario Testing** — external, black-box behavioral evaluation that lives outside the codebase.
- **Self-Verification Pass** — automated QA where the model re-reads and corrects its own output.
- **Semantic Retrieval Architecture** — vector-DB-based World Model; conflates surfacing with interpreting.
- **Shadow Agents** — unsanctioned team-built AI workflows; AI's equivalent of Shadow IT.
- **Signal Fidelity Architecture** — World Model built on highest-truth data exhaust (transactions, telemetry).
- **Silent Contradictions** — conflicting facts coexisting unreconciled; smoothed over by wiki synthesis.
- **Silent Degradation** — secondary metrics eroding unnoticed during auto-optimization.
- **Silent Failure** — invisible decision-quality decay from confident-but-flawed AI editorializing.
- **Skill Issue Reframe** — converting external blockers into 'I just don't know how yet' to activate problem-solving.
- **Solo Founder Rise** — alleged shift from 22% to 38% (or ~30–35% per enrichment) of new startups being solo-founder-led.
- **Spec Quality Bottleneck** — the new constraint on engineering throughput is spec authorship, not implementation.
- **Specification vs. Execution** — the shift in human value from doing the work to articulating it.
- **Strategic Deep Diving** — fluid altitude shifting between architecture and line-by-line debugging.
- **Strategic Litmus Test** — *what do I own that still matters if AI gets 10x better?*
- **Structured Ontology Architecture** — schema-defined World Model; blind to emergence.
- **Structure Must Be Earned** — schemas should be imposed selectively, not universally.
- **System Matters** — a model's utility depends as much on its surrounding tools as its weights.
- **Temporal Separation** — Build Mode vs. Reflect Mode discipline.
- **Test-Driven Development** — the orthodoxy that high test coverage produces correctness; problematic when agents game tests.
- **Thin Wrappers** — UI layers over foundation models with no structural moat.
- **Thinking Mode** — 10–20s of explicit reasoning latency before pixel generation.
- **Time is the Moat** — accumulated longitudinal business reality is the durable moat for World Models.
- **Trace-Driven Optimization** — using detailed execution traces to make surgical agent fixes rather than random mutations.
- **Trust Stack Obsolete** — visual-evidence-based verification is broken because forgery cost is zero.
- **Tutor Metaphor** — the AI-as-tutor model: pre-reading and synthesizing into a study guide. The Wiki side.
- **Unified Context Infrastructure** — vendor-agnostic, governed Layer 1 that replaces shadow agents.
- **Value Contribution Orientation** — obsessing over creating value rather than extracting status.
- **Vibe Coding (Nate's S25 sense)** — generating and deploying AI code without understanding it; powerful but produces archaeological/experiential debt if exclusive.
- **Visual Taste vs. Information Density** — the observed tradeoff between OpenAI's data-dense but cartoonish UIs and Anthropic's grounded but information-light visuals.
- **Wiki Staleness** — pre-synthesized pages drifting from underlying data; more dangerous than missing data.
- **Workflow Collapse** — multi-disciplinary sequential tasks compressed into single AI prompts.
- **World Model** — a living, always-updated software model of company reality, queryable by all employees.
- **Write-Time Synthesis** — AI synthesizing at ingest; the Wiki pattern. Source of error baking.
- **Query-Time Synthesis** — AI synthesizing on demand; the OpenBrain pattern.
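
Several entries above (Karpathy Loop, Karpathy Triplet, Magic in Constraints) describe one mechanism. A minimal sketch, with every name and the `evaluate`/`propose` callables hypothetical, of the constrained Analyze → Propose → Run → Evaluate → Commit/Revert cycle under the triplet's three constraints (one editable surface, one objective metric, one time budget):

```python
import random
import time

def karpathy_loop(surface, evaluate, propose, budget_s):
    """Constrained self-improvement loop (all parameter names hypothetical).

    surface  -- the single editable artifact (e.g. a prompt string)
    evaluate -- the one objective metric; higher is better
    propose  -- generates a candidate edit of the surface
    budget_s -- the one time budget, in seconds
    """
    best, best_score = surface, evaluate(surface)
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = propose(best)       # Propose an edit of the surface
        score = evaluate(candidate)     # Run + Evaluate against the metric
        if score > best_score:          # Commit only on measured improvement...
            best, best_score = candidate, score
        # ...otherwise Revert: the candidate is simply discarded
    return best, best_score

# Toy demonstration: the "surface" is a number, the metric rewards closeness to 10.
evaluate = lambda x: -abs(x - 10.0)
propose = lambda x: x + random.uniform(-1.0, 1.0)
best, score = karpathy_loop(0.0, evaluate, propose, budget_s=0.1)
```

Because commits require a measured improvement on the single metric, the loop can only climb; this is also exactly the setup that Metric Gaming and Silent Degradation exploit when the one metric is a poor proxy.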

