# Unified Glossary — The Nate B. Jones AI Series

> Alphabetical merge across all 30 videos. Definitions deduped to one line each. Where the same term is named differently across videos, all variant IDs are listed.

- **5 Levels of Vibe Coding** — Dan Shapiro's six-stage taxonomy (Levels 0–5) of AI integration in software, from spicy autocomplete to Dark Factory. ([[framework-5-levels-vibe-coding]])
- **Adaptive Thinking** — A model-controlled mechanism that autonomously scales reasoning compute per query, removing user knobs. ([[concept-adaptive-thinking]])
- **Adversarial Twin** — The inevitable malicious mirror of every legitimate AI capability. ([[concept-adversarial-twin]])
- **Agent Door** — The MCP-based programmatic interface that lets an AI agent read/write a personal database. ([[concept-agent-door]])
- **Agent Discovery** — The missing internet infrastructure for autonomous agents to find, vet, and transact with services. ([[concept-agent-discovery]])
- **Agent Environment Readiness** — The degree to which a codebase has the hygiene needed for an autonomous agent to succeed. ([[concept-agent-environment-readiness]])
- **Agent Software UI** — The UX paradigm shift from chatbot to long-running agentic workspace ('a little guy in the computer'). ([[concept-agent-software-ui]])
- **Agent Web** — The internet's API/vector/structured-data side, contrasted with the Human Web of pages and folders. ([[concept-agent-web]])
- **Agent-Callable Primitive** — Image generation as a subroutine called by autonomous agents to produce intermediate data. ([[concept-agent-callable-primitive]])
- **Agent-Ready Business** — A business optimized for agent traffic: Fast, Easy, MCP-ready. ([[concept-agent-ready-business]])
- **Agentic Economy** — The emerging paradigm where autonomous agents transact and execute on behalf of users. ([[concept-agentic-economy]])
- **Agentic Memory** — Database-backed agent memory free of human recency bias. ([[concept-agentic-memory]])
- **Agentic Operating System** — A foundational computing environment designed natively for autonomous AI agents. ([[concept-agentic-operating-system]])
- **Agentic Persistence** — A model's ability to stay on task and self-verify across long multi-step workflows. ([[concept-agentic-persistence]])
- **AI Brick Wall** — The collision of exponential AI ambition with linear physical infrastructure constraints. ([[concept-ai-brick-wall]])
- **AI Energy Function** — The thesis that AI is fundamentally a function of energy costs. ([[concept-ai-energy-function]])
- **AI Fluency vs. AI Activity** — Individual AI activity (~30% gain) vs. organizational fluency (~300% gain). ([[concept-ai-fluency-vs-activity]])
- **AI Flywheel** — The compounding effect where personal AI infrastructure benefits automatically as foundation models improve. ([[concept-ai-flywheel]])
- **AI Memory Crisis** — Memory demand outrunning HBM supply, creating the binding constraint on AI scaling. ([[concept-ai-memory-crisis]])
- **AI Reviewing AI** — Automated AI-to-AI evaluation loops that pre-audit work before human review. ([[concept-ai-reviewing-ai]])
- **AI Skill Hierarchy** — Four tiers: Prompt → Context → Intent → Specification. ([[framework-ai-skill-hierarchy]])
- **AI Task Cannibalization** — AI absorbing the routine tasks that historically trained junior employees. ([[concept-ai-task-cannibalization]])
- **AI Wiki** — Karpathy's proactive-writer model treating AI as an editor of a markdown knowledge base. ([[concept-ai-wiki]])
- **AI as the Great Equalizer** — The claim that AI removes traditional gatekeepers (capital, networks, education). ([[concept-ai-as-equalizer]])
- **Alternative Compute Geography** — Migration of AI data centers to regions with fewer regulatory constraints (primarily Asia). ([[concept-alternative-compute-geography]])
- **Anchored Iterative Summarization** — Context-compression with immutable structured sections + explicit merges. ([[concept-anchored-iterative-summarization]])
- **Archaeological Programming** — Reverse-engineering opaque AI-generated codebases (Addy Osmani, 2024). ([[concept-archaeological-programming]])
- **Artifact Layer** — The output layer linking deliverables to the prompts that produced them (Layer 4 of context). ([[concept-artifact-layer]])
- **Availability as Quality** — Uptime as a first-class evaluation dimension for frontier models. ([[concept-availability-as-quality]])
- **Behavioral Relationship** — Implicit AI-to-user relating preferences absorbed over thousands of interactions (Layer 3). ([[concept-behavioral-relationship]])
- **Bitter Lesson of LLMs** — The realization that human-engineered procedural complexity degrades sufficiently capable models. ([[concept-bitter-lesson-llms]])
- **Blast Radius** — The worst-case impact of an AI's failure (a guardrail-design metric). ([[concept-blast-radius]])
- **Bring Your Own Context (BYOC)** — Architectural pattern of self-hosting your AI context layer in open formats.
- **Build-Layer Collapse** — The commoditization of software production, eliminating moat at the building layer. ([[concept-build-layer-collapse]])
- **Can It Carry?** — The new frontier-model evaluation question: not 'can it answer' but 'can it sustain a multi-step deliverable'. ([[concept-can-it-carry]])
- **Capability Race** — Competition won by raw model-shipping velocity rather than product polish. ([[concept-capability-race]])
- **Career Ladder Collapse** — The structural disassembly of corporate career progression as AI cannibalizes entry-level work. ([[concept-career-ladder-collapse]])
- **Cascading Failure** — In multi-agent systems, errors flow downstream unverified. ([[concept-cascading-failure]])
- **Chinese Native Chip Stack** — China's pursuit of a sanction-proof, vertically integrated semiconductor ecosystem. ([[concept-chinese-native-chip-stack]])
- **Clean Conversation Workflow** — Five-step user recipe for low-cost, high-quality AI sessions. ([[framework-clean-conversation]])
- **Cloud AI Economics** — The variable-cost AI model where every query costs the provider GPU compute. ([[concept-cloud-ai-economics]])
- **Coherent Frames** — Multi-panel image generation maintaining character/style continuity (up to 8 panels). ([[concept-coherent-frames]])
- **Collapsed Purchase Funnel** — Discovery → consideration → conversion compressed into a single AI conversation context window. ([[concept-collapsed-purchase-funnel]])
- **Command Line Design** — Design execution moving from visual canvases to terminal-based AI agents. ([[concept-command-line-design]])
- **Complete Session Persistence** — Saving the entirety of an agent's state for crash recovery. ([[concept-complete-session-persistence]])
- **Composable Lego Bricks** — Skill modularity metaphor: single-purpose packages that compose at runtime. ([[concept-composable-lego-bricks]])
- **Comprehension Gap** — The skipped 'understand' phase in AI-augmented SDLC. ([[concept-comprehension-gap]])
- **Comprehension Gate** — Mandatory senior review of AI-generated PRs for legibility and architectural intent. ([[concept-comprehension-gate]])
- **Confidently Wrong** — Fluent, plausible AI output that masks factual or execution error. ([[concept-confidently-wrong]])
- **Constrained Agent Types** — Sharply scoped agent roles with their own prompts and allowed tools (Explore/Plan/Verify/etc.). ([[concept-constrained-agent-types]])
- **Context Architecture** — The 'Dewey Decimal System for agents' — supplying the right info at the right time. ([[concept-context-architecture]])
- **Context Degradation** — Quality drop in long sessions; mid-context retrieval accuracy declines. ([[concept-context-degradation]])
- **Context Engineering** — Restructuring information state inside a codebase or system to embed comprehension. ([[concept-context-engineering-d23]] · [[concept-context-engineering-d24]])
- **Context Graph** — Intermediate relationship-mapping layer between raw DB and compiled wiki. ([[concept-context-graph]])
- **Context Rot** — Long-running agent loss of constraints across sessions. ([[concept-context-rot]])
- **Context Sprawl** — Negative compounding of long, unbroken chat sessions on cost and reasoning. ([[concept-context-sprawl]])
- **Continual Learning** — Models that update weights post-deployment in response to use. ([[concept-continual-learning]])
- **Continuous Rotation** — AI disruption as permanent rolling state, not one-time event. ([[concept-continuous-rotation]])
- **Contextual Permission Handlers** — Permissions as stateful objects whose behavior changes with execution context. ([[concept-contextual-permission-handlers]])
- **Contribution Badge** — The legacy psychological need to add unnecessary structure before prompting. ([[concept-contribution-badge]])
- **Conversational Advertising** — Programmatic ads integrated directly into conversational AI interfaces. ([[concept-conversational-advertising]])
- **Coordination Load** — The admin friction (context-finding, data-moving, rubric-applying) surrounding judgment. ([[concept-coordination-load]])
- **Creative Ops** — A dedicated org function maintaining brand-asset master prompt templates. ([[concept-creative-ops]])
- **Creativity Cost Collapse** — Marginal cost of high-fidelity creative artifacts trending toward zero. ([[concept-creativity-cost-collapse]])
- **Cross-Category Reasoning** — Agent ability to connect insights across silos when data lives in one DB. ([[concept-cross-category-reasoning]])
- **Dark Code** — Production AI-generated code that passed tests, was never understood by any human. ([[concept-dark-code]])
- **Dark Factory** — Level 5 of vibe coding: specs in, working software out, no human review. ([[concept-dark-factory]])
- **Data Center NIMBYism** — Local political/regulatory resistance to AI data centers. ([[concept-data-center-nimbyism]])
- **Data Oblivious Algorithm** — Algorithm whose execution path is independent of input data. ([[concept-data-oblivious-algorithm]])
- **Data-Dominated Agent Design** — Pike's Rule 5 applied to agents: data structures determine reliability. ([[concept-data-dominated-agent-design]])
- **DeepMind 5 Levels of Agent Autonomy** — Observer / Consultant / Collaborator / Approver / Operator. ([[framework-deepmind-autonomy-levels]])
- **Description as Routing Signal** — A skill's description is the agent's invocation cue, not a label. ([[concept-description-routing-signal]])
- **Design Markdown (`design.md`)** — Agent-readable design system spec for cross-tool design portability. ([[concept-design-markdown]])
- **Device Shift Model** — Three-step framework for compute paradigm transitions (mainframe → PC; cloud AI → local AI). ([[framework-device-shift]])
- **Digital Twin Universe** — Behavioral clones of every external service for safe agent integration testing. ([[concept-digital-twin-universe]])
- **Discipline Gap** — Inefficiency from human fatigue/emotion that AI exploits via flawless execution. ([[concept-discipline-gap]])
- **Distributed Authorship** — Code ownership fragmentation when non-engineers ship AI-generated code. ([[concept-distributed-authorship]])
- **Domain Encoding** — What the AI knows about your industry/world (Layer 1 of context). ([[concept-domain-encoding]])
- **Dual Logging and System Events** — Immutable system event log alongside the conversational transcript. ([[concept-dual-logging-system-events]])
- **Dynamic Tool Pool Assembly** — Selecting a contextual subset of tools per session for performance. ([[concept-dynamic-tool-pool-assembly]])
- **Edge Case Detection** — A sub-skill of evaluation; recognizing what's missing/handled poorly at margins. ([[concept-edge-case-detection]])
- **Editorial Function** — Human application of context, politics, and prioritization to raw information. ([[concept-editorial-function]])
- **Embedded Deterministic Compute** — Compiling deterministic interpreters directly into transformer weights (Percepta). ([[concept-embedded-deterministic-compute]])
- **Engineering Manager Mindset** — Identity shift from individual contributor to manager of agent teams. ([[concept-engineering-manager-mindset]])
- **Enterprise Agent Wrapper** — Secure policy-driven layer (e.g., NeMo Claw) wrapping open-source agentic OS. ([[concept-enterprise-agent-wrapper]])
- **Error Baking** — AI editorial mistakes locked permanently into knowledge artifacts as truth. ([[concept-error-baking]])
- **EUV Helium Consumption** — A 300mm EUV fab consumes 5,000–20,000 m³ of helium per month. ([[concept-euv-helium-consumption]])
- **Evaluation & Quality Judgment** — Skill #2 in the 7-skill stack; build automated eval harnesses. ([[concept-evaluation-quality-judgment]])
- **Evidence Baseline Collapse** — Loss of trust in digital visual evidence due to free, flawless forgery. ([[concept-evidence-baseline-collapse]])
- **Experiential Debt** — The creator's loss of mental model of their own AI-generated product. ([[concept-experiential-debt]])
- **File Over App** — Storing knowledge in open, durable formats you control rather than proprietary SaaS. ([[concept-file-over-app]])
- **Five Durable Verticals** — Trust, Context, Distribution, Taste, Liability. ([[framework-5-durable-verticals]])
- **Fragmentation Gap** — Inefficiency where information is siloed and intermediaries charge for aggregation. ([[concept-fragmentation-gap]])
- **Functional Organization** — Structure divided by function (HW, SW, Services, Design) rather than product line. ([[concept-functional-organization]])
- **Gather vs. Focus Modes** — Two-mode workflow separating divergent research from convergent execution. ([[concept-gather-vs-focus]])
- **Guardrails & Security Design** — Probabilistic agents inside deterministic containers. ([[concept-guardrails-security-design]])
- **Hard Wiring vs. Skills** — Use scripts for deterministic behavior; skills for judgment. ([[concept-hard-wiring-vs-skills]])
- **Harness Engineering** — Optimizing the scaffolding (prompts, tool defs, routing) around an LLM. ([[concept-harness-engineering]])
- **Helium Fab Dependency** — Helium's irreplaceable role in plasma etching and EUV leak detection. ([[concept-helium-fab-dependency]])
- **High Agency** — Internal locus of control + tight say/do ratio (NOT a feeling). ([[concept-high-agency]])
- **Hollowing Out of the Junior Pipeline** — AI removes apprenticeship work, creating a senior-architect supply crisis. ([[concept-hollowing-out-junior-pipeline]])
- **Honing Effect** — Continuous AI alignment to a user's cognitive pathways; basis of platform lock-in. ([[concept-honing-effect]])
- **Human Door** — The bespoke visual web app side of an Open Brain architecture. ([[concept-human-door]])
- **Hybrid Memory Architecture** — DB-as-truth + disposable wiki presentation layer. ([[concept-hybrid-memory-architecture]])
- **Implicit Context** — Preferences absorbed passively over thousands of AI interactions. ([[concept-implicit-context]])
- **Incompressible Experience** — Human taste/intuition cannot be speedrun by AI. ([[concept-incompressible-experience]])
- **Inference Wall** — Decoupling of serving cost from consumer willingness to pay. ([[concept-inference-wall]])
- **Information Routing** — Logistical synthesis and movement of org data; highly automatable. ([[concept-information-routing]])
- **Intelligence Arbitrage** — Unit of value shifts from person-hour to delivered outcome. ([[concept-intelligence-arbitrage]])
- **Intent Engineering** — Translating organizational purpose into machine-readable parameters. ([[concept-intent-engineering]])
- **Interpretive Boundary** — UI/structural distinction between facts and inferences in AI output. ([[concept-interpretive-boundary]])
- **J-Curve of AI Productivity** — Initial productivity dip from bolting AI onto legacy workflows; eventual rise. ([[concept-j-curve-productivity]])
- **K-Shaped AI Job Market** — Traditional roles flat; AI roles infinite demand; severe bifurcation. ([[concept-k-shaped-job-market]])
- **Karpathy Loop** — A constrained, iterative AI self-improvement cycle (one file, one metric, one budget). ([[concept-karpathy-loop]])
- **Karpathy Triplet** — One Editable Surface + One Metric + One Time Budget. ([[concept-karpathy-triplet]])
- **KISS Commands** — Keep-it-simple commandments for agent architecture (index references, pre-process, cache, scope, measure). ([[framework-kiss-commands]])
- **KV Cache** — Working memory of an LLM during inference; key-value pairs of prior tokens. ([[concept-kv-cache]])
- **Labor Arbitrage** — Historical practice of buying person-hours cheaply via geographic wage differences. ([[concept-labor-arbitrage]])
- **Lean Unicorns** — Billion-dollar companies built by radically small teams (200 or fewer employees, possibly 1). ([[concept-lean-unicorns]])
- **Least Privilege Agents** — Scoping agent permissions to the bare minimum required. ([[concept-least-privilege-agents]])
- **Librarian Metaphor** — Database AI: pristine raw sources retrieved on demand. ([[concept-librarian-metaphor]])
- **Liquid Helium Boil-Off** — 35–48 day shipping window before liquid helium vaporizes; 'helium goes bad on a container ship'. ([[concept-liquid-helium-boil-off]])
- **Literal Instruction Following** — Model behavior of executing exactly the words written. ([[concept-literal-instruction-following]])
- **Live Data Rendering** — Image model querying the live web during generation. ([[concept-live-data-rendering]])
- **LNG-Helium Production Link** — Helium is a byproduct of LNG production, inseparable in supply chain shocks. ([[concept-lng-helium-production-link]])
- **Local AI Economics** — Fixed-cost local-compute AI; marginal inference cost ~zero. ([[concept-local-ai-economics]])
- **Local Hard Takeoff** — Steep, sudden, compounding autonomous improvement bounded to a specific business domain. ([[concept-local-hard-takeoff]])
- **Locus of Control Circle** — Five-step diagnostic exercise to surface internal vs external attribution. ([[framework-locus-of-control]])
- **Long-Running Agents** — Agents that run for days or a week, burning millions of tokens autonomously. ([[concept-long-running-agents]])
- **Machine-Readable OKRs** — Explicit translation of OKRs into structured parameters that agents can act on. ([[concept-machine-readable-okrs]])
- **Mainframe Echo** — 1970s rented mainframes → 2020s cloud AI, repeating the PC disruption pattern. ([[concept-mainframe-echo]])
- **Management Unbundling** — Management is two functions (information routing + editorial judgment), not one. ([[concept-management-unbundling]])
- **Markdown Conversion** — Converting heavy formats (PDF/DOCX) to Markdown for ~20x token savings. ([[concept-markdown-conversion]])
- **Memory Application Layer** — Synthesized agentic memory layer (compression + markdown + background agents) projected for summer 2026. ([[concept-memory-application-layer]])
- **Memory Optimization Landscape** — Five vectors: quantization, eviction/sparsity, architecture, tiering, attention-opt. ([[framework-memory-optimization-landscape]])
- **Memory Silo Problem** — Walled-garden memory features that fragment user context across vendors. ([[concept-memory-silo-problem]])
- **Meta-Agent / Task Agent Split** — Architecture where one agent does work, another optimizes scaffolding. ([[concept-meta-task-agent-split]])
- **Metadata-First Tool Registry** — Tools defined as queryable data structures before execution logic. ([[concept-metadata-first-tool-registry]])
- **Metric Gaming** — Agent exploits eval loopholes (Goodhart's Law in agent form). ([[concept-metric-gaming]])
- **Middle Management Deletion** — Coordination roles (Scrum/TPM/release management) eliminated by agentic AI. ([[concept-middle-management-deletion]])
- **Middleware Squeeze** — SaaS design tools absorbed by foundational models. ([[concept-middleware-squeeze]])
- **Missing Apple Stack** — Lack of rackable Macs, clustering software, MDM for distributed Mac Mini fleets. ([[concept-missing-apple-stack]])
- **Model Context Protocol (MCP)** — Open bidirectional standard for AI-to-data read/write ('USB-C for AI'). ([[concept-mcp-d18]] · [[concept-mcp-d21]] · [[concept-mcp-d24]] · [[concept-mcp-d28]] · [[concept-mcp-d48]] · [[concept-model-context-protocol]])
- **Model Empathy** — Same-model Meta+Task pairings outperform cross-model by 15–20% on harness tuning. ([[concept-model-empathy]])
- **Model Self-Review Bias** — LLMs exhibit distinct biases when grading own/competitor outputs. ([[concept-model-self-review-bias]])
- **Model-Driven Retrieval** — Exposing raw repositories to the model and letting it navigate, not hardcoded RAG. ([[concept-model-driven-retrieval]])
- **Moving the Floor** — Default no-extra-compute baseline rising as a true model upgrade. ([[concept-moving-the-floor]])
- **Multi-Direction Design** — Generating up to 5 distinct UI directions per prompt. ([[concept-multi-direction-design]])
- **Multi-Head Latent Attention (MLA)** — DeepSeek v2 architectural redesign using lower-dim latent K/V. ([[concept-multi-head-latent-attention]])
- **Multi-Level Verification** — Tests verify the agent AND the harness. ([[concept-multi-level-verification]])
- **Multi-LLM Refinement** — Use one model to critique another's skill/output. ([[concept-multi-llm-refinement]])
- **Mythos Readiness Transformation** — Four-step org transformation for step-change frontier models. ([[framework-mythos-readiness]])
- **Native AI Apps** — Apps designed assuming local inference is free; continuous, agentic, full-history. ([[concept-native-ai-apps]])
- **Negative Lift** — Net productivity loss when review time exceeds time saved. ([[concept-negative-lift]])
- **Non-Technical Engineering** — Non-technical roles adopting strict engineering paradigms (specs, evals). ([[concept-non-technical-engineering]])
- **Open Brain** — Personal/enterprise database-backed agent-readable memory on open protocols. ([[concept-open-brain-d21]] · [[concept-open-brain-d22]] · [[concept-openbrain-architecture]])
- **Oracle vs. Maintainer** — Reactive chatbot vs. proactive curator paradigm shift. ([[concept-oracle-vs-maintainer]])
- **Orchestrator Pattern** — Master skill routes work to specialized sub-agents based on descriptions. ([[concept-orchestrator-pattern]])
- **Outcome Encoding** — Logging not just actions but their results, for compounding feedback. ([[concept-outcome-encoding]])
- **Outcome-Driven Prompting** — Specifying only desired end state and constraints, not procedure. ([[concept-outcome-driven-prompting]])
- **pgvector** — Open-source vector similarity search extension for PostgreSQL. ([[entity-pgvector]])
- **Plasma Etching Thermal Management** — Helium blown across wafer back to maintain uniform temperature. ([[concept-plasma-etching-thermal-management]])
- **Polar Quantization** — Step 1 of Turboquant: rotate tensor data into polar coordinates. ([[concept-polar-quantization]])
- **Power Law of Adoption** — Top 1–5% of companies rebuild around agents and ship 10x–100x faster. ([[concept-power-law-of-adoption]])
- **Power of Siberia 2** — Stalled Russia-China gas pipeline; would carry helium byproduct overland. ([[concept-power-of-siberia-2]])
- **Predictive Token Budgeting** — Calculate projected token use before each call; halt if over-budget. ([[concept-predictive-token-budgeting]])
- **Private Bench** — Adversarial private evaluation suite for frontier models. ([[concept-private-bench]])
- **Private Cloud Compute Limits** — Apple's PCC fails legal chain-of-custody for regulated professions. ([[concept-private-cloud-compute-limits]])
- **Proactive AI** — AI that prompts the human, not vice versa. ([[concept-proactive-ai]])
- **Production Trust** — No model trusted blindly with one-shot production data; verify systemically. ([[concept-production-trust]])
- **Professional Capital — 5th Category** — AI Working Intelligence as career asset alongside skills/network/knowledge/resume. ([[concept-professional-capital]])
- **Programmable Video** — Video as code; React components that render pixels. ([[concept-programmable-video]])
- **Progressive Intent Discovery** — Frontier LLMs deducing intent from messy unstructured input. ([[concept-progressive-intent-discovery]])
- **Prompt Caching** — API feature giving 90% discount on cached stable input tokens. ([[concept-prompt-caching]])
- **Prompt Dependency / Tyranny of the Prompt** — Bottleneck where complex work requires repetitive long prompting. ([[concept-prompt-dependency]])
- **Prompt Engineering** — The legacy individual instruction-crafting discipline. ([[concept-prompt-engineering]])
- **Qatar Ras Laffan Chokepoint** — Single Qatari complex producing ~33% of global helium. ([[concept-qatar-ras-laffan-chokepoint]])
- **QJL (Quantized Johnson-Lindenstrauss)** — Step 2 of Turboquant: single-bit error-correction. ([[concept-qjl]])
- **Quality Without a Name (QWAN)** — Christopher Alexander's term for intuitive product 'rightness'. ([[concept-quality-without-a-name]])
- **Quantitative Skill Testing** — Automated test suites gating skill version updates. ([[concept-quantitative-skill-testing]])
- **Query-Time Synthesis** — AI synthesizes when prompted, not at ingest. ([[concept-query-time-synthesis]])
- **Race Conditions in AI** — Multi-agent concurrent writes corrupting unstructured files. ([[concept-race-conditions-ai]])
- **Reasoning Gap** — Delay in human interpretation of complex new info that AI exploits. ([[concept-reasoning-gap]])
- **Reasoning Stack Integration** — LLM reasoning placed upstream of pixel/output generation. ([[concept-reasoning-stack-integration]])
- **Recursive Self-Improvement** — AI training AI; operationalized in 2026. ([[concept-recursive-self-improvement]])
- **Reference-to-Code UI Workflow** — Mockup → Code → Ship: bypass LLM blank-canvas weakness. ([[framework-reference-ui-workflow]])
- **Regulated AI Gap** — Lawyers, doctors, accountants legally barred from cloud AI. ([[concept-regulated-ai-gap]])
- **Reversibility** — Whether an AI's mistake can be undone (a guardrail metric). ([[concept-reversibility]])
- **Risk Segmentation Permissions** — Built-in / Plugin / Skill trust tiers with distinct loading. ([[concept-risk-segmentation-permissions]])
- **Rob Pike's 5 Rules** — Measure first, fancy-is-buggier, data dominates, etc. ([[framework-rob-pike-agent-rules]])
- **SaaS Per-Seat Collapse** — Per-seat pricing breaking as agents reduce headcount. ([[concept-saas-per-seat-collapse]])
- **Safety as Positioning** — Safety posture as GTM strategy with binary revenue consequences. ([[concept-safety-as-positioning]])
- **Say/Do Ratio** — Time/distance between stating an intention and executing it. ([[concept-say-do-ratio]])
- **Scenario Testing** — External, black-box behavioral scenarios; not in-repo unit tests. ([[concept-scenario-testing]])
- **Self-Verification Pass** — Model re-reads its own output and corrects errors. ([[concept-self-verification-pass]])
- **Semantic Context** — Embedded interface-level rules of engagement (perf, retry, behavioral contracts). ([[concept-semantic-context]])
- **Semantic Retrieval Architecture** — Vector-DB-based World Model. ([[concept-semantic-retrieval]])
- **Semantic Search via Vector Embeddings** — Retrieval by mathematical meaning, not keyword. ([[concept-semantic-search]])
- **Semantic vs. Functional Correctness** — Sounds right vs. actually executes correctly. ([[concept-semantic-vs-functional-correctness]])
- **Shadow Agents** — Unsanctioned team-built AI workflows; AI's Shadow IT. ([[concept-shadow-agents]])
- **Shared Surface** — Single DB table that both human UI and AI agent read/write directly. ([[concept-shared-surface]])
- **Shift in Skill Callers** — From humans (a handful per chat) to agents (hundreds per run). ([[concept-shift-in-callers]])
- **Signal Extraction Framework** — Method for analyzing AI industry by ignoring big-bang releases and tracking constraints. ([[framework-signal-extraction]])
- **Signal Fidelity Architecture** — World Model built on pristine data exhaust like transactions (Jack Dorsey/Block). ([[concept-signal-fidelity]])
- **Silent Contradictions** — Conflicting facts coexisting unreconciled in a knowledge base. ([[concept-silent-contradictions]])
- **Silent Degradation** — Auto-optimization erodes secondary metrics; primary monitor stays green. ([[concept-silent-degradation]])
- **Silent Failure** — Plausible AI output masking execution error or flawed editorializing. ([[concept-silent-failure-d15]] · [[concept-silent-failure-d42]])
- **Silent Tax** — Hidden token cost from unused plugins/tool definitions in system prompt. ([[concept-silent-tax]])
- **Single Eval Gate** — One comprehensive end-of-pipeline evaluation replacing intermediate checks. ([[concept-single-eval-gate]])
- **Skill Anatomy** — A folder containing `skill.md` (metadata + methodology). ([[concept-skill-anatomy]])
- **Skill Composability** — Output of skill A as perfect input for skill B. ([[concept-skill-composability]])
- **Skill File Format (.skill)** — Machine-readable design system file consumable by other AI agents. ([[concept-skill-file-format]])
- **Skills as API Contracts** — Skills declare strict inputs/outputs/SLAs like APIs. ([[concept-skills-as-contracts]])
- **Skills vs. Prompts** — Version-controlled markdown files that compound vs. ephemeral text blocks. ([[concept-skills-vs-prompts]])
- **Smart Tokens** — Spend redirected from waste to reasoning. ([[concept-smart-tokens]])
- **Sora Economics** — $15M/day burn vs. $2.1M lifetime revenue, the canonical inference-wall example. ([[claim-sora-economics]])
- **Sovereign Memory** — Enterprise principle of owning your AI memory layer. ([[concept-sovereign-memory]])
- **Specialist Stack** — Folder of specialized skills replacing complex prompting. ([[concept-specialist-stack]])
- **Specification Drift** — Long-running agents forget their original spec. ([[concept-specification-drift]])
- **Specification Engineering** — The apex AI skill: precise problem specification atop persistent memory. ([[concept-specification-engineering]])
- **Specification Precision** — Skill #1 of the 7 AI skills; literal, exhaustive, unambiguous instructions. ([[concept-specification-precision]])
- **Specification vs. Execution** — Where human value moves as AI handles execution. ([[concept-specification-vs-execution]])
- **Specification-Driven Development** — Detailed specs precede AI generation; specs double as evals. ([[concept-spec-driven-development]])
- **Spec Quality Bottleneck** — The new constraint on engineering throughput, replacing implementation speed. ([[concept-spec-quality-bottleneck]])
- **Speed Gap** — Inefficiency where one actor updates pricing/state slower than reality. ([[concept-speed-gap]])
- **Step Change AI** — Rare paradigm-shifting capability jump (e.g., GB300-class), distinct from incremental. ([[concept-step-change-ai]])
- **Strategic Deep Diving** — Fluid altitude shifts between architecture and line-by-line debugging. ([[concept-strategic-deep-diving]])
- **Strategic Litmus Test** — *What do I own that still matters if AI gets 10x better?* ([[framework-strategic-litmus-test]])
- **Structural Context** — Module manifests answering where code belongs architecturally. ([[concept-structural-context]])
- **Structured Ontology Architecture** — Schema-defined World Model, e.g., Palantir. ([[concept-structured-ontology]])
- **Structured Streaming Events** — Streaming emits typed events exposing the model's reasoning. ([[concept-structured-streaming-events]])
- **Stupid Button** — Diagnostic checklist to audit egregious token-wasting habits. ([[concept-the-stupid-button]])
- **Super Prompts** — Massive structured payloads underneath a skill. ([[concept-super-prompts]])
- **Sycophantic Confirmation** — Agents agreeing with bad user data and building wrong answers around it. ([[concept-sycophantic-confirmation]])
- **System Matters Beyond Weights** — Model utility depends on its tooling stack as much as weights. ([[concept-system-matters]])
- **Task Decomposition** — Managerial skill of breaking projects into discrete delegated workstreams. ([[concept-task-decomposition]])
- **Temporal Separation** — Build Mode vs. Reflect Mode discipline. ([[concept-temporal-separation]])
- **Thin Wrappers** — Software products that are UI layers over a third-party foundation model. ([[concept-thin-wrappers]])
- **Thinking Mode** — A 10–20-second explicit reasoning phase that precedes image generation. ([[concept-thinking-mode]])
- **Three Channels of Disruption** — Direct input loss + energy spike + geopolitical restructuring. ([[framework-three-channels-disruption]])
- **Three Tiers of Skills** — Standard / Methodology / Personal. ([[concept-three-tiers-skills]])
- **Token Burning** — Wasteful consumption of LLM tokens through inefficient practices. ([[concept-token-burning]])
- **Token Economics** — Skill #7: applied math of running AI in production. ([[concept-token-economics]])
- **Tokenizer Tax** — Stealth cost increase from a less-efficient tokenizer at unchanged sticker prices. ([[concept-tokenizer-tax]])
- **Tool Selection Error** — Agent picks wrong external tool, often due to weak descriptions. ([[concept-tool-selection-error]])
- **Tool Switching Penalty** — Productivity drop when moving from a calibrated AI to a fresh instance. ([[concept-tool-switching-penalty]])
- **Trace-Driven Optimization** — Optimizing agents from step-by-step execution traces, not just pass/fail. ([[concept-trace-driven-optimization]])
- **Training-Inference Chip Divergence** — Chips engineered for training are not optimized for inference. ([[concept-training-inference-chip-divergence]])
- **Transcript Compaction** — Summarize older entries while persisting full history elsewhere. ([[concept-transcript-compaction]])
- **Trust Failure via Hallucinated Audit Trails** — Agent fabricates a 'success' audit when it actually failed. ([[concept-trust-failure-hallucination]])
- **Turboquant** — Google's lossless 6x KV-cache compression algorithm (ICLR 2026). ([[concept-turboquant]])
- **Tutor Metaphor** — Wiki AI: pre-reads source material and writes a study guide. ([[concept-tutor-metaphor]])
- **Two-Class AI** — Market bifurcation: enterprise unconstrained, consumer throttled. ([[concept-two-class-ai]])
- **Unified Context Infrastructure** — Composable, vendor-agnostic, centrally governed context layer. ([[concept-unified-context-infrastructure]])
- **Upstream Migration** — Reallocating professional time from execution to judgment/taste/architecture. ([[concept-upstream-migration]])
- **Value Contribution Orientation** — Push value out, don't extract status. ([[concept-value-contribution-orientation]])
- **Vector Quantization** — Traditional compression with overhead 'quantization constants'. ([[concept-vector-quantization]])
- **Vertical Trust / Context / Distribution / Taste / Liability** — The five durable verticals where moats survive build-layer collapse. ([[concept-vertical-trust]] · [[concept-vertical-context]] · [[concept-vertical-distribution]] · [[concept-vertical-taste]] · [[concept-vertical-liability]])
- **Vibe Coding** — Generating and deploying AI code without understanding it. ([[concept-vibe-coding]])
- **Vibe Design** — Google Stitch's text-to-UI generation paradigm (objective + feeling + product concept). ([[concept-vibe-design]])
- **Visual Taste vs. Information Density** — Trade-off between dense-but-cartoonish (GPT-5.5) and grounded-but-sparse (Opus). ([[concept-visual-taste-vs-density]])
- **Wiki Staleness** — Pre-synthesized pages drifting from underlying data; *more dangerous than missing data*. ([[concept-wiki-staleness]])
- **Work AI vs. Personal AI Split** — Personal cozy/engagement vs. Work strict/governed. ([[concept-work-vs-personal-ai-split]])
- **Workflow Blocks / Creative Primitives** — AI capabilities as Lego blocks chained on the command line. ([[concept-workflow-blocks]])
- **Workflow Calibration** — How the AI structures work for you (Layer 2 of context). ([[concept-workflow-calibration]])
- **Workflow Collapse** — Sequential roles compressed into a single AI prompt execution. ([[concept-workflow-collapse]])
- **Workflow State Separation** — Task state distinct from chat history for safe recovery. ([[concept-workflow-state-separation]])
- **Workplace OS** — OpenAI's strategic ambition to be the default operating layer for corporate work. ([[concept-workplace-os]])
- **Workspace Agents** — OpenAI's cloud-based agent builder for repeatable team workflows. ([[concept-workspace-agents]])
- **World Model** — Living, always-updated software model of company reality, queryable by all employees. ([[concept-world-model]])
- **Write-Time Synthesis** — AI synthesizes at ingest, not at query time. ([[concept-write-time-synthesis]])
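
The **Transcript Compaction** entry above describes summarizing older entries while persisting the full history elsewhere. A minimal Python sketch of that pattern (the function name, summary format, and `archive` store are illustrative; a real system would generate the summary with an LLM rather than a placeholder string):

```python
def compact_transcript(transcript, archive, keep_recent=3):
    """Replace all but the most recent turns with a one-line summary,
    persisting the replaced turns to `archive` so full history survives."""
    if len(transcript) <= keep_recent:
        return list(transcript)  # nothing old enough to compact
    older, recent = transcript[:-keep_recent], transcript[-keep_recent:]
    archive.extend(older)  # full history persisted elsewhere
    summary = f"[compacted: {len(older)} earlier turns archived]"
    return [summary] + recent


archive = []
transcript = [f"turn {i}" for i in range(6)]
working = compact_transcript(transcript, archive)
# working keeps a summary line plus the 3 most recent turns;
# archive holds the 3 compacted turns verbatim.
```

The key design point, per the glossary definition, is that compaction is lossy only in the working context window: the archived turns remain retrievable in full.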
