---
id: "concept-error-baking"
type: "concept"
source_timestamps: ["00:07:56"]
tags: ["system-failure", "llm-hallucinations"]
related: ["concept-write-time-synthesis", "concept-ai-wiki", "claim-wiki-breaks-at-scale"]
definition: "The phenomenon where an AI's misinterpretations or omissions during data ingestion become permanently locked into a knowledge base, compounding over time."
sources: ["s11-wiki-vs-open-brain"]
sourceVaultSlug: "s11-wiki-vs-open-brain"
originDay: 11
---
# Error Baking

> The phenomenon where an AI's misinterpretations or omissions during data ingestion become permanently locked into a knowledge base, compounding over time.

## What It Is

**Error Baking** is a critical failure mode inherent to AI systems that rely on [[concept-write-time-synthesis]] (such as the [[concept-ai-wiki]]). Every time an AI converts a raw source document into a summarized wiki page, it makes editorial decisions. If the AI hallucinates, drops crucial nuance, or misinterprets a connection, that synthesized error is *written into* the markdown file.
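A minimal sketch of that write-time step, in Python. The `summarize` stub stands in for the LLM call and is purely illustrative; the point is that whatever it produces is persisted verbatim, and the raw source is never consulted again.

```python
from pathlib import Path

def summarize(raw: str) -> str:
    """Stand-in for the LLM call; in a real pipeline this is where
    hallucinations and dropped nuance enter the system."""
    return raw[:200]  # crude placeholder, lossy by construction

def ingest(source_path: Path, wiki_dir: Path) -> Path:
    """Write-time synthesis: convert a raw source into a wiki page once."""
    raw = source_path.read_text()
    page = wiki_dir / f"{source_path.stem}.md"
    page.write_text(summarize(raw))  # any error is now baked into this file
    return page  # future queries read the page, not the raw source
```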

## Why It Compounds

Because future queries rely on reading the wiki page rather than the raw source, the error becomes locked in as foundational knowledge. As the AI builds new syntheses on top of these flawed pages, the errors compound, creating a permanent, systemic misunderstanding that is incredibly difficult to trace back to the original source.
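The compounding step, sketched under the same assumptions (reusing the hypothetical `summarize` stub above): second-order syntheses take already-synthesized pages as input, so a flaw in any input page flows silently into every page derived from it.

```python
from pathlib import Path

def synthesize_overview(topic_pages: list[Path], wiki_dir: Path) -> Path:
    """Second-order synthesis: inputs are already-synthesized pages,
    so their baked-in errors are treated here as ground truth."""
    combined = "\n\n".join(page.read_text() for page in topic_pages)
    overview = wiki_dir / "overview.md"
    overview.write_text(summarize(combined))  # second-generation errors
    return overview
```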

## Related Failure Modes

- [[concept-silent-contradictions]] — wikis tend to resolve contradictions by overwriting one truth, losing strategic signal.
- [[concept-wiki-staleness]] — synthesized pages drift out of sync with new raw data.
- [[claim-wiki-breaks-at-scale]] — error baking is one of the reasons wikis break at scale.

## Mitigation

The [[concept-hybrid-memory-architecture]] mitigates error baking by treating wiki pages as disposable presentation artifacts that can be regenerated from a pristine database (see [[quote-database-is-truth]]).
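A sketch of the mitigation under that assumption: raw records live in a database the synthesis step never edits, and pages are rebuilt from those records on demand. The `raw_sources` schema and function names are hypothetical; `summarize` is the stub from the first sketch.

```python
import sqlite3
from pathlib import Path

def regenerate_page(db: sqlite3.Connection, topic: str, wiki_dir: Path) -> Path:
    """Rebuild a wiki page from pristine raw records. The page is a
    disposable artifact: rerunning this overwrites any baked-in error."""
    rows = db.execute(
        "SELECT body FROM raw_sources WHERE topic = ?", (topic,)
    ).fetchall()
    raw = "\n\n".join(body for (body,) in rows)
    page = wiki_dir / f"{topic}.md"
    page.write_text(summarize(raw))  # errors live only until the next rebuild
    return page
```

Because the database remains untouched, an error baked into a page is no longer permanent: regeneration replaces it, rather than layering new syntheses on top of it.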

## Conventional View Challenged

See [[contrarian-dashboards-hide-truth]] — readable AI summaries are dangerous precisely because they hide raw truth.

## Related across days
- [[concept-silent-contradictions]]
- [[concept-silent-failure]]
- [[claim-illusion-of-judgment]]
- [[concept-wiki-staleness]]
- [[arc-silent-failure-taxonomy]]
