---
id: "concept-context-degradation"
type: "concept"
source_timestamps: ["00:13:08", "00:13:14"]
tags: ["failure-modes", "llm-mechanics"]
related: ["framework-ai-failure-taxonomy", "concept-context-architecture", "prereq-basic-llm-understanding"]
definition: "An AI failure mode where output quality drops over time due to an overloaded or polluted context window."
sources: ["s42-job-market-split"]
sourceVaultSlug: "s42-job-market-split"
originDay: 42
---
# Context Degradation

## Definition

A specific AI failure mode where the **quality and coherence of an agent's output drop as a session or conversation grows longer**.

## Mechanism

The context window becomes *polluted* with accumulated turns, stale instructions, and irrelevant data, diluting the attention the LLM can devote to the core instructions and the current task.

Understanding this requires the foundational knowledge in [[prereq-basic-llm-understanding]].
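A minimal sketch of the dilution effect, not from the source: it approximates how the share of the context occupied by the core instructions shrinks as conversation turns accumulate. Word counts stand in for real tokenization, and all names here are illustrative assumptions.

```python
def instruction_share(system_prompt: str, history: list[str]) -> float:
    """Fraction of the total context occupied by the core instructions."""
    count = lambda text: len(text.split())  # crude stand-in for a tokenizer
    instruction_tokens = count(system_prompt)
    total_tokens = instruction_tokens + sum(count(turn) for turn in history)
    return instruction_tokens / total_tokens

prompt = "Summarize each ticket in one sentence."
history: list[str] = []
shares = []
for _ in range(3):
    # Each batch of turns pollutes the context further.
    history.extend(["an earlier conversation turn with extra detail"] * 20)
    shares.append(instruction_share(prompt, history))

# The instructions' share of the context falls monotonically as history grows.
print(shares)
```

Even though nothing about the instructions changed, their relative weight in the context keeps dropping, which is one way to picture why output quality degrades.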

## Architectural countermeasure

The direct prevention skill is [[concept-context-architecture]] — supplying agents with exactly the right information at exactly the right time, rather than dumping everything into context.
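One way to sketch this idea in code, under stated assumptions: rather than appending everything to the prompt, score candidate snippets for relevance to the current task and keep only what fits a fixed budget. The function name, the keyword-overlap scorer, and the word-based budget are all hypothetical simplifications, not an API from the source.

```python
def build_context(task: str, snippets: list[str], budget_words: int) -> list[str]:
    """Select the most task-relevant snippets that fit within a word budget."""
    task_terms = set(task.lower().split())
    # Rank snippets by keyword overlap with the task (a crude relevance proxy).
    ranked = sorted(
        snippets,
        key=lambda s: len(task_terms & set(s.lower().split())),
        reverse=True,
    )
    selected: list[str] = []
    used = 0
    for snippet in ranked:
        cost = len(snippet.split())
        if used + cost <= budget_words:
            selected.append(snippet)
            used += cost
    return selected
```

The design point is the budget: the agent receives the most relevant information that fits, and everything else is deliberately left out of the context.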

## Position in the taxonomy

Context degradation is the first entry in [[framework-ai-failure-taxonomy]].
