---
id: "prereq-llm-context-windows"
type: "prereq"
source_timestamps: ["00:04:02"]
tags: ["llm-fundamentals"]
related: ["concept-can-it-carry"]
reason: "Necessary to understand why 'carrying' a task across 23 deliverables is a significant technical achievement."
sources: ["s26-gpt55-claude-gemini"]
sourceVaultSlug: "s26-gpt55-claude-gemini"
originDay: 26
---
# Understanding of LLM Context Windows

## Prerequisite
The speaker assumes the audience understands what it means for a model to **'carry a long context without losing the thread.'**

## What You Need to Know
- **Context windows** — the maximum number of tokens (prompt plus generated output) a model can attend to at one time.
- **Token limits** — how text is chunked into tokens and counted against the window.
- **Attention degradation** — the well-documented effect where models lose fidelity in the middle of long contexts ('lost in the middle').
- **Cross-format context** — keeping coherent state across docs, code, spreadsheets, and PDFs.
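
The token-budget idea above can be sketched in a few lines. This is a rough illustration only: the 4-characters-per-token ratio is a common rule of thumb for English text, not a real tokenizer, and the 23-deliverable figure and window sizes are taken from the source's framing, not measured values.

```python
def estimate_tokens(text: str) -> int:
    """Approximate token count (~4 characters per token for English text).

    Real counts require the model's own tokenizer; this heuristic is only
    for back-of-the-envelope budgeting.
    """
    return max(1, len(text) // 4)


def fits_in_window(docs: list[str], window: int, reserve: int = 1000) -> bool:
    """Check whether all docs, plus a reserved output budget, fit the window."""
    used = sum(estimate_tokens(d) for d in docs)
    return used + reserve <= window


# 23 deliverables of roughly 2,000 tokens each (~46k tokens total).
deliverables = ["x" * 8000] * 23
print(fits_in_window(deliverables, 128_000))  # fits comfortably -> True
print(fits_in_window(deliverables, 32_000))   # exceeds the window -> False
```

Fitting inside the window is only the first hurdle; even when everything fits, attention degradation means the model may still mishandle material buried in the middle of the context.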

## Why It Matters Here
Without this background, a listener can't appreciate why **carrying a 23-deliverable launch packet** is a significant technical achievement, nor why [[concept-can-it-carry]] is a meaningful new evaluation axis.
