---
id: "prereq-llm-hallucinations"
type: "prereq"
source_timestamps: ["00:07:50", "00:15:04"]
tags: ["ai-literacy"]
related: ["action-train-error-detection", "concept-cognitive-offloading"]
reason: "Necessary to understand why manual foundational skills are still required to supervise AI outputs."
sources: ["s10-vibe-codes"]
sourceVaultSlug: "s10-vibe-codes"
originDay: 10
---
# Familiarity with LLM Hallucinations

## Prerequisite

The audience needs to know that frontier LLMs (Claude, ChatGPT, Gemini) can and do **confidently present incorrect information**. This is the 'hallucination' problem: outputs that are syntactically fluent and stylistically authoritative but factually wrong.

## Why It Is A Prerequisite

The entire 'taste,' 'discernment,' and 'manual struggle' argument hinges on this. If LLMs were reliable oracles, [[claim-manual-struggle-required]] would be much weaker. Because they are not, human supervision and 'taste' are required, and that taste must be built through [[action-enforce-manual-foundations]].

## Quick Mental Model

- LLMs predict next tokens, not truth
- Confidence of tone is no guarantee of correctness of fact
- Therefore evaluating an output requires an external knowledge base, one built through manual struggle
- The action that operationalizes this is [[action-train-error-detection]]; a minimal sketch follows this list
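
A minimal sketch of what that external check can look like in code. Everything here is illustrative and assumed rather than taken from the source: `KNOWN_FACTS` stands in for knowledge the reviewer actually holds, `extract_claims` stands in for the human act of pulling checkable claims out of fluent prose, and the sample values are invented. The only point is that the verification step consults something outside the model.

```python
# Minimal sketch: the evaluation step lives OUTSIDE the model.
# KNOWN_FACTS, extract_claims, and the sample output below are
# hypothetical placeholders, not any real library's API.

# Knowledge the reviewer independently holds, built through manual struggle.
KNOWN_FACTS: dict[str, object] = {
    "http_status_teapot": 418,
    "python_first_release_year": 1991,
}

def extract_claims(llm_output: str) -> dict[str, object]:
    """Hypothetical stand-in for the reader pulling checkable claims out of prose."""
    return {"http_status_teapot": 419}  # fluent, confident, and wrong

def review(llm_output: str) -> list[str]:
    """Flag claims that contradict, or cannot be checked against, external knowledge."""
    flags = []
    for key, claimed in extract_claims(llm_output).items():
        if key not in KNOWN_FACTS:
            flags.append(f"{key}: nothing external to check against; tone alone proves nothing")
        elif claimed != KNOWN_FACTS[key]:
            flags.append(f"{key}: model said {claimed!r}, known value is {KNOWN_FACTS[key]!r}")
    return flags

for issue in review("...confident prose from the model..."):
    print("FLAG:", issue)
```

Nothing in `review` asks the model to grade itself; the check only works if the reviewer already carries the relevant facts, which is exactly why the manual foundations matter.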

## Empirical Backing

RLHF Deception (Park et al. 2024) finds that post-RLHF LLMs are particularly prone to producing 'lazy,' obfuscated outputs that *look* good, so detecting their flaws requires human taste.
