---
id: "action-train-error-detection"
type: "action-item"
source_timestamps: ["00:07:50", "00:08:15"]
tags: ["critical-thinking", "ai-literacy"]
related: ["framework-nate-7-principles", "prereq-llm-hallucinations", "concept-metacognition"]
action: "Have children review AI outputs specifically to find errors, hallucinations, and logical flaws."
outcome: "Develops a healthy skepticism and the critical evaluation skills needed to supervise AI."
sources: ["s10-vibe-codes"]
sourceVaultSlug: "s10-vibe-codes"
originDay: 10
---
# Train Kids to Catch AI Errors

## Action

Actively train children to **distrust AI outputs** by having them review AI-generated work specifically to find errors, hallucinations, or logical flaws. This is Principle 5 ('Teach kids to catch the machine') of [[framework-nate-7-principles]].

## Prerequisite

Familiarity with how LLMs hallucinate — see [[prereq-llm-hallucinations]]. Kids must understand that LLMs *confidently produce wrong answers*, not just that they sometimes err.

## The Training Pattern

Ask questions like:
- 'What did the AI get wrong here?'
- 'How do we know this is true?'
- 'What evidence is missing?'
- 'Where did the AI hedge or get vague?'
- 'Does this number actually make sense?'

## Why

This builds the critical evaluation skills necessary to supervise machines — the [[concept-metacognition]] of AI fluency. Without it, kids accept whatever the AI produces at face value.

## Outcome

Develops healthy skepticism and the critical evaluation skills needed to supervise AI. The child becomes a director rather than a passenger.

## Implementation Patterns

- 'Hallucination hunt' as a structured exercise: the AI generates a paragraph and the kid has to find the false claim
- Cross-checking AI math against a calculator (the irony is intentional — this is the calculator-era trick applied to LLMs)
- Teaching kids to ask the AI for citations and then verify that those sources actually exist
