---
type: "synthesis"
spans_days: [1, 2, 3, 4, 5, 6]
tags: ["brand-voice", "personalization", "knowledge-base"]
id: "arc-anti-generic-imperative"
sources: ["cross-day"]
---
## What this arc tracks

Every video in the series starts from the same diagnosis: **AI output is generic without injected personal/brand context.** Each one prescribes at least one mechanism for injecting that context. Together they form a typology of anti-generic techniques.

## Seven mechanisms

1. **Day 1 — Persistent project context.** [[concept-claude-projects]] holds brand voice docs, past hits, audience profiles. [[concept-claude-skills-d1|Skills]] inherit this context when run inside a Project.
2. **Day 1 — Identity preservation prompts.** [[concept-face-lock]] hardcodes "treat this reference image as the canonical face" into every Higgsfield call (a minimal prompt sketch follows the list).
3. **Day 2 — Retrieval-augmented voice transfer.** [[concept-knowledge-base-priming]] feeds the rewriter agent a Notion corpus of past transcripts/calls/presentations so it imitates the user's cadence. Action: [[action-populate-knowledge-base]].
4. **Day 3 — Local brand asset triad.** [[concept-brand-asset-system]] = Brand Voice doc + Design Kit + Asset Folder, all in the project directory. Action: [[action-setup-brand-assets]]. The shared loader pattern behind Days 2, 3, and 5 is sketched after this list.
5. **Day 4 — Reverse-engineered interview.** [[concept-brand-voice-interview]] flips the dynamic — Claude interviews the creator to 95% confidence, then crystallizes the result into a mutable Skill. Action: [[action-initiate-brand-interview]].
6. **Day 5 — Pre-loaded brand assets.** [[prereq-brand-assets]] (voice guidelines, personas, product/service descriptions, visual assets) sit in the local skill folder. Validation note: garbage in, garbage out.
7. **Day 6 — Verbatim quote requirement.** [[framework-persona-research-automation]] requires AI to pull **real customer quotes** per persona. This is the *anti-hallucination* version of the anti-generic imperative — ground personas in actual customer voice, not AI stereotype. A verbatim-grounding check is sketched after this list.
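
Day 1's identity-preservation move is, mechanically, just a constant instruction prepended to every image-generation request. A minimal Python sketch, assuming a hypothetical `build_image_prompt` helper rather than Higgsfield's actual API:

```python
# Day 1 "Face Lock" pattern: hardcode the canonical-face instruction into every
# call so no single generation can drift from the reference identity.
# The function and wording below are illustrative, not Higgsfield's API.

FACE_LOCK_INSTRUCTION = (
    "Treat the attached reference image as the canonical face. "
    "Preserve its facial structure, skin tone, and proportions exactly; "
    "do not restyle or reinterpret the identity."
)

def build_image_prompt(scene_description: str) -> str:
    """Prepend the identity lock to every scene prompt."""
    return f"{FACE_LOCK_INSTRUCTION}\n\nScene: {scene_description}"

print(build_image_prompt("creator speaking to camera, soft key light, 35mm look"))
```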
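
The push-voice-in mechanisms of Days 2, 3, and 5 share one shape: read local brand documents, refuse to run if any are missing, and prepend them to the working prompt. A rough sketch under assumed file names and folder layout (none of these paths come from the videos):

```python
# Hypothetical loader for the "push voice in" pattern (Days 2, 3, 5): local
# brand documents are injected into every rewrite prompt. File names and the
# folder layout are assumptions for illustration.
from pathlib import Path

BRAND_DIR = Path("brand-assets")
REQUIRED = ["brand-voice.md", "audience-personas.md", "offer-descriptions.md"]

def load_brand_context() -> str:
    """Concatenate the brand documents; fail loudly if any are missing."""
    missing = [name for name in REQUIRED if not (BRAND_DIR / name).exists()]
    if missing:
        raise FileNotFoundError(f"Brand assets not populated yet: {missing}")
    return "\n\n".join((BRAND_DIR / name).read_text() for name in REQUIRED)

def build_rewrite_prompt(draft: str) -> str:
    """Wrap a generic draft in proprietary context so the model imitates the brand voice."""
    return (
        "Rewrite the draft below in the voice defined by these brand documents.\n\n"
        f"{load_brand_context()}\n\n---\nDRAFT:\n{draft}"
    )
```

Failing loudly when the assets are missing is Day 5's "garbage in, garbage out" note turned into a precondition.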
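
Day 6's verbatim-quote requirement can be enforced mechanically: any quote a generated persona attributes to customers that cannot be found word-for-word in the real transcripts gets flagged. A small illustrative check, with hypothetical function names and toy data:

```python
# Grounding check for the Day 6 anti-hallucination rule: every persona quote
# must appear verbatim in the customer transcript corpus. Illustrative only.
import re

def extract_quotes(persona_doc: str) -> list[str]:
    """Pull double-quoted strings out of a generated persona document."""
    return re.findall(r'"([^"]+)"', persona_doc)

def unverified_quotes(persona_doc: str, transcripts: str) -> list[str]:
    """Return quotes that do not occur verbatim (whitespace-normalized) in the transcripts."""
    corpus = " ".join(transcripts.split()).lower()
    return [
        q for q in extract_quotes(persona_doc)
        if " ".join(q.split()).lower() not in corpus
    ]

transcripts = 'Customer call: "I just want the reporting to stop eating my Fridays."'
persona = 'Ops lead persona: "I just want the reporting to stop eating my Fridays."'
assert unverified_quotes(persona, transcripts) == []
```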

## What converges

All seven mechanisms commit to the same principle: **AI scales context; it does not invent context.** The creator must bring something proprietary (voice, frameworks, customer data, brand assets, visual identity), and the AI's job is to apply that proprietary asset at volume.

## What diverges

- **Direction of capture:** Day 4 pulls voice *out* of the creator via interview; Days 1, 2, 3, 5 require the creator to push voice *in* via documents.
- **Static vs. mutable:** Day 1 (static text file), Day 2 (static JSON), Day 3 (static folder) vs. Day 4 (mutable via `update the skill`). See [[arc-skill-mutability-compounding]].
- **Voice vs. visual:** Days 2, 4, and 6 work purely in *linguistic* voice; Days 1, 3, and 5 carry a *visual* layer as well (Face Lock, the Design Kit, the visual assets in the skill folder), making them the only days that integrate both.

## The prerequisite no one names but everyone assumes

**You must already have a brand identity to inject.** See [[prereq-personal-brand-strategy]] (Day 2) and [[prereq-defined-brand-identity]] (Day 4) — the two notes that make this prerequisite explicit. The other days assume it silently. The unified diagnosis: AI is downstream of strategy, not a substitute for it.