---
type: "synthesis"
sources: ["cross-day"]
tags: ["brand-voice", "knowledge-base", "personalization"]
id: "arc-brand-voice-extraction-spectrum"
---
## The corpus's most contested layer

Every video agrees that generic AI output is the failure mode. Every video proposes a different *method* of injecting the creator's voice. The methods are not mutually exclusive — they form a layered defense.

## The five methods, weakest to strongest

### 1. Prerequisite — brand assets must pre-exist

The floor. Without this, no method works.

- [[prereq-brand-assets]] (Tim) — voice guidelines, personas, product descriptions.
- [[prereq-personal-brand-strategy]] (CCC) — clear target audience and value proposition.
- [[prereq-defined-brand-identity]] (MAG) — content pillars, anti-tone, disclosure norms.

**Without strategy, no extraction method matters.** This is the corpus's most consistent point.

### 2. Persistent workspace — Claude Projects

- [[concept-claude-projects]] (Alex) — attach brand voice docs, past hits, audience profiles, visual references. Context **stays** with the workspace.

### 3. Local-folder brand asset system

- [[concept-brand-asset-system]] (Sabrina) — three artifacts in a local directory: Brand Voice file + Design Kit + Asset Folder. Read by [[concept-claude-code]] on session start. The CLI analog of Claude Projects.
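
One way to lay this out on disk (a sketch with hypothetical file names; Claude Code does read a `CLAUDE.md` memory file at session start, which can point at the three artifacts):

```text
my-brand/
├── CLAUDE.md          # tells Claude Code to load the three artifacts below
├── brand-voice.md     # Brand Voice file: tone, vocabulary, anti-tone
├── design-kit.md      # Design Kit: colors, type, layout rules
└── assets/            # Asset Folder: logos, templates, past creative
```

The point is persistence: every session opens with the same brand context, mirroring what Claude Projects does in the app.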

### 4. Knowledge-base priming (retrieve from a corpus of past outputs)

- [[concept-knowledge-base-priming]] (CCC) — paste raw transcripts of past videos, calls, presentations into Notion. The Rewriter agent reads this corpus and matches voice. See [[action-populate-knowledge-base]] and [[quote-knowledge-base-importance]].

### 5. Reverse-engineered interview (the most active method)

- [[concept-brand-voice-interview]] (MAG) — Claude **interviews the creator** until 95% confident it can replicate the voice, then saves to a Skill (`/write-content`). See [[action-initiate-brand-interview]].
- Reinforced by [[framework-skill-refinement-loop]] — weekly feedback updates the Skill.
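
What the saved Skill might look like (a hedged sketch: Skills are markdown files with YAML frontmatter, but the section names and placeholder contents here are illustrative, not MAG's actual output):

```markdown
---
name: write-content
description: Draft content in the creator's voice, per the interview-derived profile below.
---

## Voice profile (from the interview)
- Tone: ...
- Signature phrases: ...
- Anti-tone (never do): ...

## Revision log
- Week 1: ...   <!-- appended by the weekly skill-refinement loop -->
```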

## Method 6 — Dara's inversion

A different paradigm entirely. Dara doesn't extract the creator's voice; she has AI **infer the *audience's* voice from customer reviews** and ad creative.

- [[concept-inferred-target-personas]] — personas from a brand's ads.
- [[framework-persona-research-automation]] — personas from 3,000–5,000 customer reviews, with a **verbatim quote requirement** as an anti-hallucination control.
- The strategic move: cross-reference review-based personas vs ad-inferred personas to find creative gaps.

This is the only method in the corpus that doesn't assume the creator already knows their voice.
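
The verbatim-quote control above is directly checkable: reject any persona quote that is not an exact substring of some review. A minimal sketch (hypothetical function and toy data, not Dara's actual pipeline):

```python
def flag_fabricated_quotes(
    personas: dict[str, list[str]], reviews: list[str]
) -> list[tuple[str, str]]:
    """Return (persona, quote) pairs whose quote appears verbatim in no review.

    Anti-hallucination control: every quote the model attributes to a
    persona must be an exact (case-insensitive) substring of a real review.
    """
    corpus = [review.lower() for review in reviews]
    return [
        (persona, quote)
        for persona, quotes in personas.items()
        for quote in quotes
        if not any(quote.lower() in review for review in corpus)
    ]
```

Run it after each persona-generation pass; anything flagged is a hallucinated quote, and the persona claim it supports should be discarded or re-sourced.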

## How to layer these

A mature creator stack looks like:

1. Establish strategy (the prereqs).
2. Build a [[concept-brand-asset-system]] or [[concept-claude-projects]] for persistent context.
3. Run the [[concept-brand-voice-interview]] to crystallize a `/write-content` Skill.
4. Add [[concept-knowledge-base-priming]] for high-fidelity voice retrieval.
5. Use Dara's [[framework-persona-research-automation]] to keep audience understanding fresh.
6. Refine weekly via [[framework-skill-refinement-loop]].

No single video prescribes this combined stack. The 6-vault synthesis does.