---
type: "synthesis"
spans_days: [1, 4, 6]
tags: ["mental-models", "diagnosis", "contrarian"]
id: "arc-mental-model-diagnoses"
sources: ["cross-day"]
---
## What this arc tracks

Three of the speakers independently coin a diagnostic phrase for the same underlying error. Read together, the three phrases triangulate the most important mental model in the series.

## The three diagnoses

- **Alex (Day 1):** *"You're treating Claude like a vending machine. Input prompt, output content. That's ChatGPT thinking."* — [[quote-vending-machine]] / [[claim-vending-machine-usage]] / [[contrarian-vending-machine]].
- **Sabrina Ramonov (Day 4):** *"Most people are still treating AI like a faster typewriter. The unlock is using it to build systems that compound without you."* — [[quote-faster-typewriter]] / [[claim-ai-faster-typewriter]] / [[insight-stop-prompting-from-scratch]].
- **Dara Denney (Day 6):** *"Most creative strategists and digital marketers are using AI completely wrong... it's because they're asking AI to do the wrong job."* — [[quote-ai-wrong-job]] / [[claim-ai-wrong-job]] / [[contrarian-ai-replacement]].

## What they share

All three are diagnoses of **shallow, episodic, one-shot prompt usage** — a workflow with no persistence, no compounding, and no role-clarity about what AI is good at.

## What they disagree about

The three *prescriptions* differ subtly:

- **Alex (Vending Machine):** Build infrastructure. Skills + Projects + MCPs. The fix is **architectural**.
- **Sabrina (Faster Typewriter):** Build a compounding system. Skill + weekly refinement. The fix is **temporal** — make the AI smarter over time.
- **Dara (Wrong Job):** Reassign the role. Stop asking AI for strategy; ask it for research. The fix is **role-redefinition** — what work you delegate matters more than how you delegate it.

These are not mutually exclusive. The strongest synthesis combines all three: **architect a persistent system, compound it through feedback, and delegate to it only the work it does well.**

## The fourth, less-cited diagnosis

Alessio (Day 2) makes a related but distinct claim: [[contrarian-ai-generation-vs-rewriting]] — AI is bad at generating original viral concepts and excellent at pattern-extraction and style-transfer. This is a *more specific* version of Dara's "wrong job" framing.

## How to recognize the misuse pattern

A user is in vending-machine / faster-typewriter / wrong-job mode when they:
- Rewrite the prompt from scratch every session.
- Treat each output as the final deliverable rather than as raw material.
- Ask AI for the strategic call, not the research feeding the call.
- Don't have a named, addressable Skill / Project / Knowledge Base.
- Can't tell you what their AI "knows" about them across sessions.

## The unified prescription a downstream agent should give

> Build a persistent system (Alex's contribution), compound it weekly via feedback (Sabrina's contribution), and only delegate the work AI is structurally good at — research, pattern matching, structural extraction, style transfer (Dara and Alessio's contribution). The three diagnoses are not competitors; they are the architectural, temporal, and role-redefinition axes of the same fix.