---
id: "concept-agent-iteration-speed"
type: "concept"
source_timestamps: ["00:03:30", "00:03:55"]
tags: ["agent-performance", "ux-design"]
related: ["contrarian-agent-babysitting", "action-evaluate-iteration"]
definition: "The principle that an AI agent's practical value is determined more by how quickly a user can correct its mistakes than by its initial accuracy."
sources: ["s51-512k-leaked-code"]
sourceVaultSlug: "s51-512k-leaked-code"
originDay: 51
---
# Agent Iteration Speed vs. Zero-Shot Accuracy

## Definition

The principle that an AI agent's practical value is determined more by **how quickly a user can correct its mistakes** than by its initial accuracy.

## The Demo vs. Reality Gap

Flashy demos often portray AI agents as perfectly autonomous entities that execute complex tasks flawlessly on the first try. The reality, according to [[entity-nate-b-jones|Nate B. Jones]], is that these agents require significant **babysitting**. Left unsupervised, they:

- Misread tone
- Pull incorrect technical context
- Draft inaccurate replies
- Miss subtle organizational politics

## The True Value Function

> Practical agent value = (work done correctly without human edit) − (time spent reviewing & correcting wrong actions)

An agent is only a net positive if the user can review its proposed actions and correct them **faster than doing the task manually**.
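The value function above can be sketched as a one-line calculation. This is a minimal illustration, not from the source; the parameter names and the example numbers are assumptions chosen to show when the net value flips positive.

```python
def net_agent_value(minutes_of_correct_work: float,
                    minutes_reviewing: float,
                    minutes_correcting: float) -> float:
    """Net practical value of an agent, in minutes of human time saved.

    Positive only when the work done correctly outweighs the time
    spent reviewing and correcting wrong actions. All names and
    numbers here are illustrative, not from the source.
    """
    return minutes_of_correct_work - (minutes_reviewing + minutes_correcting)


# An agent whose drafts replace 30 minutes of manual work, at the
# cost of 10 minutes of review and 5 minutes of correction:
print(net_agent_value(30, 10, 5))  # 15 -> net positive

# The same agent with a slow correction loop (25 minutes of fixes)
# drops below zero, i.e. slower than doing the task manually:
print(net_agent_value(30, 10, 25))  # -5 -> net negative
```

The second call shows the core claim: accuracy being equal, the cost of the correction cycle alone can make the agent a net negative.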

## UX Implications

This makes the UI/UX of the agent's *approval queue* critical to its success:

- How easily can the user see what the agent is about to do?
- How fast is the rejection/redirection cycle?
- How well does the agent learn from corrections?
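The three questions above suggest the minimal shape of such an approval queue: visible pending actions, a fast reject path, and stored corrections the agent can learn from. A hypothetical sketch, assuming a simple in-memory queue (all class and field names are invented for illustration):

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class ProposedAction:
    description: str              # what the agent is about to do, visible to the user
    verdict: Verdict = Verdict.PENDING
    feedback: str = ""            # correction the agent can be re-prompted with


@dataclass
class ApprovalQueue:
    actions: list[ProposedAction] = field(default_factory=list)

    def propose(self, description: str) -> ProposedAction:
        action = ProposedAction(description)
        self.actions.append(action)
        return action

    def approve(self, action: ProposedAction) -> None:
        action.verdict = Verdict.APPROVED

    def reject(self, action: ProposedAction, feedback: str) -> None:
        # The rejection/redirection cycle: storing feedback is what lets
        # the agent learn from corrections rather than repeat the mistake.
        action.verdict = Verdict.REJECTED
        action.feedback = feedback


queue = ApprovalQueue()
draft = queue.propose("Send drafted reply to the VP email thread")
queue.reject(draft, "Tone too casual for this recipient")
print(draft.verdict.value, "-", draft.feedback)
```

The design choice worth noting: rejection carries structured feedback, so each correction shortens the next review cycle instead of being thrown away.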

See also: [[contrarian-agent-babysitting]] (the contrarian framing) and [[action-evaluate-iteration]] (the recommended evaluation method).
