---
id: "contrarian-success-is-failure"
type: "contrarian-insight"
source_timestamps: ["00:00:26", "00:03:12"]
tags: ["metrics", "strategy", "contrarian"]
related: ["claim-klarna-intent-failure", "concept-intent-engineering"]
challenges: "The conventional view that AI failure (hallucinations, incompetence) is the primary risk to enterprise deployment."
sources: ["s24-prompt-engineering-dead"]
sourceVaultSlug: "s24-prompt-engineering-dead"
originDay: 24
---
# Contrarian: AI Success at the Wrong Metric Is Worse Than AI Failure

## The Contrarian Claim

**Conventional wisdom**: The biggest enterprise-AI risk is hallucination, incompetence, or failure to perform.

**Nate B. Jones's counter-claim**: The greatest danger is the opposite — that AI works *perfectly* at optimizing the wrong metric and scales the resulting damage.

## Why It Matters

A failed AI gets shut off. A *successful* AI optimizing the wrong target gets *expanded* — and every expansion compounds the damage.

Klarna ([[claim-klarna-intent-failure]]) is the canonical example: the agent succeeded at every metric it was given (resolution time, cost) and was expanded aggressively before anyone noticed it was destroying long-term customer relationships.
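This failure mode can be sketched as a toy simulation (all action names, times, and numbers below are hypothetical, not Klarna's actual data): an agent scored only on a proxy metric picks the proxy-optimal action even when it erodes the unmeasured true goal, and scaling the policy multiplies the hidden damage.

```python
# Toy sketch of proxy-metric optimization (hypothetical numbers).
# The agent is graded only on handle time; retention is never measured.

ACTIONS = {
    # action: (handle_time_minutes, retention_effect_per_ticket)
    "quick_close": (2.0, -0.05),   # fast, but frustrates the customer
    "full_resolve": (9.0, +0.02),  # slower, but keeps the customer
}

def best_by_proxy(actions):
    """Pick the action with the lowest handle time -- the only metric scored."""
    return min(actions, key=lambda a: actions[a][0])

def best_by_true_goal(actions):
    """Pick the action with the best retention effect -- the unmeasured goal."""
    return max(actions, key=lambda a: actions[a][1])

proxy_choice = best_by_proxy(ACTIONS)     # "quick_close"
true_choice = best_by_true_goal(ACTIONS)  # "full_resolve"

# Expanding the proxy-optimal policy compounds the damage:
tickets = 100_000
retention_delta = ACTIONS[proxy_choice][1] * tickets  # large negative number
print(proxy_choice, true_choice, retention_delta)
```

By every scored metric the agent "succeeds" (handle time drops), which is exactly why the policy gets expanded rather than shut off.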

## Implication

The primary defense is not better models, more guardrails, or stricter eval suites — it is [[concept-intent-engineering]]: making sure the metric being optimized actually corresponds to what the business wants.

## Counter-Perspective

The enrichment overlay notes that Klarna retained $40M+ in net savings even after rehires, with quality issues fixed via a hybrid model. This complicates the framing — the AI was perhaps not as catastrophically wrong as the speaker suggests, just over-deployed at one stage. But the underlying logic — *that scaled optimization of a poor proxy is the dominant failure mode* — remains directionally important.

## Related across days
- [[concept-metric-gaming]]
- [[concept-silent-failure]]
- [[claim-klarna-intent-failure]]
- [[claim-cannot-automate-unmeasurable]]
- [[arc-silent-failure-taxonomy]]
