---
id: "claim-fluency-not-competence"
type: "claim"
source_timestamps: ["00:08:04", "00:08:38"]
tags: ["psychology", "evaluation"]
related: ["concept-confidently-wrong", "quote-fluency-competence"]
confidence: "high"
testable: false
speakers: ["Nate B. Jones"]
validation: "Supported; AWS and others highlight the need to evaluate beyond fluent outputs."
sources: ["s42-job-market-split"]
sourceVaultSlug: "s42-job-market-split"
originDay: 42
---
# Fluency does not equal competence in AI outputs.

## Claim

Humans naturally **conflate fluent, confident communication with factual correctness**. Because AI models do not exhibit the human 'tells' of uncertainty (stumbling, hesitation, hedging), practitioners often assume an AI's output is correct simply because it is well-written.

## Confidence

- **Speaker confidence**: high.
- **Testable**: not directly (it is an explanatory claim about cognition and AI behaviour).
- **External validation**: **Supported**. [[entity-aws]] and other practitioner sources highlight the need to evaluate beyond fluent outputs in agent systems: an orchestration layer must verify that a response actually satisfies the user's intent, however confident it sounds.

## Related

- [[concept-confidently-wrong]]
- [[quote-fluency-competence]]
