---
id: "contrarian-literal-feels-dumber"
type: "contrarian-insight"
source_timestamps: ["00:00:00"]
tags: ["user-experience", "model-behavior", "contrarian-insight"]
related: ["concept-literal-instruction-following"]
challenges: "The assumption that strict instruction adherence equates to a better or 'smarter' user experience in conversational AI."
sources: ["s12-opus-47"]
sourceVaultSlug: "s12-opus-47"
originDay: 12
---
# Contrarian: Literal instruction following makes models feel 'dumber'

## What Conventional Wisdom Says

A model that follows instructions perfectly is 'smarter' and more aligned.

## What the Speaker Argues

Because users are accustomed to models inferring unstated intent and formatting, [[entity-claude-opus-4-7-d12|Opus 4.7]]'s strict [[concept-literal-instruction-following|literalness]] actually makes it **feel less helpful and 'dumber' to casual users**, even though it is technically executing the prompt more accurately.

## What This Challenges

The assumption that strict instruction adherence equates to a better or 'smarter' user experience in conversational AI.

## Implications

- **For Anthropic**: This is a deliberate choice — trade casual-chat delight for enterprise-pipeline reliability.
- **For users**: The gap is a **prompting skill gap**. Users must adapt by being more explicit (see [[action-front-load-intent]]).
- **For the industry**: Smartness ≠ helpfulness. The two have been conflated by the chat-interface era.

## Counter-Counterpoint

From the enrichment overlay's external perspective: literalness is *also* a feature for benchmark performance — strict adherence prevents over-inference errors, and benchmarks reward exact, literal compliance. The 'dumber' feel is a user adaptation issue, not a model flaw.

## Cross-References

- Concept: [[concept-literal-instruction-following]]
- Action: [[action-front-load-intent]]
- Claim: [[claim-combative-model]]
