---
id: "claim-combative-model"
type: "claim"
source_timestamps: ["00:00:00"]
tags: ["model-behavior", "prompt-engineering"]
related: ["concept-literal-instruction-following"]
confidence: "high"
testable: true
speakers: ["Nate B. Jones"]
sources: ["s12-opus-47"]
sourceVaultSlug: "s12-opus-47"
originDay: 12
---
# Opus 4.7 is highly combative and literal

## Claim

[[entity-claude-opus-4-7-d12|Opus 4.7]] is the most **'combative' and 'literal' model [[entity-anthropic-d12|Anthropic]] has released**.

Specifically, it:
- Refuses to infer unstated intent.
- Requires explicit instructions for formatting.
- Pushes back, or applies safety behavior aggressively, when a prompt is ambiguous or touches on sensitive topics.

See the underlying behavior at [[concept-literal-instruction-following]].

## Confidence: High

Observable via direct interaction; the shift from 4.6 to 4.7 is described as immediately apparent.

## Testable: Yes

Run A/B prompts that rely on inferred formatting or tone: 4.7 should produce stripped-down, literal outputs where 4.6 produced inference-rich ones.
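The A/B test above can be sketched as a small harness. This is a hypothetical illustration, not from the source: `make_ab_prompts` and `inferred_richness` are made-up helper names, and the actual model call is left as a placeholder for whatever client SDK you use.

```python
# Hypothetical A/B harness for probing literal vs. inference-rich behavior.
# The model call itself is intentionally a stub -- plug in your own client.

def make_ab_prompts(task: str) -> dict:
    """Pair an ambiguous prompt (relies on the model inferring format/tone)
    with an explicit one that front-loads formatting intent."""
    return {
        "ambiguous": task,
        "explicit": (
            "Format the answer as a numbered list with a one-line summary "
            "at the top. Task: " + task
        ),
    }

def inferred_richness(output: str) -> int:
    """Crude proxy metric: count structural elements (headings, bullets,
    numbered items) the model added without being asked."""
    markers = ("-", "*", "#", "1.")
    return sum(
        line.lstrip().startswith(markers) for line in output.splitlines()
    )

def call_model(model: str, prompt: str) -> str:
    """Placeholder: replace with a real API call for the model under test."""
    raise NotImplementedError
```

The claim predicts that, on the "ambiguous" prompt, `inferred_richness` drops sharply from 4.6 to 4.7, while the "explicit" prompt scores similarly on both.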

## External Validation Status

**No direct validation** per the enrichment overlay. The general industry trend toward literalness in evaluation contexts (models avoiding inference to pass strict tests) is consistent, but no Opus-specific 4.6→4.7 shift has been independently reported.

## Operator Implications

- Adopt [[action-front-load-intent]] as default prompting practice.
- Expect more 'pushback' on ambiguous prompts; treat that as a feature, not a bug.
- See contrarian framing: [[contrarian-literal-feels-dumber]].

## Cross-References

- Concept: [[concept-literal-instruction-following]]
- Action: [[action-front-load-intent]]
- Quote: [[quote-smartest-combative]]
- Contrarian: [[contrarian-literal-feels-dumber]]
