---
id: "contrarian-post-training-over-intelligence"
type: "contrarian-insight"
source_timestamps: ["00:11:05"]
tags: ["ai-development", "model-evaluation", "contrarian"]
related: ["claim-post-training-beats-raw-intelligence"]
challenges: "The assumption that scaling laws and raw parameter counts are the only path to autonomous AI agents."
sources: ["s16-openclaw-saga"]
sourceVaultSlug: "s16-openclaw-saga"
originDay: 16
---
# Contrarian: Post-Training Over Raw Intelligence

## Conventional View

Building better agents requires fundamentally smarter, larger foundation models with higher parameter counts. Scaling is destiny.

## Contrarian Insight

[[entity-peter-steinberger-d16]] argues current models are already **smart enough**. The actual bottleneck is **post-training** — specifically, training models to:

- Write correct code over **long contexts**
- Recover from errors mid-task
- Reliably interact with tools, APIs, and shells
- Persist toward goals over multi-step trajectories
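These capabilities can be pictured as a simple agent loop. The sketch below is illustrative only (the `flaky_shell` tool, its failure mode, and the retry policy are hypothetical assumptions, not from the source): it shows multi-step persistence toward a goal, tool invocation, and recovery from a mid-task error rather than aborting.

```python
# Minimal sketch of an agent loop exercising the capabilities above.
# The tool and retry policy are illustrative assumptions.

def flaky_shell(cmd, state):
    """Hypothetical tool: fails on its first call to simulate a transient mid-task error."""
    if state["calls"] == 0:
        state["calls"] += 1
        raise RuntimeError("transient tool failure")
    state["calls"] += 1
    return f"ok: {cmd}"

def run_agent(steps, tool, max_retries=2):
    """Persist across all steps; retry a failed tool call instead of aborting the task."""
    state = {"calls": 0}
    transcript = []
    for step in steps:
        for attempt in range(max_retries + 1):
            try:
                transcript.append(tool(step, state))
                break  # step succeeded, move on to the next one
            except RuntimeError as err:
                if attempt == max_retries:
                    transcript.append(f"gave up on {step}: {err}")

    return transcript

result = run_agent(["ls", "pytest"], flaky_shell)
```

The point of the sketch is that none of this requires a smarter model, only behavior (retrying, continuing the trajectory) that post-training can instill.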

## What It Challenges

The assumption that scaling laws and raw parameter counts are the only path to autonomous AI agents.

## Connected Claim

See the underlying [[claim-post-training-beats-raw-intelligence]].

## Steelman of the Counter-Argument

Enrichment review: reasoning models such as o1/o3, which scale **inference-time compute**, still beat post-trained agents on SWE-Bench (75%+ solve rate). On novel tasks where post-training data is sparse, raw reasoning generalizes better. The truth is probably "both/and": post-training is necessary but not sufficient.
