---
id: "contrarian-agent-engineering-is-not-new"
type: "contrarian-insight"
source_timestamps: ["00:04:45", "00:05:10", "00:09:30"]
tags: ["paradigm-shift", "software-engineering", "contrarian"]
related: ["framework-rob-pike-agent-rules", "prereq-software-engineering-fundamentals"]
challenges: "The conventional view that AI agents require entirely new, complex architectural paradigms and prompt engineering frameworks."
sources: ["s41-nvidia-open-sourced"]
sourceVaultSlug: "s41-nvidia-open-sourced"
originDay: 41
---
# Contrarian: Agentic engineering is not a new paradigm

## Contrarian Position

> Agentic engineering is **not** a new paradigm. It is traditional software engineering, only more so.

## Conventional View Being Challenged

The prevailing industry hype suggests that building AI agents requires:
- A completely new computing paradigm
- Novel "agentic" frameworks (LangGraph, CrewAI, AutoGen, etc.)
- Sophisticated prompt-engineering practices
- Multi-agent orchestration as a default

## The Counter-Argument

[[entity-nate-b-jones]] argues the **opposite**: the shift to AI agents makes decades-old fundamental rules *more* important, not less. The keys to agentic success are:

1. **Good data engineering** — see [[concept-data-dominated-agent-design]]
2. **Simple algorithms / simple architectures** — see [[claim-fancy-algorithms-fail-agents]]
3. **Strict linting and software hygiene** — see [[concept-agent-environment-readiness]]
4. **Measurement before optimization** — see [[action-measure-before-optimizing]]

These are precisely [[entity-rob-pike]]'s 5 Rules of Programming, from his 1989 *Notes on Programming in C*, repackaged for agentic workloads. See [[framework-rob-pike-agent-rules]].
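Point 4 is concrete enough to sketch. A minimal, hypothetical Python example (not from the source) of measuring before optimizing: time the simple data structure at the N you actually have before swapping in a fancier one.

```python
import timeit

# Measure-before-optimizing sketch (illustrative numbers only):
# time the simple structure at your real N before replacing it.
items = list(range(20))   # small N, the common agent-workload case
lookup = set(items)       # the "fancier" alternative
target = 19

linear_t = timeit.timeit(lambda: target in items, number=100_000)
hashed_t = timeit.timeit(lambda: target in lookup, number=100_000)

# Whatever wins, the point is that the decision is now measured,
# not assumed.
print(f"linear scan: {linear_t:.4f}s  set lookup: {hashed_t:.4f}s")
```

The same discipline applies one level up: benchmark a single-agent loop on your actual task distribution before adopting a multi-agent framework.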

## Why It Matters

If this is correct, organizations should **stop chasing novel agent frameworks** and instead invest engineering effort in data structures, dev-container hygiene, lint configs, test coverage, and observability. The framework hype cycle is a distraction from the actual bottleneck.

## Counter-Counter (from enrichment)

Multi-agent systems (AutoGen, CrewAI) do perform well at scale on tool-use tasks, e.g. on the Berkeley Function-Calling Leaderboard. The strong form of this contrarian claim ("never use multi-agent") is therefore overstated; the defensible form is "don't use multi-agent for small-N tasks."

## See Also

- [[framework-rob-pike-agent-rules]]
- [[prereq-software-engineering-fundamentals]]
- [[contrarian-ai-does-not-teach-itself]] — the companion contrarian
