---
id: "action-pair-same-models"
type: "action-item"
source_timestamps: ["00:06:36", "00:07:15"]
tags: ["architecture", "model-selection"]
related: ["concept-model-empathy", "concept-meta-task-agent-split", "entity-product-claude", "entity-product-chatgpt"]
action: "Use the same foundation model for both the Meta-Agent and the Task Agent to leverage shared context."
outcome: "Higher quality harness rewrites and faster optimization due to implicit understanding of model behavior."
sources: ["s04-karpathy-agent-700"]
sourceVaultSlug: "s04-karpathy-agent-700"
originDay: 4
---
# Pair Meta-Agents and Task Agents from the same model family

## Action
Use the same foundation model for both the Meta-Agent and the Task Agent to leverage shared context.

## Outcome
Higher quality harness rewrites and faster optimization due to implicit understanding of model behavior.

## Detail
When designing a [[concept-meta-task-agent-split|dual-agent architecture]], leverage [[concept-model-empathy|Model Empathy]] by building both agents on the same foundation model family: [[entity-product-claude|Claude]] optimizing Claude, or [[entity-product-chatgpt|ChatGPT]] optimizing ChatGPT.
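The pairing constraint can be enforced in the harness itself. Below is a minimal sketch, assuming a hypothetical `AgentPair` wrapper and made-up model identifiers; the family mapping and class names are illustrative, not part of any real SDK.

```python
# Hypothetical mapping from model identifier to model family.
# Replace with the identifiers your actual provider/SDK uses.
FAMILY = {
    "claude-sonnet": "claude",
    "claude-haiku": "claude",
    "gpt-4o": "chatgpt",
    "gpt-4o-mini": "chatgpt",
}

class AgentPair:
    """Pairs a Meta-Agent model with a Task Agent model,
    enforcing the same-family default described above."""

    def __init__(self, meta_model: str, task_model: str):
        meta_family = FAMILY[meta_model]
        task_family = FAMILY[task_model]
        if meta_family != task_family:
            # Cross-family pairing forfeits model empathy;
            # fail loudly rather than silently degrade.
            raise ValueError(
                f"Meta-Agent family '{meta_family}' does not match "
                f"Task Agent family '{task_family}'"
            )
        self.meta_model = meta_model
        self.task_model = task_model
```

Note that the constraint is on the *family*, not the exact model: a larger Meta-Agent (e.g. `claude-sonnet`) can still tune a smaller Task Agent (`claude-haiku`), since they share training lineage.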

## Why
Shared weights, training data, and RLHF tuning give the Meta-Agent implicit understanding of:
- The Task Agent's reasoning patterns
- Its specific failure modes
- Its formatting preferences

Per the enrichment overlay benchmark, same-family pairing yields roughly 15-20% better harness-tuning performance.

## Caveat
This is a strong default, not an absolute law. Fine-tuned cross-model adapters can close the gap, per the enrichment overlay's counter-perspectives.
