---
id: "claim-training-models-not-moat"
type: "claim"
source_timestamps: ["04:42:00", "05:04:00", "05:15:00"]
tags: ["moats", "model-training", "infrastructure"]
related: ["contrarian-training-not-moat", "entity-replit", "concept-build-layer-collapse"]
speakers: ["Nate B. Jones"]
confidence: "high"
testable: true
sources: ["s28-5-safe-places"]
sourceVaultSlug: "s28-5-safe-places"
originDay: 28
---
# Training Your Own Model Is Not the Actual Moat

## Claim

Contrary to conventional wisdom, training a custom model (as done by Cursor or [[entity-replit|Replit]]) is **not** what separates survivors from casualties. Startups cannot out-train massive labs like Anthropic, OpenAI, or Google.

The true moat lies in **structural assets the model providers lack**:

- Owning the **runtime** — the actual compute environment where code executes (Replit's edge).
- Owning the **deployment infrastructure** — production hosting at scale (Vercel's edge).

## Confidence: High

## Testable: Yes

## Validation (per enrichment)

**Validated.** Startups like [[entity-replit|Replit]] and Cursor acknowledge that fine-tuning helps, but they emphasize runtime/infrastructure ownership as the key differentiator. Labs like OpenAI and Anthropic dominate base-model training. Consensus: data and runtime ownership matter more than custom models for most apps.

## Counter-Position

Cohere and Anthropic founders argue that enterprise fine-tunes create data moats. The qualified version: training is not the moat *for app-layer startups*, but it can compound when paired with proprietary runtime or data ownership.

## See Also

- Contrarian framing: [[contrarian-training-not-moat]]
- Diagnosis: [[concept-build-layer-collapse]]
