---
id: "concept-upstream-migration"
type: "concept"
source_timestamps: ["00:22:21", "00:23:43", "00:24:49"]
tags: ["career-strategy", "future-of-work"]
related: ["action-migrate-upstream", "question-defensibility-of-judgment"]
definition: "The necessary shift of human value creation away from basic execution and data gathering toward higher-order skills like judgment, taste, and complex system architecture."
sources: ["s47-polymarket-bot"]
sourceVaultSlug: "s47-polymarket-bot"
originDay: 47
---
# Upstream Migration

## Definition

The necessary shift of human value creation away from basic execution and data gathering toward higher-order skills: judgment, taste, institutional context, and complex system architecture.

## Why it's forced

As AI rapidly compresses the time and cost required for data gathering, basic synthesis, and execution, the locus of human value creation is forced to migrate *upstream*. Tasks that previously took hours — formatting data, writing boilerplate code, compiling research — are now automated in seconds. This dynamic is enabled by the LLM capabilities described in [[prereq-llm-capabilities]] and is the direct cause of the gaps catalogued in [[framework-arbitrage-gap-taxonomy]].

## The financial-analyst example

The speaker's example is a junior financial analyst whose job was **70% data gathering and 10% judgment**. When AI collapses the 70% to near zero, the role must migrate upstream so that roughly **40% of the analyst's time goes to judgment and context interpretation**. The gap shifts from *who can compile the data* to *who can interpret the data in context and make a defensible recommendation*.
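
A rough renormalization sketch (this arithmetic is mine, not the speaker's): if the 70% data-gathering share collapses to zero and the remaining work expands to fill the analyst's time, the original 10% judgment share becomes

$$
\frac{10\%}{100\% - 70\%} = \frac{10}{30} \approx 33\%
$$

of the role, in the neighborhood of the speaker's ~40% figure once the role also absorbs new judgment work that did not previously exist.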

Roles that cannot or will not make this upstream migration will be arbitraged out of the market. The corresponding action is [[action-migrate-upstream]].

## Open question

Is the upstream position permanently defensible? See [[question-defensibility-of-judgment]]: frontier models (e.g., the rumored [[entity-claude-mythos-d47]]) may eventually compress judgment too. The Enrichment Overlay cites Stanford HAI's view that current LLMs still fail at true reasoning (e.g., GPQA misinterpretations), which keeps human judgment defensible for *longer*, though not necessarily forever.
