---
id: "concept-evidence-baseline-collapse"
type: "concept"
source_timestamps: ["00:09:15", "00:10:18"]
tags: ["trust-and-safety", "cybersecurity", "culture"]
related: ["concept-adversarial-twin", "action-update-trust-stack", "claim-trust-stack-obsolete"]
definition: "The destruction of trust in digital visual evidence (screenshots, receipts) because AI has reduced the cost and skill of creating flawless forgeries to zero."
sources: ["s07-chatgpt-images"]
sourceVaultSlug: "s07-chatgpt-images"
originDay: 7
---
# Evidence Baseline Collapse

## Definition

The destruction of trust in digital visual evidence (screenshots, receipts) because AI has reduced the cost and skill required to create flawless forgeries to zero.

## Detail

Consumer internet culture and institutional workflows rely on an **evidence baseline** to establish truth. Historically this baseline included artifacts like:

- screenshots of Slack messages,
- digital receipts,
- photographs of physical signage,
- boarding passes.

Because forging these required specialized software (e.g. Photoshop), time, and skill, they were generally accepted as proof of reality by **journalism fact-checkers, KYC (Know Your Customer) vendors, insurance fraud teams, and legal discovery processes**.

The advent of models with reasoning stacks ([[concept-reasoning-stack-integration]]) **completely obliterates this baseline** by dropping the cost and skill barrier of forgery to zero. A user can now use a free account to generate:

- a flawless, typographically accurate receipt,
- a fake Slack screenshot featuring a specific user's avatar and tone of voice,

all from a single natural-language prompt. Because the model understands the **structural logic** of these documents (fonts, layout, line items), the forgeries lack the typical 'AI tells'. Any verification system that relies on cheap digital visual evidence is now vulnerable.

This is the architectural cause of [[claim-trust-stack-obsolete]] and the trigger for [[action-update-trust-stack]]. Every legitimate capability has its [[concept-adversarial-twin]]. The unanswered systems-level problem is captured in [[question-trust-stack-rebuild]].

## Counter-perspective

Provenance standards such as C2PA v2.1, cryptographic hashing (e.g. blockchain-ledgered Verifiable Credentials), and ensemble classifiers (e.g. Hive Moderation) reportedly recover roughly 70% detection accuracy on AI-generated images. That is partial mitigation, not a restoration of the full baseline.
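The cryptographic-hash side of this mitigation can be sketched in a few lines. This is a minimal illustration assuming a simple HMAC-signed manifest; the function names (`issue_manifest`, `verify_provenance`) and the signing scheme are hypothetical stand-ins, not the actual C2PA v2.1 format, which uses certificate-based signatures and embedded manifests.

```python
import hashlib
import hmac

# Stand-in for an issuer's signing key (real systems use asymmetric
# certificates so verifiers never hold the signing secret).
SIGNING_KEY = b"issuer-secret-key"

def issue_manifest(asset: bytes) -> dict:
    """At capture time, the issuer hashes the asset and signs the hash."""
    digest = hashlib.sha256(asset).hexdigest()
    tag = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify_provenance(asset: bytes, manifest: dict) -> bool:
    """A verifier recomputes the hash and checks the issuer's signature."""
    digest = hashlib.sha256(asset).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["sha256"] and hmac.compare_digest(
        expected, manifest["signature"]
    )

original = b"receipt: order #1234, total $19.99"
manifest = issue_manifest(original)

assert verify_provenance(original, manifest)        # untampered asset passes
assert not verify_provenance(b"forged", manifest)   # any edit breaks the hash
```

The point of the sketch is the asymmetry it restores: a forger can generate a pixel-perfect receipt, but cannot produce a valid signature over its hash without the issuer's key, which is why this approach mitigates (without fully restoring) the evidence baseline.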


## Related across days
- [[concept-vertical-trust]]
- [[claim-trust-stack-obsolete]]
- [[claim-liability-cannot-be-automated]]
- [[arc-trust-evidence-collapse]]
