---
id: "claim-trust-stack-obsolete"
type: "claim"
source_timestamps: ["00:10:30", "00:10:46"]
tags: ["security", "fraud-prevention"]
related: ["concept-evidence-baseline-collapse", "action-update-trust-stack", "quote-trust-stack-update"]
speakers: ["Nate B. Jones"]
confidence: "high"
testable: true
sources: ["s07-chatgpt-images"]
sourceVaultSlug: "s07-chatgpt-images"
originDay: 7
---
# Current Trust Stacks Are Obsolete

## Claim

Any institution relying on visual evidence — **journalism fact-checkers, KYC vendors, insurance fraud teams, legal discovery** — currently operates on an obsolete baseline. Because the cost and skill required to generate flawless forgeries of receipts, documents, and screenshots have dropped to zero (see [[concept-evidence-baseline-collapse]] and [[concept-adversarial-twin]]), these visual artifacts can no longer be trusted by default.

Current mitigation efforts by AI companies — **content credentials and watermarking** — are insufficient because they do not survive basic manipulations like taking a screenshot or cropping the image.
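A minimal sketch of why this failure mode is structural: content credentials like C2PA manifests travel with the file's bytes, so any re-encoding (a screenshot, a crop-and-save) produces a new byte stream with the manifest gone and a different content hash. The byte strings below are hypothetical stand-ins for real image files, not actual C2PA data.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Byte-level SHA-256: the primitive behind file-based provenance checks.
    return hashlib.sha256(image_bytes).hexdigest()

# Stand-ins for real files: a signed original and the same picture
# re-encoded by a screenshot tool. The pixels look identical to a human,
# but the byte stream (and the embedded manifest) is entirely new.
original   = b"PNG:pixels=0xAB12;c2pa_manifest=SIGNED"
screenshot = b"PNG:pixels=0xAB12"  # re-encode drops the metadata block

print(fingerprint(original) == fingerprint(screenshot))  # False
```

Any verification scheme keyed to the original bytes therefore says nothing about the screenshot, even though a human sees the same image.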

## Speaker confidence

High.

## External validation (enrichment overlay)

**Strongly supported.** AI-generated forgeries of receipts and screenshots evade detection once they have been screenshotted or cropped. C2PA watermarks fail under such manipulation in tested conditions (~90%+ bypass rate). Institutions are shifting toward **cryptographic provenance** (blockchain-ledgered hashes, Verifiable Credentials) and **behavioral analysis**. KYC vendors like Onfido report a ~30% rise in fraud attempts involving AI-generated images since 2024.
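The cryptographic-provenance direction can be sketched in a few lines: record a hash of the artifact in an append-only ledger at capture time, then verify later by re-hashing. The in-memory `ledger` dict and `device_id` label are illustrative assumptions, standing in for a blockchain anchor or Verifiable Credential issuer.

```python
import hashlib

# Hypothetical append-only ledger: content hash -> capture attestation.
ledger: dict = {}

def register_capture(image_bytes: bytes, device_id: str) -> str:
    """Record a hash at capture time (e.g. in-camera signing)."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    ledger[digest] = f"captured-by:{device_id}"
    return digest

def verify(image_bytes: bytes):
    """Later verification: only a byte-identical file matches the ledger."""
    return ledger.get(hashlib.sha256(image_bytes).hexdigest())

photo = b"raw-sensor-bytes-0001"
register_capture(photo, "camera-42")

print(verify(photo))              # attestation found
print(verify(photo + b"edited"))  # None: any alteration breaks the match
```

The trade-off mirrors the claim above: this proves a file is *unmodified since capture*, but says nothing about re-encoded copies, which is why behavioral analysis is being layered on top.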

## Counter-balance

Combining C2PA v2.1 with blockchain attestations and ensemble classifiers (e.g. Hive Moderation) recovers roughly 70% detection accuracy for AI-generated images — partial mitigation, not full restoration. This tension drives [[action-update-trust-stack]] and the open question [[question-trust-stack-rebuild]].


## Related across days
- [[concept-evidence-baseline-collapse]]
- [[concept-vertical-trust]]
- [[action-update-trust-stack]]
- [[claim-illusion-of-judgment]]
- [[arc-trust-evidence-collapse]]
