---
id: "question-liability-legal-precedent"
type: "open-question"
source_timestamps: ["19:04:00", "19:25:00"]
tags: ["legal", "regulation"]
related: ["concept-vertical-liability"]
resolutionPath: "Landmark legal rulings involving AI-generated contracts, financial losses, or medical malpractice."
sources: ["s28-5-safe-places"]
sourceVaultSlug: "s28-5-safe-places"
originDay: 28
---
# How Will Courts Handle AI Liability?

## The Question

While [[claim-liability-cannot-be-automated|AI cannot be sued or jailed]], **the exact legal mechanisms for assigning liability when an autonomous agent makes a catastrophic error** (e.g., in finance or medicine) remain untested in the broader legal system.

## Open Sub-Questions

- Does liability attach to the model provider (e.g., OpenAI or Anthropic), the deploying business, or the end user?
- How does the EU AI Act's "human oversight" requirement translate into specific liability assignments?
- Can blockchain/DAO smart-contract mechanisms substitute for human liability absorption in narrow domains?

## Resolution Path

Resolution likely requires landmark legal rulings involving AI-generated contracts, financial losses, or medical malpractice.

## Why It Matters

The entire [[concept-vertical-liability|Liability vertical]] depends on courts reaffirming that *humans* must absorb risk. If on-chain or insurance-pool mechanisms gain legal recognition as substitutes, the strict version of [[claim-liability-cannot-be-automated]] erodes.

## Related Across Days
- [[claim-liability-cannot-be-automated]]
- [[concept-vertical-liability]]
- [[question-autonomous-ownership]]
