---
id: "action-define-interpretive-boundary"
type: "action-item"
source_timestamps: ["00:11:30", "00:12:35", "00:18:15"]
tags: ["ui-ux", "risk-management"]
related: ["concept-interpretive-boundary", "concept-silent-failure"]
action: "Explicitly label AI outputs to distinguish between factual encoded data and interpretive judgments requiring human review."
outcome: "Prevents organizational overconfidence in AI outputs and ensures humans remain in the loop for critical editorial decisions."
sources: ["s15-block-layoffs"]
sourceVaultSlug: "s15-block-layoffs"
originDay: 15
---
# Define the Interpretive Boundary in UI

## Action

Explicitly label AI outputs to distinguish between factual encoded data and interpretive judgments requiring human review.

## Outcome

Prevents organizational overconfidence in AI outputs and ensures humans remain in the loop for critical editorial decisions.

## How To Do It

To prevent [[concept-silent-failure]], developers and designers must fundamentally change how AI dashboards present information. Today's systems render all data, both hard facts and guessed correlations, with the same authoritative UI.

You must build an [[concept-interpretive-boundary]] into the system. The UI must explicitly state:

> 'This is factual data we have encoded'

versus

> 'This is an interpretive leap or correlation the model is suggesting.'

By making the system's uncertainty visible, you force human managers to apply their contextual [[concept-editorial-function]] rather than blindly trusting the machine's editorial choices.
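One way to make this boundary structural rather than cosmetic is to encode it in the data model itself. The sketch below is a minimal illustration in TypeScript; all type and field names are assumptions for this example, not drawn from the source.

```typescript
// Every claim shown in the dashboard must declare which side of the
// interpretive boundary it sits on. The "kind" discriminant makes it
// impossible to handle a claim without deciding fact vs. inference.
type FactualClaim = {
  kind: "fact";
  text: string;
  // Provenance: which raw data records produced this claim.
  sourceRecordIds: string[];
};

type InterpretiveClaim = {
  kind: "inference";
  text: string;
  // Model-reported confidence in [0, 1], surfaced to the user.
  confidence: number;
  // Novel correlations are flagged for mandatory human review.
  requiresHumanReview: boolean;
  // Human-readable explanation of the inference path.
  inferencePath: string;
};

type Claim = FactualClaim | InterpretiveClaim;
```

A tagged union like this forces every rendering path to branch on `kind`, so an interpretive output can never silently borrow a fact's presentation.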

## Concrete UI Patterns to Consider

- Distinct typography or color for facts versus inferences
- Confidence intervals shown on every interpretive output
- Hover tooltips explaining the inference path
- A 'requires human review' flag on novel correlations
- Provenance metadata: which raw data points produced this claim (the sketch below combines several of these patterns)
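
As a hedged sketch of how these patterns might combine, the renderer below reuses the `Claim` type from the earlier example. The function name, CSS classes, and display format are all hypothetical choices for illustration.

```typescript
// Renders a claim with visually distinct treatment so facts and
// inferences can never be confused in the dashboard.
function renderClaim(claim: Claim): string {
  if (claim.kind === "fact") {
    // Facts get the authoritative style, with provenance in a tooltip.
    return (
      `<span class="claim-fact" title="Sources: ${claim.sourceRecordIds.join(", ")}">` +
      `${claim.text}</span>`
    );
  }
  // Inferences are styled differently, always show their confidence,
  // and carry a review badge when flagged.
  const pct = Math.round(claim.confidence * 100);
  const reviewBadge = claim.requiresHumanReview
    ? ` <span class="badge-review">requires human review</span>`
    : "";
  return (
    `<span class="claim-inference" title="${claim.inferencePath}">` +
    `${claim.text} (model confidence: ${pct}%)</span>${reviewBadge}`
  );
}
```

In a real dashboard the two CSS classes would map to visibly different treatments, for example muted color and an inference icon, so the boundary stays legible even when users stop reading tooltips.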

## Related

- [[concept-interpretive-boundary]]
- [[concept-silent-failure]]
- [[framework-world-model-principles]]
