---
id: "action-fact-check-prompt"
type: "action-item"
source_timestamps: ["00:06:07"]
tags: ["qa", "prompt-engineering"]
related: ["claim-ai-fact-checking", "entity-product-perplexity"]
action: "Prompt Claude to 'first fact-check that every single [resource] is public and contains [criteria]'."
outcome: "Claude will halt, perform web research, and remove invalid items before generating the video."
---
# Prompt Claude to Fact-Check Before Rendering

## Action

Add an explicit QA step to your generation prompt. Example template:

> "Before rendering, first fact-check that every single [resource] is [public/open-source/etc.] and contains [criteria]. Remove anything that fails."

This triggers [[entity-product-claude-code|Claude Code]] to invoke [[entity-product-perplexity|Perplexity]] through its [[concept-mcp|MCP]] server.
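For this to work, a Perplexity-backed MCP server has to be registered with Claude Code first. A minimal sketch using the `claude mcp add` command — the server package name and environment variable are assumptions; check the server's own documentation for the exact invocation:

```shell
# Sketch: register a Perplexity MCP server with Claude Code.
# "server-perplexity-ask" and PERPLEXITY_API_KEY are assumptions --
# substitute the package and env var your chosen server documents.
claude mcp add perplexity \
  --env PERPLEXITY_API_KEY=your-key-here \
  -- npx -y server-perplexity-ask
```

Once registered, the fact-check instruction in the prompt gives Claude a reason to call the server's search tool before it starts rendering.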

## Outcome

Claude will halt, perform web research, and **remove invalid items** before generating the video. In the demonstration, a private GitHub repository was identified and removed from the script.

## Caveat

The enrichment overlay flags that LLM fact-checking is **assistive, not authoritative** — it can miss nuance, accept incorrect sources, or hallucinate. Treat it as a first-pass filter, not final QA. See [[claim-ai-fact-checking]] for the full assessment.

## Related

- [[framework-automated-content-pipeline]] — bridges steps 1 and 2
