---
id: "entity-perplexity-d45"
type: "entity"
entityType: "product"
canonicalName: "Perplexity"
aliases: ["Perplexity AI"]
source_timestamps: ["00:06:41", "00:18:35"]
tags: ["search", "tooling", "ai"]
related: ["claim-perplexity-cheaper-faster", "action-use-perplexity"]
sources: ["s45-claude-limit-chatgpt-habit"]
sourceVaultSlug: "s45-claude-limit-chatgpt-habit"
originDay: 45
---
# Perplexity

## Description
**Perplexity** is an AI search engine — usable directly via web UI or via API — optimized for low-token research workflows.

## Role in This Source
The speaker strongly recommends using Perplexity for **web research** instead of relying on the native, token-heavy web search tools built into models like Claude or ChatGPT. It is the canonical *Gather Mode* tool in [[concept-gather-vs-focus]] and a core lever in [[framework-clean-conversation]].

## Why It's Cheaper
- Retrieval / scraping happens upstream of the frontier model
- Only the digested answer flows into your main model's context — saving 10K–50K tokens per search
- Perplexity API is reportedly 3–10x cheaper than running native search through Claude/ChatGPT
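The flow above can be sketched in a few lines. This is a hypothetical illustration, not the speaker's setup: the endpoint, the `"sonar"` model name, and the response shape follow Perplexity's OpenAI-compatible chat completions API, but treat every specific here as an assumption to check against the official docs.

```python
# Sketch of the "upstream retrieval" pattern: Perplexity does the scraping,
# and only the short digested answer enters the main model's context.
# Endpoint/model names are assumptions based on Perplexity's published API.
PPLX_ENDPOINT = "https://api.perplexity.ai/chat/completions"

def build_search_payload(query: str, model: str = "sonar") -> dict:
    """Build the request body for one research query."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": query}],
    }

def extract_digest(response: dict) -> str:
    """Pull only the digested answer out of the response.

    This short string, rather than tens of thousands of tokens of raw
    scraped pages, is what gets pasted into the main model's context.
    """
    return response["choices"][0]["message"]["content"]

# The actual network call (requires an API key; shown for shape only):
# import requests
# resp = requests.post(
#     PPLX_ENDPOINT,
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=build_search_payload("What changed in the latest Rust release?"),
# )
# digest = extract_digest(resp.json())
```

The point of the two helpers is the boundary they draw: everything token-heavy stays on Perplexity's side of `extract_digest`, and the frontier model only ever sees the returned string.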

See [[claim-perplexity-cheaper-faster]] for full numbers and validation.

## Caveats (from enrichment overlay)
OpenAI's SearchGPT/o3 (2026) reportedly closes much of the latency/cost gap on simple queries; Perplexity's advantage narrows there but remains material for complex research.

## Canonical Reference
- https://www.perplexity.ai/
- API docs: https://docs.perplexity.ai/docs/api-reference

## Linked Action
[[action-use-perplexity]]
