---
type: "synthesis"
spans_days: [2, 4, 6]
tags: ["risk", "platform-policy", "open-questions"]
id: "arc-platform-policy-risk"
sources: ["cross-day"]
---
## What this arc tracks

Multiple workflows in the series operate near or against the published terms of service of the platforms they touch. No speaker addresses this with the depth it deserves. Three days surface the issue as an *open question* rather than a resolved one.

## The three exposures

- **Day 2 (CCC):** [[concept-browser-automation]] via [[entity-claude-in-chrome]] gives Claude DOM-level access to authenticated Instagram. Open question: [[question-instagram-scraping-limits]] — Instagram aggressively polices automated scraping; no benchmark of safe daily volume is given.
- **Day 4 (MAG):** [[claim-solo-creator-volume]] / 250+ posts/week pushed through [[entity-blotato]] to LinkedIn, X, Facebook. Open question: [[question-blotato-rate-limits]] — X caps write actions per 24h; Meta flags "inauthentic behavior." Blotato's compliance logic is unpublished.
- **Day 6 (Dara):** [[concept-agentic-ai-workflows]] visually reads Meta Ad Library pages when direct fetching is blocked. Meta's anti-bot policies apply.
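All three exposures reduce to the same primitive: unbounded automated writes against platforms that cap them. As a hedged illustration (the cap numbers and platform keys below are placeholders, not documented limits; none of the tools in the series publish theirs), a scheduler could enforce a per-platform daily write budget:

```python
"""Hypothetical per-platform daily write budget. Cap values are
placeholders, not documented platform numbers."""
from collections import defaultdict
from datetime import datetime, timezone

DAILY_CAPS = {"x": 100, "linkedin": 25, "facebook": 50}  # illustrative only

class WriteBudget:
    def __init__(self, caps=DAILY_CAPS):
        self.caps = caps
        self.counts = defaultdict(int)
        self.day = datetime.now(timezone.utc).date()

    def allow(self, platform: str) -> bool:
        today = datetime.now(timezone.utc).date()
        if today != self.day:  # reset counters at UTC midnight
            self.counts.clear()
            self.day = today
        if self.counts[platform] >= self.caps.get(platform, 0):
            return False  # over budget: hold the post instead of sending it
        self.counts[platform] += 1
        return True
```

A platform with no configured cap defaults to zero, so an unlisted target is blocked rather than silently unlimited.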

## Adjacent exposures the series doesn't name

- **Day 5 (Tim):** RSS-triggered multi-platform publishing operates against platform automation policies that vary by site (LinkedIn's are stricter than X's, which are stricter than Facebook's).

- **Day 1 (Alex):** Face-locked thumbnails of non-self subjects raise consent/policy issues even within YouTube's TOS. The Day 1 primer flags this but no action item resolves it.
- **Day 3 (Sabrina):** Blotato-mediated cross-platform publishing inherits the same risks as Day 4 + 5.

## The three layers of risk

1. **Account-level risk** — shadowban, rate-limit, suspension. Survivable but expensive.
2. **Brand-level risk** — flagged as inauthentic behavior, erodes algorithmic trust over months.
3. **Legal risk** — scraping for commercial use crosses into trade-secret / TOS / CFAA territory in some jurisdictions; structural copying of competitor content (Day 2's archetype) flirts with copyright on aesthetic expression.

## What the series gets right

- Day 2 explicitly recommends [[action-train-algorithm|training the Instagram algorithm]] before scraping — a tacit acknowledgment of detection risk.
- Day 4 builds in *human review of every piece* (250 posts/week, all checked) — an implicit hedge against full-automation flagging.
- Day 6's Dara emphasizes *spot-check outputs* — an explicitly anti-hype stance.

## What no speaker says

**Burner accounts. Rotation policies. Volume ramps. Detection-evasion countermeasures. Insurance for account loss. Legal review of structural-rewriting risk.** These are the operational details that distinguish a hobbyist workflow from a production one — and they are uniformly absent from the series.
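One of those absent details, a volume ramp, is simple to make concrete. The starting quota and growth factor below are illustrative assumptions, not recommendations from any speaker:

```python
"""Hypothetical volume ramp: grow weekly posting volume geometrically
toward a target instead of jumping straight to full volume."""
def ramp_schedule(target_per_week: int, start: int = 10, growth: float = 1.5):
    """Yield weekly post quotas that grow toward target_per_week."""
    quota = start
    while quota < target_per_week:
        yield quota
        quota = min(target_per_week, int(quota * growth))
    yield quota  # final week at the full target

# list(ramp_schedule(250)) -> [10, 15, 22, 33, 49, 73, 109, 163, 244, 250]
```

Ramping toward the Day 4 target of 250/week this way takes roughly ten weeks, which is exactly the kind of operational schedule the series never states.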

## How a downstream agent should answer

When a user proposes deploying any of these pipelines at volume:

> The technical pipeline works. The platform-policy risk is real and under-discussed in the source material. Before scaling: (1) read each target platform's developer/automation policy; (2) build a kill-switch into your scheduler; (3) plan for account loss as a non-zero probability event; (4) keep human review on every piece at volume — Day 4's 250/week is human-edited, not autonomous.
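Point (2), the kill-switch, is the easiest to sketch. Everything here is an assumption rather than anything the series specifies: the flag-file path, the error budget, and the `publish` callable are all hypothetical.

```python
"""Minimal kill-switch sketch for a publishing loop. Assumes a scheduler
you control; the flag file and error budget are hypothetical choices."""
import pathlib

KILL_FILE = pathlib.Path("STOP_PUBLISHING")  # touch this file to halt
MAX_CONSECUTIVE_ERRORS = 3  # repeated failures often mean rate-limiting

def run_cycle(post_queue, publish, kill_file=KILL_FILE):
    errors = 0
    for post in post_queue:
        if kill_file.exists():  # manual off switch checked every cycle
            return "halted: kill file present"
        try:
            publish(post)
            errors = 0
        except Exception:
            errors += 1
            if errors >= MAX_CONSECUTIVE_ERRORS:
                return "halted: error budget exhausted"
    return "completed"
```

The design choice worth noting: the switch is checked before every write, not once per run, so a human can stop a misbehaving pipeline mid-batch without killing the process.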