---
id: "framework-reference-ui-workflow"
type: "framework"
source_timestamps: ["00:19:50", "00:19:59"]
tags: ["ui-design", "software-engineering", "agentic-workflows"]
related: ["entity-images-2-0", "entity-codex", "action-mockup-to-code", "concept-visual-taste-vs-density"]
steps: ["Taste (Generate visual mockup via Images 2.0 or Opus)", "Build (Use Codex to write code matching the reference)", "Ship (Test and deploy working UI)"]
sources: ["s26-gpt55-claude-gemini"]
sourceVaultSlug: "s26-gpt55-claude-gemini"
originDay: 26
---
# Reference-to-Code UI Workflow

## Purpose
A multi-model workflow that **works around a coding model's inability to invent good visual taste from a blank prompt**. It resolves the [[concept-visual-taste-vs-density|visual taste vs information density tradeoff]] by running two models in series: one supplies the taste, the other the implementation.

## The Three Steps

### 1. Taste — Generate Mockup
Use a **visually strong model** to create a high-fidelity visual target:
- [[entity-images-2-0|Images 2.0]] for image-based mockups.
- [[entity-claude-opus-4-7|Claude Opus 4.7]] for design-language work.

Prompt the visual model with the **niche need** (audience, brand, density requirements). Iterate until the mockup hits production quality.
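
A rough sketch of this step in code, assuming the OpenAI Python SDK. The `images-2.0` model id is a stand-in (substitute whatever identifier the image model actually ships under), and the dashboard brief is purely illustrative:

```python
# Taste step sketch: turn a niche-need brief into a saved visual target.
# ASSUMPTIONS: OpenAI Python SDK; "images-2.0" is a placeholder model id;
# the response carries a base64 payload (some image models return URLs).
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

brief = (
    "High-density analytics dashboard for commodity traders. "
    "Brand: dark slate background, amber accents, monospace numerals. "
    "Include a positions table, a sparkline grid, and an alerts rail. "
    "Production-quality UI mockup, desktop viewport."
)

result = client.images.generate(
    model="images-2.0",  # placeholder: substitute the real model id
    prompt=brief,
)

with open("mockup.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

Iterate on the brief and regenerate until the saved `mockup.png` is the UI you actually want to ship.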

### 2. Build — Codex Implementation
Pass the generated image into [[entity-codex|Codex]] and instruct [[entity-gpt-5-5|GPT-5.5]] to **build the application shell matching the visual reference**. Codex's strengths in file editing, code execution, and browser-driven verification carry the implementation.
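
The hand-off itself can be as thin as a single multimodal request. A minimal sketch assuming the OpenAI Responses API, with `gpt-5.5` as a placeholder model id and a React + Tailwind target chosen only for illustration; a real Codex run layers file editing, command execution, and retry loops on top of this call:

```python
# Build step sketch: hand the mockup to the coding model as a visual reference.
# ASSUMPTIONS: OpenAI Responses API; "gpt-5.5" is a placeholder model id;
# the React + Tailwind stack is illustrative, not prescribed by the workflow.
import base64

from openai import OpenAI

client = OpenAI()

with open("mockup.png", "rb") as f:
    mockup_b64 = base64.b64encode(f.read()).decode()

response = client.responses.create(
    model="gpt-5.5",  # placeholder: substitute the real model id
    input=[{
        "role": "user",
        "content": [
            {
                "type": "input_text",
                "text": (
                    "Build a React + Tailwind application shell matching this "
                    "mockup. Reproduce the layout, spacing, palette, and type "
                    "hierarchy exactly; stub the data layer."
                ),
            },
            {
                "type": "input_image",
                "image_url": f"data:image/png;base64,{mockup_b64}",
            },
        ],
    }],
)

print(response.output_text)  # generated code, ready to drop into the repo
```

Sending the image rather than a textual description is the point: the coding model copies taste it could not originate.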

### 3. Ship — Working UI
Test and verify the UI:
- Run linters and type checks.
- Drive the browser through key flows.
- Verify visual fidelity against the original mockup.

Result: a **functional application that maintains high visual quality** without relying on the coding model's raw aesthetic taste.
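
The fidelity check in particular need not be elaborate. A minimal sketch assuming Playwright plus Pillow, a dev server at `localhost:3000`, and a shared 1280x800 viewport; a production gate would prefer a perceptual metric such as SSIM over a raw pixel delta:

```python
# Ship step sketch: screenshot the running build and diff it against the mockup.
# ASSUMPTIONS: app served at localhost:3000 (placeholder URL); both images
# cover a 1280x800 viewport; requires `pip install playwright pillow` and
# `playwright install chromium`.
from PIL import Image, ImageChops, ImageStat
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    page.goto("http://localhost:3000")  # placeholder dev-server URL
    page.screenshot(path="build.png")
    browser.close()

mockup = Image.open("mockup.png").convert("RGB").resize((1280, 800))
build = Image.open("build.png").convert("RGB").resize((1280, 800))

# Mean per-channel difference: 0 means identical, 255 means fully inverted.
stats = ImageStat.Stat(ImageChops.difference(mockup, build))
score = sum(stats.mean) / len(stats.mean)
print(f"mean pixel delta: {score:.1f}/255")
assert score < 20, "build drifts too far from the mockup"
```

The threshold of 20 is arbitrary; tune it per project, and treat a pass as a prompt for human review, not a substitute for it.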

## Operational Form
See [[action-mockup-to-code]] for the routing rule encoding this workflow.

## Related across days
- [[concept-visual-taste-vs-density]]
- [[claim-opus-visual-superiority]]
- [[claim-design-leverage-shift]]
- [[arc-anthropic-vs-openai-comparative]]
