Crayon

Open-source browser-agent sandboxes, built from real-world sessions.

Crayon records how apps actually behave, analyzes flows, and generates functional environments that AI agents can use for repeatable testing, debugging, and training.

View on GitHub

Record Real Sessions

Capture DOM snapshots, network traffic, screenshots, and user interactions from real browser flows.

Generate Production-Like Sandboxes

Transform recordings into runnable frontend and backend replicas with realistic seeded data.

Ship MCP-Native Workflows

Give AI agents structured tools to inspect, test, and control generated environments with confidence.

STEP 01

Record app behavior

Start from a real URL and capture real user journeys with DOM, network, and screenshot context.

Live browser recording
Interaction timeline
Network request capture

Demo panel: live recording (REC) showing a captured event, clicked "Sign in" button
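To make the recording step concrete, here is a minimal sketch of what a captured session could look like as a data model. This is an illustration only, not Crayon's actual recording format: the `SessionRecording` and `SessionEvent` names, the event kinds, and the field layout are all hypothetical.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class SessionEvent:
    """One captured event in a browser flow (hypothetical shape)."""
    kind: str        # e.g. "click", "network", "dom_snapshot", "screenshot"
    timestamp: float
    detail: dict

@dataclass
class SessionRecording:
    """Accumulates timestamped events from a live browser session."""
    start_url: str
    events: list = field(default_factory=list)

    def capture(self, kind: str, **detail) -> None:
        self.events.append(SessionEvent(kind, time.time(), detail))

    def to_json(self) -> str:
        """Serialize the session so a generator can analyze it later."""
        return json.dumps({
            "start_url": self.start_url,
            "events": [asdict(e) for e in self.events],
        })

# Example: record a sign-in click and the network request it triggered.
rec = SessionRecording(start_url="https://app.example.com/login")
rec.capture("click", selector="button#sign-in", label="Sign in")
rec.capture("network", method="POST", url="/api/session", status=200)
```

In a real recorder the `capture` calls would be driven by browser instrumentation (DOM mutation observers, network interception) rather than written by hand.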

STEP 02

Generate a runnable replica

Crayon analyzes the recording and synthesizes a realistic frontend, backend, and data model.

Frontend + backend generation
Seeded data and schemas
Production-like test environment

Generation pipeline status: Frontend ready, Backend ready, Data seeding...
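One way to ground "realistic seeded data" is to derive it from payloads observed in the recording. The sketch below, which is an assumption about the approach rather than Crayon's implementation, infers a coarse schema from one captured JSON response and emits deterministic seed rows shaped like it; the helper names `infer_schema` and `seed_rows` are hypothetical.

```python
def infer_schema(sample: dict) -> dict:
    """Map each field of a captured response to a coarse type name."""
    return {key: type(value).__name__ for key, value in sample.items()}

def seed_rows(sample: dict, n: int) -> list:
    """Produce n deterministic seed records shaped like the capture."""
    rows = []
    for i in range(n):
        row = {}
        for key, value in sample.items():
            if isinstance(value, bool):      # check bool before int:
                row[key] = value             # bool is a subclass of int
            elif isinstance(value, int):
                row[key] = i + 1
            elif isinstance(value, str):
                row[key] = f"{key}_{i + 1}"
            else:
                row[key] = value
        rows.append(row)
    return rows

# Example: a response body captured during recording.
captured = {"id": 42, "email": "user@example.com", "active": True}
schema = infer_schema(captured)
rows = seed_rows(captured, 3)
```

Deterministic seeds matter here: the same recording always yields the same sandbox data, which is what makes agent runs repeatable.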

STEP 03

Run agents against sandboxes

Use MCP tools to let agents inspect, act, and evaluate behavior in deterministic environments.

MCP-native controls
Repeatable evaluation loops
Safer experimentation

Agent + MCP runtime example:
Agent asks: "Validate checkout flow"
MCP tool returns sandbox state + logs
Evaluation result: pass / fail
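The evaluation loop above can be sketched as a deterministic check against sandbox state. The stub below stands in for an MCP tool call (a real client would issue a `tools/call` request through an MCP session); the fixture contents and function names are hypothetical, chosen only to mirror the "validate checkout flow" example.

```python
def sandbox_state_tool(flow: str) -> dict:
    """Stand-in for an MCP tool that returns sandbox state + logs.
    Deterministic by construction, so evaluation runs are repeatable."""
    fixtures = {
        "checkout": {
            "state": {"cart_items": 2, "order_confirmed": True},
            "logs": ["POST /api/orders -> 201"],
        }
    }
    return fixtures.get(flow, {"state": {}, "logs": []})

def evaluate(flow: str, check) -> str:
    """One agent evaluation step: fetch sandbox state, apply a predicate."""
    result = sandbox_state_tool(flow)
    return "pass" if check(result["state"]) else "fail"

# Agent asks: "Validate checkout flow"
verdict = evaluate("checkout", lambda s: s.get("order_confirmed") is True)
```

Because the tool response is fixed for a given sandbox, the same prompt and check always produce the same verdict, which is the property that makes agent evaluation loops comparable across runs.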

Replace brittle mocks with environments grounded in real browser behavior. Iterate faster on agent prompts, tools, and evaluation loops with reproducible sandboxes that feel like production.

Explore source code, docs, and examples