Beacon
Validate. Track. Remember.
Durable research agent — Vercel Zero to Agent 2026

Validate a problem, track what changes, and keep the evidence.

Beacon is a durable web research agent for questions that need more than one pass. It fans out deep parallel searches, applies structured research frameworks, saves everything it learns, and turns reruns into delta reports instead of starting from zero.

Built for repeated research

Use Beacon when a topic needs a baseline now and a smarter rerun later.

Frameworks change the investigation

47 research lenses reshape both search planning and final synthesis.

Built for

Research that needs a second run.

Hackathon validation
Validate a problem before you build.

Check whether the pain is real, who already complains about it, what alternatives exist today, and whether the timing is right — before you spend the weekend building.

Try validation brief
Framework-led deep research
Same topic, different method, different answer.

Applying Jobs To Be Done produces different search plans and reports than RICE or Porter's Five Forces. The framework changes what counts as evidence.

Browse frameworks
Delta tracking
Run once for the baseline. Rerun for what changed.

Beacon keeps the prior evidence base, skips URLs it already knows, and leads the next report with new movement instead of repeating old summaries.

See rerun flow
Workflow

From question to reusable evidence base.

01
Define the question

Set the topic, objective, and depth. Beacon starts by loading prior memory for the same topic before planning new searches.

02
Choose the research method

Frameworks like JTBD, RICE, SWOT, or Porter change both the search plan and the final synthesis, so Beacon investigates with a clear lens instead of generic summarization.

Framework controls the research lens
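The idea in step 02 can be sketched as a framework-to-query mapping. This is a minimal illustration, not Beacon's actual planner: the query templates below are assumptions.

```typescript
// Sketch of how a chosen framework could reshape the search plan.
// Templates are illustrative assumptions, not Beacon's real planner output.
type Framework = "JTBD" | "RICE" | "SWOT";

const queryAngles: Record<Framework, string[]> = {
  JTBD: ["jobs users hire {topic} for", "{topic} workarounds discussion"],
  RICE: ["{topic} market reach estimates", "{topic} implementation effort"],
  SWOT: ["{topic} competitor weaknesses", "{topic} emerging threats"],
};

function planSearches(topic: string, framework: Framework): string[] {
  // Same topic, different framework: a different query set, and therefore
  // different evidence reaching the final synthesis.
  return queryAngles[framework].map((t) => t.replace("{topic}", topic));
}
```

Swapping the framework argument changes every downstream query, which is why the report reads differently even when the topic is identical.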
03
Search, validate, and save

Parallel search agents collect evidence, a validator checks contradictions, and the final cited report is written back into durable memory for the next run.

Deep mode fans out parallel searches across landscape, competitive, and community signals, then uses a validator pass to merge contradictions into one cited report.
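A minimal sketch of that fan-out/validate pattern, with a hypothetical stand-in for the real search agents:

```typescript
type Finding = { claim: string; source: string };

// Stand-in for one specialized search agent; the real agents hit live search APIs.
async function searchChannel(channel: string, topic: string): Promise<Finding[]> {
  return [{ claim: `${channel} signal for ${topic}`, source: `https://example.com/${channel}` }];
}

async function deepResearch(topic: string): Promise<Finding[]> {
  const channels = ["landscape", "competitive", "community"];
  // Fan out in parallel across the three signal channels.
  const batches = await Promise.all(channels.map((c) => searchChannel(c, topic)));
  // Validator pass (simplified): collapse duplicate claims so the report cites each once.
  const seen = new Set<string>();
  return batches.flat().filter((f) => {
    if (seen.has(f.claim)) return false;
    seen.add(f.claim);
    return true;
  });
}
```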
Memory architecture

Why Beacon never restarts from zero.

Every URL, fact, and summary becomes a node in a persistent graph. Each rerun strengthens the existing mesh — so run three is faster, deeper, and more targeted than run one. The topology below is live: signal packets show data flowing between layers in real time.

Context layer
What the model sees per request

Query plans, compressed SERP results, and memory context are assembled fresh each run — optimized so the model always works from the most relevant slice of knowledge, not a raw data dump.

Memory layer
What compounds across runs

Seen URLs, extracted facts, run summaries, and source attribution are stored per topic in Redis with 30-day TTL. Later runs skip known URLs and lead with what changed — no repeated baselines.
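The seen-URL skip can be sketched like this; a Map stands in for the per-topic Redis set, and the helper names are hypothetical:

```typescript
// In production this would be a Redis set per topic with a 30-day TTL, e.g.
// SADD seen:{topic} <url> followed by EXPIRE seen:{topic} 2592000 (30 days in seconds).
const seenByTopic = new Map<string, Set<string>>();

function markSeen(topic: string, url: string): void {
  const seen = seenByTopic.get(topic) ?? new Set<string>();
  seen.add(url);
  seenByTopic.set(topic, seen);
}

function filterNewUrls(topic: string, urls: string[]): string[] {
  // Later runs only fetch URLs the memory layer has not recorded yet.
  const seen = seenByTopic.get(topic) ?? new Set<string>();
  return urls.filter((u) => !seen.has(u));
}
```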

Harness layer
What keeps the system reliable

Workflow SDK step idempotency, structured logging, and Vercel durable execution mean partial failures retry cleanly, long runs survive restarts, and every step is observable and auditable.
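As a rough sketch of the idempotency idea (not the Workflow SDK's actual API), a retried run replays completed steps from a recorded result instead of re-executing them:

```typescript
// A Map stands in for durable storage; real step results survive restarts.
const completedSteps = new Map<string, unknown>();

async function runStep<T>(key: string, work: () => Promise<T>): Promise<T> {
  if (completedSteps.has(key)) {
    // On retry, the step is replayed from its recorded result: no duplicate work.
    return completedSteps.get(key) as T;
  }
  const result = await work();
  completedSteps.set(key, result);
  return result;
}
```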

Extensibility

Agents and Skills.

Multi-Agent Architecture

Beacon relies on a distributed multi-agent system to handle parallel research tasks. Specialized agents for searching, validating, and synthesizing operate concurrently, guided by the central orchestration engine.

Pluggable Skills

Skills extend Beacon's core capabilities, allowing it to adapt to specific frameworks, external tools, or custom validation rules. Skills are implemented as declarative Markdown modules.
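As a rough illustration only (the actual skill schema isn't shown here, so the frontmatter fields are assumptions), a declarative skill module might look like:

```markdown
---
name: contradiction-check
description: Flag conflicting claims across sources before synthesis.
---

When two sources disagree on the same fact, cite both, note the
conflict explicitly, and prefer the more recent primary source.
```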

For hackathon builders

Validate the problem before you build.

Use Beacon to pressure-test whether your hackathon idea solves a real problem before you spend the weekend building it. Check whether the pain is real, who already complains about it, what alternatives exist, and whether recent market movement suggests urgency.

Run Validation Brief →
Recommended validation frameworks
Jobs To Be Done · Problem / Solution Fit · Opportunity Solution Tree · SWOT Analysis · PESTLE · RICE Scoring · Market Map · Blue Ocean
Is this problem real and documented in the wild?
Who feels the pain most — and are they already vocal about it?
What solutions already exist, and what do they miss?
Is the timing right based on recent market movement?
Access surfaces

Use Beacon from anywhere.

HTTP API

Trigger research runs, poll status, and read reports from any script, workflow, or agent. Same depth, framework, and memory engine as the dashboard.

curl -X POST /api/briefs \
  -H "Content-Type: application/json" \
  -d '{
    "topic": "AI coding agents 2026",
    "objective": "Compare platforms and pricing",
    "depth": "deep",
    "timeframe": "30d",
    "reportStyle": "executive"
  }'
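The same request from TypeScript. The payload fields mirror the curl example above; the helper name and return shape are assumptions for illustration:

```typescript
type BriefRequest = {
  topic: string;
  objective: string;
  depth: string;
  timeframe: string;
  reportStyle: string;
};

// Builds fetch options for POST /api/briefs, matching the curl example.
function buildBriefRequest(brief: BriefRequest): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(brief),
  };
}

// Usage, assuming Beacon is reachable at baseUrl:
// const res = await fetch(`${baseUrl}/api/briefs`, buildBriefRequest({
//   topic: "AI coding agents 2026", objective: "Compare platforms and pricing",
//   depth: "deep", timeframe: "30d", reportStyle: "executive",
// }));
```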