Validate a problem, track what changes, and keep the evidence.
Beacon is a durable web research agent for questions that need more than one pass. It fans out deep parallel searches, applies structured research frameworks, saves everything it learned, and turns reruns into delta reports instead of starting from zero.
Use Beacon when a topic needs a baseline now and a smarter rerun later.
47 research lenses reshape both search planning and final synthesis.
Research that needs a second run.
Check whether the pain is real, who already complains about it, what alternatives exist today, and whether the timing is right — before you spend the weekend building.
Applying Jobs To Be Done produces different search plans and reports than RICE or Porter's Five Forces. The framework changes what counts as evidence.
Beacon keeps the prior evidence base, skips URLs it already knows, and leads the next report with new movement instead of repeating old summaries.
From question to reusable evidence base.
Set the topic, objective, and depth. Beacon starts by loading prior memory for the same topic before planning new searches.
Frameworks like JTBD, RICE, SWOT, or Porter change both the search plan and the final synthesis, so Beacon investigates with a clear lens instead of generic summarization.
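The lens-driven planning described above can be sketched as a mapping from framework to query templates. This is an illustrative stand-in, not Beacon's actual planner; the query phrasings are assumptions:

```typescript
// Hypothetical sketch: the chosen framework changes which searches get
// planned. Framework names come from the text; templates are invented.
type Framework = "JTBD" | "RICE" | "SWOT" | "Porter";

const lenses: Record<Framework, (topic: string) => string[]> = {
  JTBD: (t) => [`${t} user struggles`, `${t} switching triggers`, `${t} hiring criteria`],
  RICE: (t) => [`${t} market size`, `${t} adoption metrics`, `${t} implementation effort`],
  SWOT: (t) => [`${t} competitive strengths`, `${t} market threats`],
  Porter: (t) => [`${t} supplier power`, `${t} entry barriers`, `${t} substitutes`],
};

function planSearches(topic: string, framework: Framework): string[] {
  return lenses[framework](topic);
}
```

The same topic yields different evidence under different lenses, which is why the framework choice shapes the final report.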
Parallel search agents collect evidence, a validator checks contradictions, and the final cited report is written back into durable memory for the next run.
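The fan-out, validate, and write-back steps can be sketched as one pipeline function. The agent and validation logic here are simplified stand-ins (exact-URL conflict checks in place of a real contradiction validator):

```typescript
// Minimal sketch of the run pipeline: parallel search agents, a
// validator pass, then a cited report written back into memory.
interface Finding { url: string; fact: string }

async function runBrief(
  queries: string[],
  search: (q: string) => Promise<Finding[]>, // one search agent per query (stub)
  memory: Finding[],                          // durable evidence base from prior runs
): Promise<{ report: string; memory: Finding[] }> {
  // Fan out one search agent per planned query, in parallel.
  const results = (await Promise.all(queries.map(search))).flat();

  // Validator pass: drop findings that contradict an already-stored fact
  // (simplified here to same-URL, different-fact conflicts).
  const known = new Map(memory.map((f) => [f.url, f.fact]));
  const validated = results.filter(
    (f) => !known.has(f.url) || known.get(f.url) === f.fact,
  );

  // Synthesize a cited report and write the evidence back into memory.
  const report = validated.map((f) => `${f.fact} [${f.url}]`).join("\n");
  return { report, memory: [...memory, ...validated] };
}
```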
Why Beacon never restarts from zero.
Every URL, fact, and summary becomes a node in a persistent graph. Each rerun strengthens the existing mesh — so run three is faster, deeper, and more targeted than run one.
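One way to picture a node in that graph is the shape below. The field names are assumptions for illustration, not Beacon's actual schema:

```typescript
// Illustrative node shape for the per-topic evidence graph.
type NodeKind = "url" | "fact" | "summary";

interface EvidenceNode {
  id: string;
  kind: NodeKind;
  topic: string;
  content: string;
  firstSeenRun: number;       // run that created the node
  reinforcedByRuns: number[]; // later runs that touched it again
}

// Each rerun strengthens the mesh by reinforcing existing nodes
// rather than recreating them.
function reinforce(node: EvidenceNode, run: number): EvidenceNode {
  return { ...node, reinforcedByRuns: [...node.reinforcedByRuns, run] };
}
```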
Query plans, compressed SERP results, and memory context are assembled fresh each run — optimized so the model always works from the most relevant slice of knowledge, not a raw data dump.
Seen URLs, extracted facts, run summaries, and source attribution are stored per topic in Redis with 30-day TTL. Later runs skip known URLs and lead with what changed — no repeated baselines.
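The URL dedupe can be sketched as follows, with an in-memory map standing in for Redis so the logic is self-contained; the key layout and TTL handling are assumptions, though the 30-day window comes from the text:

```typescript
// Sketch of per-topic URL dedupe with a 30-day expiry, Redis replaced
// by a Map of url -> expiry timestamp for illustration.
const TTL_MS = 30 * 24 * 60 * 60 * 1000; // 30 days

type SeenStore = Map<string, number>;

function markSeen(store: SeenStore, url: string, now = Date.now()): void {
  store.set(url, now + TTL_MS);
}

// Later runs only fetch URLs the topic has not already ingested
// (or whose entries have expired).
function filterNewUrls(store: SeenStore, urls: string[], now = Date.now()): string[] {
  return urls.filter((u) => {
    const expiry = store.get(u);
    return expiry === undefined || expiry <= now;
  });
}
```

In Redis itself the same effect falls out of key expiry: write seen URLs under a per-topic key and let the TTL age them out.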
Workflow SDK step idempotency, structured logging, and Vercel durable execution mean partial failures retry cleanly, long runs survive restarts, and every step is observable and auditable.
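The idempotency idea can be shown with a small step wrapper: when a failed run is retried, steps that already completed replay their recorded result instead of re-running their side effects. This mirrors the durable-execution pattern, not Vercel's actual SDK API:

```typescript
// Hedged sketch of step idempotency. A durable log records each
// completed step's result; retries skip the work and replay the value.
type StepLog = Map<string, unknown>;

async function step<T>(log: StepLog, name: string, fn: () => Promise<T>): Promise<T> {
  if (log.has(name)) return log.get(name) as T; // replay: no duplicate side effects
  const result = await fn();
  log.set(name, result); // persist before continuing, so a crash after this is safe
  return result;
}
```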
Agents and Skills.
Beacon runs as a distributed multi-agent system: specialized agents for searching, validating, and synthesizing operate concurrently, coordinated by a central orchestration engine.
Skills extend Beacon's core capabilities, allowing it to adapt to specific frameworks, external tools, or custom validation rules. Skills are implemented as declarative Markdown modules.
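A skill module might look like the following. The file layout, frontmatter fields, and section names are illustrative assumptions, not Beacon's documented schema:

```markdown
---
name: jtbd-validation
description: Apply Jobs To Be Done framing to problem-validation briefs
---

## When to use
Briefs whose objective mentions validating a problem or user pain.

## Instructions
- Plan searches around struggling moments, switching triggers, and hired alternatives.
- Treat first-hand complaints (forums, reviews, issue trackers) as primary evidence.
```

Because the module is declarative Markdown, adding a new lens or validation rule is an edit, not a code change.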
Validate the problem before you build.
Use Beacon to pressure-test whether your hackathon idea solves a real problem before you spend the weekend building it. Check whether the pain is real, who already complains about it, what alternatives exist, and whether recent market movement suggests urgency.
Run Validation Brief →
Use Beacon from anywhere.
Run the public sample flow and test frameworks before account setup.
Manage briefs, memory, API keys, and logs from the main product surface.
Inspect provenance and see how sources, reports, and reruns connect.
Trigger Beacon from scripts, workflows, Claude Desktop, or Cursor.
Trigger research runs, poll status, and read reports from any script, workflow, or agent. Same depth, framework, and memory engine as the dashboard.
curl -X POST /api/briefs \
  -H "Content-Type: application/json" \
  -d '{
    "topic": "AI coding agents 2026",
    "objective": "Compare platforms and pricing",
    "depth": "deep",
    "timeframe": "30d",
    "reportStyle": "executive"
  }'
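A trigger-then-poll client can be sketched as below. The status endpoint path and response fields (`status`, `report`) are assumptions extrapolated from the curl example, not a documented API; the fetch function is injected so the loop can be exercised without a server:

```typescript
// Sketch of polling a brief until its report is ready.
type FetchLike = (url: string) => Promise<{ json(): Promise<any> }>;

async function waitForReport(
  fetchLike: FetchLike,
  briefId: string,
  pollMs = 5000,
  maxPolls = 60,
): Promise<string> {
  for (let i = 0; i < maxPolls; i++) {
    const res = await fetchLike(`/api/briefs/${briefId}`); // assumed status endpoint
    const body = await res.json();
    if (body.status === "complete") return body.report;
    if (body.status === "failed") throw new Error("brief failed");
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  throw new Error("timed out waiting for report");
}
```

In a real script you would pass the global `fetch` and the brief id returned by the POST above.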