Beacon now exposes a public documentation surface instead of a single internal-looking notes page. Use this hub to get to quickstart, API, MCP, auth, security, deployment, architecture, roadmap, and legal guidance.
Beacon is a durable web research agent with persistent memory. It plans topic-specific searches, runs web retrieval, synthesizes cited reports, stores facts and URLs per topic, and turns reruns into deltas instead of repeating the same research from scratch.
Expands a topic into targeted query plans using the scout model.
Stores seen URLs, facts, summaries, and run history per account or trial session.
Runs the workflow durably with idempotent steps, durable sleep, and fallbacks.
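The plan → retrieve → synthesize → store loop above can be sketched as a durable workflow with idempotent steps. Everything below is illustrative: the step names, cache shape, and helpers are assumptions for the sketch, not Beacon's actual workflow API.

```typescript
// Illustrative sketch of the durable-run shape described above:
// plan → retrieve → synthesize → store. All names here are hypothetical.

type StepCache = Map<string, unknown>;

// Idempotent step runner: a step that already completed is replayed from the
// cache instead of re-executed, so a resumed run never repeats side effects.
async function step<T>(cache: StepCache, name: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(name)) return cache.get(name) as T;
  const result = await fn();
  cache.set(name, result);
  return result;
}

async function researchRun(topic: string, cache: StepCache = new Map()) {
  // Stand-in for the scout model's topic expansion.
  const queries = await step(cache, "plan", async () => [
    `${topic} overview`,
    `${topic} recent developments`,
    `${topic} criticisms`,
  ]);
  // Stand-in for web retrieval; real runs would dedupe against seen URLs.
  const sources = await step(cache, "retrieve", async () =>
    queries.map((q, i) => ({ query: q, url: `https://example.com/result-${i}` }))
  );
  const report = await step(cache, "synthesize", async () =>
    `Cited report on "${topic}" drawing on ${sources.length} sources.`
  );
  // Stand-in for persisting facts, URLs, and run history per account.
  await step(cache, "store", async () => ({ storedUrls: sources.map((s) => s.url) }));
  return report;
}
```

Rerunning with the same cache replays completed steps, which is what turns reruns into deltas instead of repeated work.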
Run Beacon locally, open the trial flow, and understand the main app surfaces in a few minutes.
Learn what each research framework does, when to use it, and how it changes Beacon's search and synthesis behavior.
Use the authenticated HTTP surface to create research runs and read run state from code.
Connect Beacon to external AI clients over the MCP transport route.
Understand Clerk auth, public routes, and how Beacon scopes account-private data.
See the actual run caps applied to research, MCP access, and login attempts.
Review credential handling, account privacy, and current operational boundaries.
Configure env vars and workflow runtime expectations for local or hosted deployment.
Read how context, memory, harness, models, and workflow runtime fit together.
See what is shipped today versus what is still planned or intentionally missing.
Core operating rules and project guardrails for Beacon agents.
Beacon skill spec used for skills-based orchestration and behavior.
Project-level guidance and implementation context documentation.
Full context/memory/harness architecture mapping.
Per-request context strategy and optimization notes.
Cross-session memory design and persistence patterns.
Beacon's main private write surface is POST /api/briefs. The sample below matches the current backend behavior and the supported framework-driven flow.
curl -X POST http://localhost:3000/api/briefs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <session-or-app-auth>" \
-d '{
"topic": "AI coding agent benchmarks in 2026",
"objective": "Compare major agent platforms and product positioning",
"focus": "pricing, enterprise trust signals, SDK maturity",
"source": "dashboard",
"depth": "deep",
"timeframe": "30d",
"reportStyle": "memo",
"frameworkId": "market-map"
}'

The current research workflow is optimized for concise operator briefs, not maximum-length analyst reports. In workflows/research.ts, the synthesis prompt currently says "Max 600 words" and the generation call uses maxTokens: 1500. That combination keeps output fast and compact.
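For code callers, the curl request above maps to a small TypeScript helper. The endpoint and field names match the sample request; the bearer token is a placeholder, since real authentication goes through your Clerk session or app credentials.

```typescript
// TypeScript equivalent of the curl sample above. Field names match the sample
// request body; the token is a placeholder for whatever auth your client holds.

interface BriefRequest {
  topic: string;
  objective: string;
  focus: string;
  source: string;
  depth: string;
  timeframe: string;
  reportStyle: string;
  frameworkId: string;
}

function buildBriefCall(body: BriefRequest, token: string) {
  return {
    url: "http://localhost:3000/api/briefs",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
    },
  };
}

// Usage: const { url, init } = buildBriefCall(payload, token);
//        const res = await fetch(url, init);
```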
Current behavior: a short report instruction, an executive-summary-first format, and a moderate output token budget.
Proposed change: add a deep mode, remove the 600-word rule, raise max tokens, and require more evidence-driven sections.
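One way this could look, as a sketch: a depth parameter that switches the prompt instruction and the token budget. The "executive" values match the current code described above; the "deep" values are assumptions, not shipped behavior.

```typescript
// Sketch of a depth toggle for the synthesis step. "executive" matches the
// current values in workflows/research.ts; "deep" values are proposed, not shipped.

type ReportDepth = "executive" | "deep";

function synthesisConfig(depth: ReportDepth) {
  if (depth === "deep") {
    return {
      maxTokens: 6000, // assumed budget for analyst-length reports
      instruction:
        "Write an evidence-driven report with sections for market landscape, " +
        "evidence table, contradictions, implications, and open questions.",
    };
  }
  // Current shipped behavior: concise operator brief.
  return {
    maxTokens: 1500,
    instruction: "Write an executive-summary-first brief. Max 600 words.",
  };
}
```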
Recommended next product change:
- keep "executive" as the concise default
- add a "deep" or "analyst" mode
- require sections for:
  - market landscape
  - evidence table
  - contradictions
  - implications
  - open questions
- raise synthesis token budget for deep mode

Framework choice is behavioral, not cosmetic. Each framework contributes its own planning and synthesis hints so the same topic can be investigated through different research lenses.
Read a longer explanation for every framework in plain language and in technical terms, including when to use it and what Beacon changes under the hood.
Focus on the progress users are trying to make, not the features they request.
Validate that a real, painful problem exists before investing in solutions.
Map the outcome → opportunity → solution hierarchy to avoid premature solution framing.
Drill past symptoms to find the systemic root cause of a problem.
Reframe problems as design opportunities using open-ended HMW questions.
Diverge/converge twice: first define the right problem, then design the right solution.
Systematically map the full landscape of a problem before committing to any solution.
Understand the user's world through four lenses: what they say, think, do, and feel.
Map the end-to-end experience across all touchpoints to expose friction and delight.
Build evidence-based user archetypes that represent real patterns in the target population.
Classify features by satisfaction impact: must-haves, performance drivers, and delighters.
Observe users in their natural environment to uncover unarticulated needs and workarounds.
Map user activities and tasks to build a shared understanding of what to build first.
Cluster qualitative data to reveal emergent themes and patterns across research findings.
Score initiatives by Reach, Impact, Confidence, and Effort to prioritize objectively.
Quick prioritization using Impact, Confidence, and Ease — ideal for early-stage decisions.
Categorize requirements as Must-have, Should-have, Could-have, or Won't-have this release.
Plot initiatives on a 2×2 to identify quick wins and deprioritize hard low-value work.
Find underserved outcomes where importance is high but current satisfaction is low.
Score options against multiple weighted criteria to reflect actual business priorities.
Simple 2×2 that separates high-value simple wins from complex low-value investments.
Map reinforcing and balancing feedback loops to understand systemic dynamics.
Look beyond events to patterns, structures, and mental models driving outcomes.
Scan macro-environment forces: Political, Economic, Social, Technological, Legal, Environmental.
Identify and prioritize stakeholders by influence and interest to shape engagement strategy.
Identify driving forces for change and restraining forces against it to design interventions.
Recognize common systemic behavior patterns to predict dynamics and avoid traps.
Create uncontested market space by eliminating, reducing, raising, and creating value factors.
Analyze competitive intensity through five structural forces that shape industry profitability.
Map primary and support activities to identify where value is created and where to optimize.
Identify sustainable competitive advantages: network effects, switching costs, cost advantages, intangibles.
Design multi-sided platform dynamics: producers, consumers, core interaction, and network effects.
Break down assumptions to fundamental truths, then reason up to novel solutions.
Create and dominate a new market category rather than competing in an existing one.
Test the riskiest assumption at lowest cost before building anything real.
Measure demand by advertising a feature that does not exist yet and tracking intent signals.
Simulate automated behavior with manual human effort to validate without building automation.
Manually deliver the value proposition to a small group before building any product.
Design controlled experiments to test hypotheses with statistical rigor.
Identify the single metric that best captures the core value delivered to users.
Break the research question into explicit sub-questions and answer each before synthesizing.
Analyze the question through diverse stakeholder lenses to surface blind spots.
Stress-test assumptions and plans by systematically arguing the opposing case.
Map multiple plausible futures to stress-test strategy robustness across different outcomes.
Find structural analogies from other domains and extract transferable insights.
Use systematic questioning to expose assumptions, contradictions, and deeper truths.
Imagine the project has already failed and work backwards to identify risks and failure modes.
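The scoring frameworks in the list above (RICE, ICE) reduce to simple arithmetic. These are the commonly published formulas, not Beacon-specific logic; how Beacon's framework hints actually weight the inputs may differ.

```typescript
// Standard scoring arithmetic behind the RICE and ICE frameworks listed above.
// These are the commonly published formulas, not Beacon's implementation.

interface RiceInput {
  reach: number;      // users affected per period
  impact: number;     // e.g. 0.25 (minimal) up to 3 (massive)
  confidence: number; // 0..1
  effort: number;     // person-months
}

function riceScore({ reach, impact, confidence, effort }: RiceInput): number {
  // Reach × Impact × Confidence, discounted by Effort.
  return (reach * impact * confidence) / effort;
}

// Multiplicative ICE; some teams average the three scores instead.
function iceScore(impact: number, confidence: number, ease: number): number {
  return impact * confidence * ease;
}
```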