Framework Guide

Developer-friendly framework docs with quick filtering and deep links.

Find frameworks quickly, jump to the exact section, and expand implementation details only when you need them.

Quick Start

How to use frameworks without scrolling forever

1. Filter first

Use category chips + keyword search to narrow down quickly.

2. Pick by decision

Choose based on the decision you need to make, not the framework name.

3. Expand only details

Open the implementation panel only for the frameworks you plan to run.

Suggested developer workflow:
1. Set category (or keep All)
2. Type keyword (e.g. "diamond", "risk", "persona", "market")
3. Open matched framework card
4. Use queryHint + synthesisHint in your run setup
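The workflow above can be sketched as a small config builder. Only the queryHint and synthesisHint field names come from this guide; the run-config shape, function name, and example brief are hypothetical:

```python
# Hypothetical sketch: combining a framework card's hints with a research brief.
# Only queryHint and synthesisHint come from the framework cards; the rest of
# the structure is illustrative.

jobs_to_be_done = {
    "id": "jobs-to-be-done",
    "queryHint": "Search for the outcomes and progress users are trying to achieve.",
    "synthesisHint": "Structure findings around functional, emotional, and social jobs.",
}

def build_run_config(brief: str, framework: dict) -> dict:
    """Map the three setup-flow steps onto a run configuration."""
    return {
        "objective": brief,                              # 1. Brief + objective
        "queryPlanBias": framework["queryHint"],         # 2. Query plan bias
        "reportStructure": framework["synthesisHint"],   # 3. Synthesis structure
    }

config = build_run_config("Validate demand for an offline mode", jobs_to_be_done)
```

Keeping the hints as data, rather than pasting them into prose, makes it easy to swap frameworks between runs.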
Finder

Find the right framework fast

This guide currently lists 47 frameworks.
Category

Discovery & Framing

Category setup
Plain-language read

Use these when the team is still trying to define the actual problem before building.

Technical effect

Biases planning toward root cause, unmet needs, constraints, and problem clarity.

Best use case

Best for early-stage validation and scope framing.

Jobs to Be Done
jobs-to-be-done

Focus on the progress users are trying to make, not the features they request.

Plain-English explanation

In plain English: Focus on the progress users are trying to make, not the features they request.

When to use it

Use Jobs to Be Done when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for what outcomes and progress users are trying to achieve, what triggers them to seek a solution, and what alternatives they currently use including non-consumption. Prioritize forums, reviews, and interview write-ups over marketing copy to surface motivational context.

Report synthesis hint

Structure findings around three job layers: (1) Functional Job — the core task; (2) Emotional Job — how users want to feel; (3) Social Job — how users want to be perceived. Use "When I [situation], I want to [motivation], so I can [outcome]" framing. Identify key hiring and firing triggers.

Questions this framework helps answer
What decision becomes easier after applying Jobs to Be Done?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Problem/Solution Fit
problem-solution-fit

Validate that a real, painful problem exists before investing in solutions.

Plain-English explanation

In plain English: Validate that a real, painful problem exists before investing in solutions.

When to use it

Use Problem/Solution Fit when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence the problem is real: failure rates, workarounds, user complaints, and forum discussions. Prioritize primary source evidence over vendor claims. Find pain metrics: frequency of occurrence, cost of the problem, and alternatives users have tried and abandoned.

Report synthesis hint

Lead with problem validation evidence scored on three axes: (1) Frequency — how often it occurs; (2) Intensity — how painful it is; (3) Willingness to pay for a fix. Map the solution landscape: existing solutions and why they fail to fully resolve the job. End with a gap/opportunity statement.

Questions this framework helps answer
What decision becomes easier after applying Problem/Solution Fit?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Opportunity Solution Tree
opportunity-solution-tree

Map the outcome → opportunity → solution hierarchy to avoid premature solution framing.

Plain-English explanation

In plain English: Map the outcome → opportunity → solution hierarchy to avoid premature solution framing.

When to use it

Use Opportunity Solution Tree when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the desired outcome metric, the pain points preventing that outcome, and existing solution approaches. Find what has been tried, what worked partially, and what root causes remain unaddressed.

Report synthesis hint

Structure as a tree: (1) Desired Outcome at the top; (2) Opportunity nodes — unmet needs, pain points, constraints; (3) Solution nodes — existing approaches mapped to each opportunity. Identify which opportunities are most underserved by current solutions and deserve investment.

Questions this framework helps answer
What decision becomes easier after applying Opportunity Solution Tree?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
5 Whys Root Cause
five-whys

Drill past symptoms to find the systemic root cause of a problem.

Plain-English explanation

In plain English: Drill past symptoms to find the systemic root cause of a problem.

When to use it

Use 5 Whys Root Cause when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for documented root cause analyses, post-mortems, and failure case studies. Find patterns across multiple instances of the problem. Search for systemic causes: process failures, incentive misalignments, resource gaps, knowledge gaps.

Report synthesis hint

Present findings as a 5-level causal chain from surface symptom to systemic root cause. For each level, cite evidence. Conclude with: (1) Root cause statement; (2) Why fixing symptoms fails long-term; (3) Highest-leverage intervention point at the root level.

Questions this framework helps answer
What decision becomes easier after applying 5 Whys Root Cause?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
How Might We
how-might-we

Reframe problems as design opportunities using open-ended HMW questions.

Plain-English explanation

In plain English: Reframe problems as design opportunities using open-ended HMW questions.

When to use it

Use How Might We when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for pain points, frustrations, constraints, and workarounds. Find examples of innovation in adjacent domains. Search for latent needs — things users do that were not designed for — and edge case behaviors that reveal unstated requirements.

Report synthesis hint

Reframe each major finding as a "How Might We…" design question. Group by reframing type: (1) Amplify the positive; (2) Eliminate the negative; (3) Challenge assumptions; (4) Draw on analogies from other domains. Prioritize the 5-7 most promising HMW questions for further exploration.

Questions this framework helps answer
What decision becomes easier after applying How Might We?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Double Diamond
double-diamond

Diverge/converge twice: first define the right problem, then design the right solution.

Plain-English explanation

In plain English: Diverge/converge twice: first define the right problem, then design the right solution.

When to use it

Use Double Diamond when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

First diamond — search broadly for related problems, adjacent needs, and contradictory evidence from diverse stakeholder perspectives. Second diamond — search for existing solutions, their adoption rates, and unmet edge cases that current solutions miss.

Report synthesis hint

Structure as two diamonds: Diamond 1 — (Discover) breadth of problems found; (Define) sharpest problem statement. Diamond 2 — (Develop) solution directions identified; (Deliver) recommended approach with evidence. Be explicit about what was deprioritized and why.

Questions this framework helps answer
What decision becomes easier after applying Double Diamond?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Problem Space Analysis
problem-space-analysis

Systematically map the full landscape of a problem before committing to any solution.

Plain-English explanation

In plain English: Systematically map the full landscape of a problem before committing to any solution.

When to use it

Use Problem Space Analysis when your research decision depends on discovery & framing behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the full problem landscape: sub-problems, contributing factors, affected populations, and edge cases. Find quantitative severity data. Search for historical context: why has this not been solved yet, and what has been tried.

Report synthesis hint

Present a structured landscape: (1) Problem definition with scope; (2) Sub-problems and relationships; (3) Affected populations by severity; (4) Root contributing factors; (5) Historical solution attempts and why they fell short. End with the highest-leverage intervention point.

Questions this framework helps answer
What decision becomes easier after applying Problem Space Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

User Research

Category setup
Plain-language read

Use these when you need to understand user behavior, motivation, and friction.

Technical effect

Pushes evidence gathering toward user voice, journeys, and behavioral patterns.

Best use case

Best for UX, onboarding, and product discovery.

Empathy Map
empathy-map

Understand the user's world through four lenses: what they say, think, do, and feel.

Plain-English explanation

In plain English: Understand the user's world through four lenses: what they say, think, do, and feel.

When to use it

Use Empathy Map when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for direct user quotes from forums, reviews, social media, and support logs. Find behavioral data showing what users actually do vs. what they say. Search for emotional cues: frustrations, aspirations, fears, and delights. Prioritize unfiltered primary sources over summaries.

Report synthesis hint

Organize findings into four quadrants: (1) SAYS — direct quotes and stated needs; (2) THINKS — inferred beliefs and mental models; (3) DOES — observed behaviors and actions; (4) FEELS — emotions and motivations. Highlight gaps between stated and actual behavior. Conclude with Pains and Gains.

Questions this framework helps answer
What decision becomes easier after applying Empathy Map?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
User Journey Mapping
user-journey-map

Map the end-to-end experience across all touchpoints to expose friction and delight.

Plain-English explanation

In plain English: Map the end-to-end experience across all touchpoints to expose friction and delight.

When to use it

Use User Journey Mapping when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for step-by-step user workflows, onboarding flows, and task completion paths. Find reviews mentioning specific journey stages: discovery, evaluation, onboarding, use, and support. Search for where drop-offs and complaints cluster along the journey.

Report synthesis hint

Structure as a journey with phases: Aware → Consider → Purchase → Use → Advocate. For each phase: (1) User actions; (2) Emotions and sentiment; (3) Pain points and friction; (4) Touchpoints and channels. Mark moments of highest and lowest satisfaction. Conclude with top 3 optimization opportunities.

Questions this framework helps answer
What decision becomes easier after applying User Journey Mapping?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Persona Development
persona-development

Build evidence-based user archetypes that represent real patterns in the target population.

Plain-English explanation

In plain English: Build evidence-based user archetypes that represent real patterns in the target population.

When to use it

Use Persona Development when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for demographic and psychographic patterns. Find segmentation studies, user interviews, and survey data. Search for distinct usage patterns: power vs. casual, technical vs. non-technical. Look for behavioral clustering evidence that supports distinct archetypes.

Report synthesis hint

Define 2-3 primary personas. For each: (1) Name + archetype label; (2) Goals and motivations; (3) Pain points; (4) Behaviors and habits; (5) Context — tools used, media consumed; (6) Representative quote. Identify the primary persona and the rationale for prioritizing them.

Questions this framework helps answer
What decision becomes easier after applying Persona Development?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Kano Model
kano-model

Classify features by satisfaction impact: must-haves, performance drivers, and delighters.

Plain-English explanation

In plain English: Classify features by satisfaction impact: must-haves, performance drivers, and delighters.

When to use it

Use Kano Model when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for feature requests, complaints about missing functionality, and delight stories. Find reviews that mention specific features positively or negatively. Look for what users take for granted vs. what surprises them. Find competitive feature differentiators.

Report synthesis hint

Classify findings into three Kano categories: (1) Must-Be (Basic) — absence causes dissatisfaction, presence is expected; (2) Performance (Linear) — more is better, directly correlates to satisfaction; (3) Attractive (Delighter) — unexpected features that create delight. Provide a prioritization recommendation based on Kano positioning.

Questions this framework helps answer
What decision becomes easier after applying Kano Model?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Contextual Inquiry
contextual-inquiry

Observe users in their natural environment to uncover unarticulated needs and workarounds.

Plain-English explanation

In plain English: Observe users in their natural environment to uncover unarticulated needs and workarounds.

When to use it

Use Contextual Inquiry when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for documented user behavior studies, usability test reports, and field research write-ups. Find examples of workarounds users have created. Search for edge cases and non-obvious use patterns, the environment of use, and tools used in parallel.

Report synthesis hint

Present contextual observations: (1) Work environment and context factors; (2) Observed tasks vs. official workflow; (3) Workarounds and improvised solutions; (4) Breakdowns and interruptions; (5) Artifacts and tools in use. Highlight the gap between designed behavior and actual practice.

Questions this framework helps answer
What decision becomes easier after applying Contextual Inquiry?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
User Story Mapping
user-story-mapping

Map user activities and tasks to build a shared understanding of what to build first.

Plain-English explanation

In plain English: Map user activities and tasks to build a shared understanding of what to build first.

When to use it

Use User Story Mapping when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the key activities users perform, the tasks within each activity, and how they sequence them. Find minimum viable flow evidence: what users need to accomplish their primary goal end-to-end. Search for persona-specific paths through the same high-level activities.

Report synthesis hint

Structure as a story map: Activities (horizontal backbone) → Tasks (cards per activity) → Detail (depth per task). Identify the walking skeleton — the minimum path that delivers end-to-end value. Group tasks by priority: MVP / Next / Later. Note where different personas diverge in their paths.

Questions this framework helps answer
What decision becomes easier after applying User Story Mapping?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Affinity Mapping
affinity-mapping

Cluster qualitative data to reveal emergent themes and patterns across research findings.

Plain-English explanation

In plain English: Cluster qualitative data to reveal emergent themes and patterns across research findings.

When to use it

Use Affinity Mapping when your research decision depends on user research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search broadly and collect qualitative evidence: user quotes, forum threads, reviews, case studies, and expert opinions. Prioritize diverse sources to surface varied perspectives. Collect atomic data points that can be independently evaluated.

Report synthesis hint

Group findings into clusters of related ideas without imposing a predetermined structure. For each cluster: (1) Name the theme in the user's voice; (2) List 3-5 supporting evidence points; (3) Note frequency and strength. Show clustering hierarchy: observations → themes → meta-themes. Identify the 3 most significant emergent patterns.

Questions this framework helps answer
What decision becomes easier after applying Affinity Mapping?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

Prioritization

Category setup
Plain-language read

Use these when you have too many options and need a clear rank order.

Technical effect

Transforms synthesis into scoring, tradeoffs, and decision ordering.

Best use case

Best for roadmap and scope decisions.

RICE Scoring
rice-scoring

Score initiatives by Reach, Impact, Confidence, and Effort to prioritize objectively.

Plain-English explanation

In plain English: Score initiatives by Reach, Impact, Confidence, and Effort to prioritize objectively.

When to use it

Use RICE Scoring when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for quantitative evidence informing RICE signals: user population size (Reach), problem severity and frequency (Impact), evidence quality (Confidence), and implementation complexity benchmarks (Effort). Find comparable case studies to calibrate estimates.

Report synthesis hint

Structure findings to directly inform RICE scoring: (1) Reach — estimated users affected with source; (2) Impact — problem severity (low/medium/high/massive) with evidence; (3) Confidence — evidence quality score with caveats; (4) Effort — comparable implementation data. Produce a ranked shortlist with RICE scores and key assumptions.
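The four signals above feed the standard RICE formula, (Reach × Impact × Confidence) / Effort. A minimal sketch, where the candidate names and numbers are made up for illustration:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE formula: (Reach * Impact * Confidence) / Effort.

    reach: users affected per period; impact: commonly a 0.25-3 scale;
    confidence: 0-1; effort: person-months (must be positive).
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Illustrative candidates only; real inputs come from the evidence above.
candidates = {
    "Offline mode": rice_score(reach=2000, impact=2.0, confidence=0.8, effort=4),
    "Dark theme": rice_score(reach=5000, impact=0.5, confidence=1.0, effort=1),
}
ranked = sorted(candidates, key=candidates.get, reverse=True)
```

Note how a low-effort, high-reach item can outrank a "bigger" one; record the assumptions behind each input so the ranking can be challenged.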

Questions this framework helps answer
What decision becomes easier after applying RICE Scoring?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
ICE Scoring
ice-scoring

Quick prioritization using Impact, Confidence, and Ease — ideal for early-stage decisions.

Plain-English explanation

In plain English: Quick prioritization using Impact, Confidence, and Ease — ideal for early-stage decisions.

When to use it

Use ICE Scoring when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence of potential impact (market size, pain intensity), confidence signals (validated assumptions, comparable case studies), and ease of implementation (technical complexity, resource requirements). Focus on benchmarks from similar products.

Report synthesis hint

For each candidate provide: (1) Impact evidence (1-10) with rationale; (2) Confidence evidence (1-10) — what is and is not validated; (3) Ease estimate (1-10) — complexity and dependencies. Produce a ranked list by ICE score (I×C×E). Flag top assumptions that would reorder the list if wrong.
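The I×C×E product above is trivial to compute; a minimal sketch, with the range check as an assumption of the 1-10 rating convention:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE score = Impact x Confidence x Ease, each rated 1-10."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("each ICE component must be rated 1-10")
    return impact * confidence * ease
```

Because the components multiply, one low rating drags the whole score down sharply, which is why flagging the assumptions behind each rating matters.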

Questions this framework helps answer
What decision becomes easier after applying ICE Scoring?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
MoSCoW Method
moscow-method

Categorize requirements as Must-have, Should-have, Could-have, or Won't-have this release.

Plain-English explanation

In plain English: Categorize requirements as Must-have, Should-have, Could-have, or Won't-have this release.

When to use it

Use MoSCoW Method when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for requirements that are non-negotiable, important but not critical, nice-to-have enhancements, and explicitly out-of-scope items. Find user expectation evidence: what users consider table stakes vs. bonus features. Look for competitive parity requirements.

Report synthesis hint

Present findings in four MoSCoW categories: (1) Must-Have — evidence that absence causes failure; (2) Should-Have — evidence of value but degraded without; (3) Could-Have — delight features with low core-value dependency; (4) Won't-Have Now — explicitly deprioritized with rationale. Justify each categorization with evidence.

Questions this framework helps answer
What decision becomes easier after applying MoSCoW Method?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Impact vs Effort Matrix
impact-effort-matrix

Plot initiatives on a 2×2 to identify quick wins and deprioritize hard low-value work.

Plain-English explanation

In plain English: Plot initiatives on a 2×2 to identify quick wins and deprioritize hard low-value work.

When to use it

Use Impact vs Effort Matrix when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence on two axes per candidate: potential impact (user value, revenue, risk reduction) and implementation effort (time, complexity, dependencies). Find industry benchmarks for similar features. Look for case studies showing impact after implementation.

Report synthesis hint

Map findings to four quadrants: (1) Quick Wins — high impact, low effort; (2) Major Projects — high impact, high effort; (3) Fill-Ins — low impact, low effort; (4) Thankless Tasks — low impact, high effort. For each quadrant, list items with supporting evidence. Recommend what to do first and what to deprioritize.
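The quadrant mapping above can be sketched as a tiny classifier. The 1-10 scale and the midpoint threshold of 5.0 are assumptions; calibrate them to however impact and effort are actually rated:

```python
def quadrant(impact: float, effort: float, threshold: float = 5.0) -> str:
    """Map impact/effort ratings (assumed 1-10) to one of the four quadrants."""
    if impact >= threshold:
        return "Quick Win" if effort < threshold else "Major Project"
    return "Fill-In" if effort < threshold else "Thankless Task"
```

Items near the threshold are the ones worth re-estimating before acting on the quadrant label.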

Questions this framework helps answer
What decision becomes easier after applying Impact vs Effort Matrix?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Opportunity Scoring
opportunity-scoring

Find underserved outcomes where importance is high but current satisfaction is low.

Plain-English explanation

In plain English: Find underserved outcomes where importance is high but current satisfaction is low.

When to use it

Use Opportunity Scoring when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for what outcomes users care most about (importance signals) and how well current solutions deliver on those outcomes (satisfaction signals). Find complaints indicating high importance + low satisfaction gaps. Search for NPS data, user surveys, and satisfaction metrics.

Report synthesis hint

Apply the Ulwick formula: Opportunity = Importance + max(Importance − Satisfaction, 0). Present a table ranked by opportunity score: (1) Outcome; (2) Importance evidence; (3) Satisfaction evidence; (4) Score; (5) Current solution gap. Highlight the top 5 underserved outcomes as prime innovation targets.
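The Ulwick formula above translates directly to code; the outcome names and ratings here are illustrative only:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick: Opportunity = Importance + max(Importance - Satisfaction, 0).

    Both inputs are typically on a 1-10 scale; overserved outcomes
    (satisfaction above importance) are floored rather than penalized.
    """
    return importance + max(importance - satisfaction, 0.0)

# Illustrative outcomes; real importance/satisfaction come from the surveys above.
outcomes = {
    "Resolve issue on first contact": opportunity_score(9, 3),  # underserved
    "Customize dashboard colors": opportunity_score(4, 8),      # overserved
}
```

Scores above roughly 10 on a 1-10 scale usually indicate an underserved outcome worth ranking near the top of the table.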

Questions this framework helps answer
What decision becomes easier after applying Opportunity Scoring?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Weighted Scoring
weighted-scoring

Score options against multiple weighted criteria to reflect actual business priorities.

Plain-English explanation

In plain English: Score options against multiple weighted criteria to reflect actual business priorities.

When to use it

Use Weighted Scoring when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence against common scoring criteria: user value, strategic alignment, technical feasibility, revenue potential, competitive differentiation, and risk. Find data to support scoring each option on each criterion. Look for industry benchmarks.

Report synthesis hint

Present a weighted scoring matrix with suggested weights: Strategic Fit 25%, User Value 25%, Feasibility 20%, Revenue 20%, Risk 10%. For each option, provide evidence-based scores per criterion and total weighted score. Add sensitivity analysis: which weight change most reorders the ranking.
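The suggested weights above translate to a one-line weighted sum. The criterion keys and the 1-10 rating convention are assumptions:

```python
# Weights from the suggested matrix above; keys are illustrative names.
WEIGHTS = {
    "strategic_fit": 0.25,
    "user_value": 0.25,
    "feasibility": 0.20,
    "revenue": 0.20,
    "risk": 0.10,  # score risk so that 10 = lowest risk, keeping "higher is better"
}

def weighted_score(scores: dict) -> float:
    """Total weighted score for one option; each criterion rated 1-10."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the weighted criteria")
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
```

For the sensitivity analysis, recompute the ranking after nudging each weight up and down and note which change reorders the top options.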

Questions this framework helps answer
What decision becomes easier after applying Weighted Scoring?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Value vs Complexity
value-vs-complexity

Simple 2×2 that separates high-value simple wins from complex low-value investments.

Plain-English explanation

In plain English: Simple 2×2 that separates high-value simple wins from complex low-value investments.

When to use it

Use Value vs Complexity when your research decision depends on prioritization behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for value signals (demand frequency, revenue potential, competitive necessity) and complexity signals (engineering effort, dependencies, regulatory burden, maintenance cost). Find cases where teams over-invested in complex low-value work.

Report synthesis hint

Map to a 2×2: (1) High Value/Low Complexity — do first; (2) High Value/High Complexity — plan carefully; (3) Low Value/Low Complexity — opportunistic; (4) Low Value/High Complexity — avoid. List examples per quadrant with evidence. Add a "complexity debt" note: which items could be simplified before execution.

Questions this framework helps answer
What decision becomes easier after applying Value vs Complexity?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

Systems Thinking

Category setup
Plain-language read

Use these when the issue is systemic and not solved by one feature tweak.

Technical effect

Emphasizes loops, dependencies, incentives, and second-order effects.

Best use case

Best for ecosystem and organizational complexity.

Causal Loop Diagram
causal-loop

Map reinforcing and balancing feedback loops to understand systemic dynamics.

Plain-English explanation

In plain English: Map reinforcing and balancing feedback loops to understand systemic dynamics.

When to use it

Use Causal Loop Diagram when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for causal relationships in the system: what factors drive each other, what creates feedback loops, and what delays exist between cause and effect. Find historical data showing reinforcing growth cycles or balancing corrections. Look for unintended consequences of past interventions.

Report synthesis hint

Map the system in text: (1) Reinforcing loops (R) — virtuous or vicious cycles; (2) Balancing loops (B) — self-correcting mechanisms; (3) Key leverage points — variables where small changes have outsized effects; (4) Time delays that obscure cause-effect. Highlight which loops currently dominate behavior.

Questions this framework helps answer
What decision becomes easier after applying Causal Loop Diagram?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Iceberg Model
iceberg-model

Look beyond events to patterns, structures, and mental models driving outcomes.

Plain-English explanation

In plain English: Look beyond events to patterns, structures, and mental models driving outcomes.

When to use it

Use Iceberg Model when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for patterns and trends behind observable events. Find structural causes: policies, incentives, resource flows, and power dynamics creating the patterns. Search for the mental models and beliefs sustaining the structures. Look for historical attempts to change symptoms without addressing root structure.

Report synthesis hint

Present findings at four iceberg levels: (1) Events — visible/observable right now; (2) Patterns — recurring trends over time; (3) Structures — systems, incentives, and flows creating the patterns; (4) Mental Models — beliefs sustaining the structures. Emphasize that interventions at the structure and mental model levels are most leveraged.

Questions this framework helps answer
What decision becomes easier after applying Iceberg Model?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
PESTLE Analysis
pestle

Scan macro-environment forces: Political, Economic, Social, Technological, Legal, Environmental.

Plain-English explanation

In plain English: Scan macro-environment forces: Political, Economic, Social, Technological, Legal, Environmental.

When to use it

Use PESTLE Analysis when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search across six dimensions: Political (regulations, policy, geopolitical risk), Economic (market conditions, funding climate), Social (demographic shifts, cultural trends), Technological (emerging tech, disruption vectors), Legal (compliance, IP landscape), Environmental (sustainability pressure, climate risk). Find both threats and opportunities in each.

Report synthesis hint

Structure as a PESTLE matrix with one section per dimension. For each: (1) Top 2-3 forces; (2) Evidence and trend direction; (3) Impact magnitude (low/medium/high); (4) Timeline (near-term vs. long-term). Conclude with a cross-cutting synthesis: which forces are most interconnected and how they amplify each other.

Questions this framework helps answer
What decision becomes easier after applying PESTLE Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Stakeholder Mapping
stakeholder-mapping

Identify and prioritize stakeholders by influence and interest to shape engagement strategy.

Plain-English explanation

In plain English: Identify and prioritize stakeholders by influence and interest to shape engagement strategy.

When to use it

Use Stakeholder Mapping when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for all parties affecting or influenced by the topic: direct users, decision-makers, influencers, regulators, and opponents. Find evidence of each stakeholder's goals, concerns, and influence level. Look for existing coalition dynamics and power structures.

Report synthesis hint

Map stakeholders on two axes: Power/Influence (high-low) and Interest/Alignment (high-low). Four quadrants: (1) Manage closely — high power, high interest; (2) Keep satisfied — high power, low interest; (3) Keep informed — low power, high interest; (4) Monitor — low power, low interest. For each key stakeholder: goal, concern, engagement strategy.

Questions this framework helps answer
What decision becomes easier after applying Stakeholder Mapping?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
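The two-axis mapping in the synthesis hint reduces to a simple classifier. A minimal sketch, assuming 0-10 power and interest scores; the threshold and the example stakeholders are invented for illustration.

```python
# Classify stakeholders into the four engagement quadrants from the
# synthesis hint. Scores run 0-10; the 5.0 threshold is an assumption.

def engagement_quadrant(power: float, interest: float,
                        threshold: float = 5.0) -> str:
    """Return the engagement strategy for one stakeholder."""
    high_power = power >= threshold
    high_interest = interest >= threshold
    if high_power and high_interest:
        return "Manage closely"
    if high_power:
        return "Keep satisfied"
    if high_interest:
        return "Keep informed"
    return "Monitor"

# Example stakeholders (invented scores)
stakeholders = {"Regulator": (9, 3), "Power user": (2, 9)}
for name, (power, interest) in stakeholders.items():
    print(name, "->", engagement_quadrant(power, interest))
```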
Force Field Analysis
force-field

Identify driving forces for change and restraining forces against it to design interventions.

Plain-English explanation

In plain English: Identify driving forces for change and restraining forces against it to design interventions.

When to use it

Use Force Field Analysis when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for forces driving adoption/change (tech enablers, market demand, competitive pressure, regulatory push) and forces resisting change (inertia, switching costs, vested interests, technical barriers, cultural resistance). Find evidence of magnitude for each force.

Report synthesis hint

Present two columns: Driving Forces (for change) and Restraining Forces (against change). For each force: (1) Description; (2) Evidence and strength (1-5). Calculate net force direction. Strategy: strengthen the top 2 driving forces AND/OR weaken the top 2 restraining forces — with specific tactics for each.

Questions this framework helps answer
What decision becomes easier after applying Force Field Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
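The net-force calculation from the synthesis hint can be sketched directly. The force names and 1-5 strength scores below are invented examples, not benchmarks.

```python
# Net-force step from the synthesis hint: sum driving vs. restraining
# strengths (1-5), then pick the top two forces per side to act on.

driving = {"market demand": 4, "regulatory push": 3, "tech enablers": 5}
restraining = {"switching costs": 5, "vested interests": 2, "inertia": 3}

net = sum(driving.values()) - sum(restraining.values())
direction = ("toward change" if net > 0
             else "against change" if net < 0 else "balanced")
print(f"net force: {net:+d} ({direction})")

# Strategy per the hint: strengthen the strongest drivers and/or
# weaken the strongest restraints.
top_driving = sorted(driving, key=driving.get, reverse=True)[:2]
top_restraining = sorted(restraining, key=restraining.get, reverse=True)[:2]
print("strengthen:", top_driving)
print("weaken:", top_restraining)
```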
Systems Archetypes
systems-archetypes

Recognize common systemic behavior patterns to predict dynamics and avoid traps.

Plain-English explanation

In plain English: Recognize common systemic behavior patterns to predict dynamics and avoid traps.

When to use it

Use Systems Archetypes when your research decision depends on systems thinking behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for systemic patterns: Fixes That Fail (short-term fix creates long-term problems), Shifting the Burden, Limits to Growth, Tragedy of the Commons, Escalation (arms races). Find historical examples matching the pattern. Look for leading indicators suggesting which archetype is at play.

Report synthesis hint

Identify which system archetype(s) best describe the dynamics. For each archetype found: (1) Name the archetype; (2) Map specific variables to the structure; (3) Predict trajectory if uninterrupted; (4) Identify the high-leverage intervention that breaks the pattern. Flag co-occurring archetypes that interact.

Questions this framework helps answer
What decision becomes easier after applying Systems Archetypes?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

Strategy

Category setup
Plain-language read

Use these when positioning, competition, and durable advantage matter.

Technical effect

Shifts analysis toward market structure and strategic leverage.

Best use case

Best for GTM and competitive analysis.

Blue Ocean Strategy
blue-ocean

Create uncontested market space by eliminating, reducing, raising, and creating value factors.

Plain-English explanation

In plain English: Create uncontested market space by eliminating, reducing, raising, and creating value factors.

When to use it

Use Blue Ocean Strategy when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for: value factors the industry competes on that customers do not actually value (eliminate/reduce candidates); value factors customers want but the industry underdelivers (raise candidates); value factors no current solution offers (create candidates). Find evidence of non-customers and why they do not buy any existing solution.

Report synthesis hint

Apply the ERRC grid: (1) Eliminate — factors to remove with evidence they add cost but not value; (2) Reduce — factors to lower below industry standard; (3) Raise — factors to lift above standard with demand evidence; (4) Create — factors never offered with latent need evidence. Conclude with a Strategy Canvas showing before/after value curve.

Questions this framework helps answer
What decision becomes easier after applying Blue Ocean Strategy?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Porter's Five Forces
porters-five-forces

Analyze competitive intensity through five structural forces that shape industry profitability.

Plain-English explanation

In plain English: Analyze competitive intensity through five structural forces that shape industry profitability.

When to use it

Use Porter's Five Forces when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence of each force: supplier concentration and switching costs, buyer bargaining leverage, barriers to entry and new entrant activity, substitute products and adoption rates, current competitor intensity and differentiation strategies. Find market structure data: concentration ratios, margins.

Report synthesis hint

Score each force (Low/Medium/High intensity): (1) Threat of New Entrants — barriers evidence; (2) Bargaining Power of Suppliers; (3) Bargaining Power of Buyers; (4) Threat of Substitutes, including non-consumption; (5) Rivalry Among Competitors. Conclude with an overall attractiveness assessment and the key strategic implication of each force.

Questions this framework helps answer
What decision becomes easier after applying Porter's Five Forces?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
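The Low/Medium/High scoring in the synthesis hint rolls up into an overall attractiveness read. A sketch: the intensity levels are from the hint, but the example scores and the cutoffs are invented.

```python
# Roll up per-force Low/Medium/High scores into an overall industry
# attractiveness label. Cutoffs (1.7 / 2.4) are illustrative assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def attractiveness(scores: dict) -> str:
    """Average force intensity -> overall attractiveness label."""
    avg = sum(LEVELS[level] for level in scores.values()) / len(scores)
    if avg < 1.7:
        return "attractive"
    if avg < 2.4:
        return "moderately attractive"
    return "unattractive"

# Example scores (invented)
forces = {
    "threat of new entrants": "medium",
    "supplier power": "low",
    "buyer power": "high",
    "threat of substitutes": "medium",
    "rivalry": "high",
}
print(attractiveness(forces))
```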
Value Chain Analysis
value-chain

Map primary and support activities to identify where value is created and where to optimize.

Plain-English explanation

In plain English: Map primary and support activities to identify where value is created and where to optimize.

When to use it

Use Value Chain Analysis when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for key activities in creating and delivering value: inbound logistics, operations, outbound logistics, marketing and sales, and service (primary activities). Support activities: procurement, technology, HR, infrastructure. Find where margins are highest, costs are concentrated, and competitors differentiate.

Report synthesis hint

Map the value chain with Primary Activities (Inbound → Operations → Outbound → Marketing → Service) and Support Activities (Infrastructure, HR, Technology, Procurement). For each activity: (1) Key cost drivers; (2) Differentiation potential; (3) Competitive benchmark. Identify the 2-3 activities offering the greatest leverage for competitive advantage.

Questions this framework helps answer
What decision becomes easier after applying Value Chain Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Competitive Moats
competitive-moats

Identify sustainable competitive advantages: network effects, switching costs, cost advantages, intangibles.

Plain-English explanation

In plain English: Identify sustainable competitive advantages: network effects, switching costs, cost advantages, intangibles.

When to use it

Use Competitive Moats when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for evidence of durable advantages: network effects (value growing with users), switching costs (what keeps customers locked in), cost advantages (scale economics, proprietary processes), intangible assets (brand loyalty, IP, regulatory licenses). Find evidence of moat erosion and competitive attempts to replicate.

Report synthesis hint

Assess each moat type: (1) Network Effects — value-increases-with-scale evidence; (2) Switching Costs — evidence users stay despite alternatives; (3) Cost Advantages — structural cost leadership evidence; (4) Intangible Assets — brand premium or IP protection evidence; (5) Efficient Scale — natural monopoly dynamics. Rate each moat: wide / narrow / none. Identify the primary moat and its durability.

Questions this framework helps answer
What decision becomes easier after applying Competitive Moats?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Platform Strategy
platform-strategy

Design multi-sided platform dynamics: producers, consumers, core interaction, and network effects.

Plain-English explanation

In plain English: Design multi-sided platform dynamics: producers, consumers, core interaction, and network effects.

When to use it

Use Platform Strategy when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for two-sided or multi-sided market dynamics: producer side, consumer side, and the core value exchange. Find evidence of same-side and cross-side network effects. Search for platform governance decisions, monetization models, and chicken-and-egg bootstrapping strategies.

Report synthesis hint

Map the platform: (1) Sides — producers and consumers; (2) Core Interaction — primary value exchange; (3) Network Effects — same-side and cross-side dynamics; (4) Monetization — where value is captured; (5) Governance — curation and quality control. Identify the key bootstrapping challenge and how successful platforms solved it.

Questions this framework helps answer
What decision becomes easier after applying Platform Strategy?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
First Principles Thinking
first-principles

Break down assumptions to fundamental truths, then reason up to novel solutions.

Plain-English explanation

In plain English: Break down assumptions to fundamental truths, then reason up to novel solutions.

When to use it

Use First Principles Thinking when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the fundamental physical, economic, or behavioral constraints governing the topic. Find evidence challenging common assumptions: what has always been done a certain way, and is it actually necessary? Search for cases where someone violated industry conventions and succeeded.

Report synthesis hint

Structure as a first principles decomposition: (1) Identify the convention being challenged; (2) Decompose to fundamental constraints — what is physically or economically immutable; (3) Identify which assumptions are convention, not constraint; (4) Rebuild from fundamentals — what would the optimal solution look like without convention; (5) Map the gap between current state and the first-principles ideal.

Questions this framework helps answer
What decision becomes easier after applying First Principles Thinking?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category Design
category-design

Create and dominate a new market category rather than competing in an existing one.

Plain-English explanation

In plain English: Create and dominate a new market category rather than competing in an existing one.

When to use it

Use Category Design when your research decision depends on strategy behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for conditions suggesting a new category is emerging: problems existing categories do not address, terminology shifts, new buyer profiles, and enabling technology changes. Find examples of successful category creation. Search for problems currently addressed by a patchwork of workarounds.

Report synthesis hint

Assess category design potential: (1) Category Problem — a new problem existing categories do not solve; (2) Category Solution — the new way, framed as a category rather than a feature; (3) Category King potential — is a dominant player emerging; (4) Ecosystem adoption — partners, investors, press; (5) Conditioning — how to educate the market. Recommend whether to compete in or create a category.

Questions this framework helps answer
What decision becomes easier after applying Category Design?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

Validation

Category setup
Plain-language read

Use these when you need to test quickly before major investment.

Technical effect

Focuses output on experiments, proof signals, and falsifiable assumptions.

Best use case

Best for MVP and pre-build risk reduction.

Pretotype Testing
pretotype-testing

Test the riskiest assumption at lowest cost before building anything real.

Plain-English explanation

In plain English: Test the riskiest assumption at lowest cost before building anything real.

When to use it

Use Pretotype Testing when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the fastest validation methods in similar contexts: landing page tests, fake door tests, manual simulations, and prototype user reactions. Find case studies of products validated with minimal investment. Search for demand signals: waitlists, pre-orders, community formation around the problem.

Report synthesis hint

Identify the riskiest assumption that must be true for the product to succeed. For each key assumption: (1) The assumption stated; (2) A pretotype design to test it; (3) Success metric — what signal validates it; (4) Cost and timeline estimate. Prioritize by risk: which assumption, if false, would be most fatal.

Questions this framework helps answer
What decision becomes easier after applying Pretotype Testing?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Smoke Test / Fake Door
fake-door-test

Measure demand by advertising a feature that does not exist yet and tracking intent signals.

Plain-English explanation

In plain English: Measure demand by advertising a feature that does not exist yet and tracking intent signals.

When to use it

Use Smoke Test / Fake Door when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for fake door test examples and benchmark conversion rates in comparable markets. Find which messaging and value propositions resonate most with the target segment. Look for existing landing pages or ads testing similar offers and their reported results.

Report synthesis hint

Design a smoke test proposal: (1) Hypothesis — what demand signal validates the opportunity; (2) Test design — specific CTA or landing page; (3) Traffic source and targeting; (4) Success benchmark — the conversion rate that justifies building; (5) Interpretation guide — what different results mean for the go/no-go decision. Reference comparable test benchmarks from research.

Questions this framework helps answer
What decision becomes easier after applying Smoke Test / Fake Door?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
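The interpretation-guide step from the synthesis hint amounts to comparing observed conversion against a pre-committed benchmark. A sketch: the 3% benchmark and the halve-it "iterate" band are invented cutoffs, not industry standards.

```python
# Go / iterate / no-go read of a fake-door test against a pre-committed
# benchmark. Cutoffs below are illustrative assumptions.

def smoke_test_verdict(clicks: int, visitors: int, benchmark: float) -> str:
    """Compare observed intent-click rate against the build threshold."""
    rate = clicks / visitors
    if rate >= benchmark:
        return "go"
    if rate >= benchmark / 2:
        return "iterate messaging"
    return "no-go"

# 42 intent clicks on 1,000 visitors vs. a pre-committed 3% benchmark
print(smoke_test_verdict(42, 1000, 0.03))
```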
Wizard of Oz MVP
wizard-of-oz

Simulate automated behavior with manual human effort to validate without building automation.

Plain-English explanation

In plain English: Simulate automated behavior with manual human effort to validate without building automation.

When to use it

Use Wizard of Oz MVP when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for Wizard of Oz MVP examples — services that appeared automated but were manually fulfilled. Find evidence of the target interaction frequency and latency tolerance. Search for operational benchmarks: what manual fulfillment costs per transaction at the target volume.

Report synthesis hint

Design the Wizard of Oz test: (1) The automated capability to simulate; (2) The manual process behind the curtain; (3) User-facing interface design; (4) Operational burden estimate per interaction; (5) Key metrics: usage, satisfaction, and demand signal; (6) Break-even point between manual and automated. Flag what to learn that justifies the operational cost.

Questions this framework helps answer
What decision becomes easier after applying Wizard of Oz MVP?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Concierge MVP
concierge-mvp

Manually deliver the value proposition to a small group before building any product.

Plain-English explanation

In plain English: Manually deliver the value proposition to a small group before building any product.

When to use it

Use Concierge MVP when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for concierge MVP examples in comparable domains. Find what white-glove service delivery looked like and the unit economics: cost vs. value delivered. Look for what teams learned from high-touch delivery that they would not have learned from a product alone.

Report synthesis hint

Design the concierge MVP: (1) Target segment — who to serve manually first; (2) Service scope — what will be done by hand; (3) Success metrics — results that must be achieved for the user; (4) Learning goals — what validated insights does this provide; (5) Unit economics — cost of service vs. willingness to pay; (6) Automation roadmap — which steps to automate first based on frequency.

Questions this framework helps answer
What decision becomes easier after applying Concierge MVP?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
A/B Testing Framework
ab-testing

Design controlled experiments to test hypotheses with statistical rigor.

Plain-English explanation

In plain English: Design controlled experiments to test hypotheses with statistical rigor.

When to use it

Use A/B Testing Framework when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for industry benchmarks for the metric being tested. Find comparable A/B test case studies and their effect sizes. Search for confounding variables that could contaminate results. Look for minimum detectable effect benchmarks in similar product contexts.

Report synthesis hint

Design the test: (1) Hypothesis — specific change and expected effect direction; (2) Primary metric determining success; (3) Guardrail metrics to monitor for regressions; (4) Sample size for 80% power at the minimum detectable effect; (5) Duration — accounting for novelty effect and weekly cycles; (6) Pre-committed decision criteria. Reference benchmark conversion rates from research.

Questions this framework helps answer
What decision becomes easier after applying A/B Testing Framework?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
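Step (4) of the synthesis hint — sample size for 80% power at the minimum detectable effect — follows the standard normal-approximation formula for a two-proportion test. A sketch using only the standard library; the 5% baseline and 1-point MDE are example values.

```python
import math
from statistics import NormalDist

# Visitors needed per variant to detect an absolute conversion lift of
# `mde` at the given significance and power (normal approximation).

def sample_size_per_arm(baseline: float, mde: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 two-sided
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# Example: 5% baseline conversion, +1 percentage point MDE
print(sample_size_per_arm(0.05, 0.01))
```

Note how quickly the requirement grows as the MDE shrinks: halving the detectable effect roughly quadruples the required sample per arm.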
North Star Metric
north-star-metric

Identify the single metric that best captures the core value delivered to users.

Plain-English explanation

In plain English: Identify the single metric that best captures the core value delivered to users.

When to use it

Use North Star Metric when your research decision depends on validation behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for how successful companies in this space define their North Star Metric. Find examples of companies that over-optimized on proxies (DAU, revenue) and lost product-market fit. Search for what behavior most strongly correlates with retention and expansion.

Report synthesis hint

Evaluate candidate North Star Metrics on three criteria: (1) Does it reflect real value to the user, not just business value? (2) Is it a leading indicator of long-term retention? (3) Can the full team influence it? Recommend one North Star Metric with rationale, plus 3-5 input metrics that drive it. Flag anti-patterns: metrics that look good but hide unhealthy dynamics.

Questions this framework helps answer
What decision becomes easier after applying North Star Metric?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Category

AI/Deep Research

Category setup
Plain-language read

Use these when ambiguity is high and you need stronger reasoning depth.

Technical effect

Changes reasoning shape: decomposition, critique, scenario analysis, adversarial checks.

Best use case

Best for strategic uncertainty and nuanced questions.

Chain-of-Thought Research
chain-of-thought

Break the research question into explicit sub-questions and answer each before synthesizing.

Plain-English explanation

In plain English: Break the research question into explicit sub-questions and answer each before synthesizing.

When to use it

Use Chain-of-Thought Research when your research decision depends on AI/deep research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Decompose the main research question into 4-6 sub-questions that must be answered to fully address it. Search each sub-question independently with targeted queries. For each, look for both confirming and disconfirming evidence. Prioritize primary sources.

Report synthesis hint

Present research as an explicit reasoning chain: (1) Restate the main question; (2) For each sub-question: the question, evidence, and provisional answer; (3) Show how sub-answers combine into the final answer; (4) State confidence level for each step; (5) Flag the weakest link — the step with least evidence. Conclude with the answer and the reasoning path.

Questions this framework helps answer
What decision becomes easier after applying Chain-of-Thought Research?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Multi-Perspective Analysis
multi-perspective

Analyze the question through diverse stakeholder lenses to surface blind spots.

Plain-English explanation

In plain English: Analyze the question through diverse stakeholder lenses to surface blind spots.

When to use it

Use Multi-Perspective Analysis when your research decision depends on AI/deep research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for perspectives from diverse stakeholders: end users, operators, regulators, investors, critics, and domain experts. Find arguments both for and against the dominant view. Search for contrarian analyses. Look for how different industries or geographies approach the same question differently.

Report synthesis hint

Present distinct perspectives: (1) Label each (User, Investor, Regulator, Critic, etc.); (2) Key claims and supporting evidence; (3) What each perspective identifies as the core problem; (4) Where perspectives conflict and why. Conclude with a synthesis: what view emerges when all perspectives are weighted, and which perspective is most underrepresented in mainstream analysis.

Questions this framework helps answer
What decision becomes easier after applying Multi-Perspective Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Red Team Analysis
red-team

Stress-test assumptions and plans by systematically arguing the opposing case.

Plain-English explanation

In plain English: Stress-test assumptions and plans by systematically arguing the opposing case.

When to use it

Use Red Team Analysis when your research decision depends on AI/deep research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for failure cases, counterarguments, and disconfirming evidence. Find critiques from domain experts opposing the mainstream view. Search for historical precedents where the optimistic case failed. Look for second-order effects, unintended consequences, and edge cases that break the primary thesis.

Report synthesis hint

Lead with the strongest version of the opposing argument — steelman the countercase fully. For each major claim in the primary thesis: (1) Most credible counterargument; (2) Supporting evidence; (3) Strength rating (strong/medium/weak). Conclude with: which assumptions are most fragile, what evidence would change the thesis, and which risk mitigations matter most.

Questions this framework helps answer
What decision becomes easier after applying Red Team Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Scenario Planning
scenario-planning

Map multiple plausible futures to stress-test strategy robustness across different outcomes.

Plain-English explanation

In plain English: Map multiple plausible futures to stress-test strategy robustness across different outcomes.

When to use it

Use Scenario Planning when your research decision depends on AI/deep research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for key uncertainties that will most determine the future: technological change vectors, regulatory directions, economic cycles, and behavioral shifts. Find historical scenario planning analyses for comparable markets. Search for leading indicators signaling which future is materializing.

Report synthesis hint

Define 3-4 scenarios: (1) Name and narrative; (2) Key assumptions and uncertainties differentiating each; (3) Probability estimate with rationale; (4) Implications for the topic in each scenario. Identify decisions that are robust across all scenarios (no-regret moves) vs. scenario-specific bets. Conclude with the most likely scenario and highest-priority strategic response.

Questions this framework helps answer
What decision becomes easier after applying Scenario Planning?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
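The robustness check from the synthesis hint — no-regret moves vs. scenario-specific bets — can be sketched as a scan over a payoff table. The scenarios, moves, and payoff scores below are invented for illustration.

```python
# A move is "no-regret" if its payoff is positive in every scenario;
# everything else is a scenario-specific bet. All values are invented.

scenarios = ["rapid adoption", "slow burn", "regulatory clampdown"]
payoffs = {
    "invest in compliance": {"rapid adoption": 1, "slow burn": 1,
                             "regulatory clampdown": 3},
    "aggressive expansion": {"rapid adoption": 3, "slow burn": -1,
                             "regulatory clampdown": -2},
    "build partnerships":   {"rapid adoption": 2, "slow burn": 1,
                             "regulatory clampdown": 1},
}

no_regret = [move for move, p in payoffs.items()
             if all(p[s] > 0 for s in scenarios)]
bets = [move for move in payoffs if move not in no_regret]
print("no-regret moves:", no_regret)
print("scenario-specific bets:", bets)
```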
Analogical Reasoning
analogical-reasoning

Find structural analogies from other domains and extract transferable insights.

Plain-English explanation

In plain English: Find structural analogies from other domains and extract transferable insights.

When to use it

Use Analogical Reasoning when your research decision depends on AI/deep research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for analogous problems in different industries sharing structural similarities: same dynamics, constraints, or stakeholder relationships. Find how analogous problems were solved and what made those solutions work. Look for "what industry is this the X of?" framings. Search for both successful analogies and cases where analogies misled.

Report synthesis hint

Present 3-5 structural analogies: (1) The analogous domain and problem; (2) How the structure maps to the current problem and where the analogy breaks; (3) How the analogous problem was solved; (4) Transferable insight for the current context; (5) Key differences limiting applicability. Conclude with the most actionable insight from the strongest analogy.

Questions this framework helps answer
What decision becomes easier after applying Analogical Reasoning?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Socratic Method
socratic-method

Use systematic questioning to expose assumptions, contradictions, and deeper truths.

Plain-English explanation

In plain English: Use systematic questioning to expose assumptions, contradictions, and deeper truths.

When to use it

Use the Socratic Method when your research decision depends on AI / deep-research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for the deepest assumptions underlying the conventional wisdom about the topic. Find evidence challenging each assumption. Search for questions that most experts avoid. Look for contradictions between stated beliefs and actual behavior in the field. Find the unanswered questions that would most change the field if answered.

Report synthesis hint

Structure as a sequence of questions and answers that progressively deepen understanding: (1) Surface question and common answer; (2) Evidence that challenges the common answer; (3) Deeper question revealed; continue for 4-5 levels. Conclude with the deepest unresolved question — the one that, if answered, would most change how the topic is understood. Flag which assumptions remain untested.
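The question-and-answer sequence above is effectively a linked ladder: each rung's challenged answer reveals the next, deeper question. A sketch of that shape, with hypothetical example rungs:

```typescript
// One rung per level of the Socratic descent described above.
interface Rung {
  question: string;      // the question at this level
  commonAnswer: string;  // the answer conventional wisdom gives
  challenge: string;     // evidence that undermines that answer
  deeper?: Rung;         // the deeper question revealed; absent at the bottom
}

function deepestQuestion(rung: Rung): string {
  // Walk to the bottom of the ladder; the last question is the deepest
  // unresolved one the report should conclude with.
  return rung.deeper ? deepestQuestion(rung.deeper) : rung.question;
}

// Illustrative three-level ladder (a real report would run 4-5 levels).
const ladder: Rung = {
  question: "Why do users churn?",
  commonAnswer: "Price sensitivity",
  challenge: "Churned users rarely mention price in exit interviews",
  deeper: {
    question: "Why do exit interviews contradict the pricing thesis?",
    commonAnswer: "Social desirability bias",
    challenge: "Anonymous surveys show the same pattern",
    deeper: {
      question: "What unmet job is the product failing to do?",
      commonAnswer: "(open)",
      challenge: "(untested)",
    },
  },
};
```

Rungs whose `challenge` is still marked untested are exactly the assumptions the hint says to flag.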

Questions this framework helps answer
What decision becomes easier after applying Socratic Method?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?
Pre-Mortem Analysis
pre-mortem

Imagine the project has already failed and work backwards to identify risks and failure modes.

Plain-English explanation

In plain English: Imagine the project has already failed and work backwards to identify risks and failure modes.

When to use it

Use Pre-Mortem Analysis when your research decision depends on AI / deep-research behavior rather than a generic summary.

Setup flow
1. Brief + objective
2. Query plan bias
3. Synthesis structure
Developer implementation details
Search planning hint

Search for failure modes and common reasons projects fail in this domain. Find post-mortems and failure case studies. Search for warning signs and leading indicators of failure. Look for risks that others in similar situations systematically ignored until it was too late.

Report synthesis hint

Structure as a pre-mortem: (1) State the failure scenario — the project has failed; (2) For each likely failure cause: evidence it could happen, early warning signs, and preventive action; (3) Rank failure modes by likelihood × impact; (4) Identify the single most probable cause of failure; (5) Prescribe the 3 most important early interventions. Lead with the hardest-to-see risk, not the most obvious one.
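Step (3) of the hint, ranking failure modes by likelihood × impact, can be sketched directly; the 1-5 scoring scales and the example failure modes below are illustrative assumptions, not part of the hint:

```typescript
// One entry per failure cause, per step (2) of the synthesis hint.
interface FailureMode {
  cause: string;
  likelihood: number;      // illustrative 1-5 scale
  impact: number;          // illustrative 1-5 scale
  warningSigns: string[];  // early warning signs
  prevention: string;      // preventive action
}

function rankFailureModes(modes: FailureMode[]): FailureMode[] {
  // Highest likelihood × impact first (step 3); the top entry is the
  // single most probable-and-costly cause of failure (step 4).
  return [...modes].sort(
    (a, b) => b.likelihood * b.impact - a.likelihood * a.impact,
  );
}

// Toy data: scores are 4×3=12, 3×5=15, 2×5=10.
const modes: FailureMode[] = [
  { cause: "Scope creep", likelihood: 4, impact: 3,
    warningSigns: ["untracked asks"], prevention: "change control" },
  { cause: "Key dependency slips", likelihood: 3, impact: 5,
    warningSigns: ["missed upstream milestones"], prevention: "buffer plus fallback vendor" },
  { cause: "Silent user disinterest", likelihood: 2, impact: 5,
    warningSigns: ["flat activation"], prevention: "weekly cohort review" },
];

const ranked = rankFailureModes(modes);
```

Note the ranking alone does not satisfy the hint's last instruction: the hardest-to-see risk (here, plausibly the silent one) may not be the top-scoring one, and it is the one to lead with.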

Questions this framework helps answer
What decision becomes easier after applying Pre-Mortem Analysis?
What evidence should be weighted highest under this lens?
What would this framework likely deprioritize?