Research Methodology Selection and Tradeoffs Questions
Covers how to choose, justify, and execute research and analysis methods given research questions, stakeholder needs, and real-world constraints such as limited time, budget, or access to users. Candidates should be able to compare qualitative methods (interviews, usability testing, ethnography, diary studies) with quantitative methods (surveys, analytics, split testing, controlled experiments), and explain when and how to combine them into mixed-methods designs. The topic includes core decision criteria and trade-offs: generative versus evaluative goals, depth versus breadth, speed versus rigor, sample size and power considerations, cost versus validity, internal validity versus external generalizability, and short-term versus longitudinal designs. Practical skills include aligning methodology to success metrics and business objectives; scoping minimal viable research designs; selecting sampling strategies and proxies; making recruitment and instrumentation choices; pilot testing; estimating sample size for quantitative work; mitigating bias and threats to validity; documenting limitations and uncertainty; communicating and defending methodological choices to non-research stakeholders; and ensuring ethical and privacy safeguards and data quality in constrained or iterative studies.
Medium · Technical
33 practiced
You're designing an A/B test for an onboarding CTA click-through. Baseline conversion is 10%. You want to detect an absolute increase to 15% (10% → 15%) with 80% power and alpha = 0.05. Estimate the required sample size per variant, show the formula or approach you used, and discuss assumptions (variance, continuity, independent observations) that could meaningfully change the estimate.
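For reference, one standard approach the question points at is the normal-approximation formula for a two-sided, two-proportion z-test. A minimal sketch in Python (stdlib only), using the 10% → 15% figures from the question; the function name is illustrative, and the formula shown is one common choice (no continuity correction):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-variant n for a two-sided two-proportion z-test
    (normal approximation, no continuity correction)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2                          # pooled proportion under H0
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(sample_size_per_arm(0.10, 0.15))  # 686 per variant
```

Adding a continuity correction pushes the estimate somewhat higher, and correlated observations (e.g., repeat visits from the same user) inflate variance beyond what this independent-observations formula assumes — both directions a strong answer should discuss.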
Hard · Technical
43 practiced
You need to convince skeptical executives to fund a six-month longitudinal mixed-method study that will delay some roadmap work. Draft a concise, evidence-based pitch that: quantifies expected ROI or risk mitigation (use realistic assumptions), outlines phased deliverables and pilot options to deliver early value, lists the KPIs that will be impacted, and proposes governance to align research progress with product milestones.
Easy · Technical
40 practiced
What is pilot testing in user research? Describe the primary objectives of running a pilot, then explain how you would run a short pilot for an online survey and a separate pilot for a moderated usability session. Finally, list objective criteria you would use to decide whether to revise the instrument or proceed to the full study.
Hard · Technical
30 practiced
Your team wants to use a multi-armed bandit (MAB) to optimize several UI variants in production, but there are constraints on user experience risk and fairness across demographic groups. Describe the experimental design choices, safety constraints (e.g., minimum allocation, hard guardrails), fairness constraints, monitoring and stopping rules, and how you would interpret/communicate bandit-derived recommendations versus a traditional A/B test.
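As a hypothetical illustration of one safety constraint the question mentions, a minimum-allocation floor can be layered on top of a greedy bandit so that no variant's traffic share drops below a guardrail. The arm names, `min_alloc` value, and true rates below are made up for the sketch; this is epsilon-greedy-style allocation, not a full Thompson-sampling design:

```python
import random

def choose_arm(successes, pulls, min_alloc=0.10):
    """Greedy allocation with a per-arm floor: with probability
    k * min_alloc, pick an arm uniformly at random (so each of the
    k arms keeps roughly a min_alloc share of traffic); otherwise
    exploit the best empirical success rate seen so far."""
    k = len(pulls)
    if random.random() < k * min_alloc:
        return random.randrange(k)
    rates = [s / p if p else 0.0 for s, p in zip(successes, pulls)]
    return max(range(k), key=rates.__getitem__)

# Simulated run: three UI variants with hidden true rates (made up).
random.seed(0)
true_rates = [0.10, 0.12, 0.15]
successes, pulls = [0, 0, 0], [0, 0, 0]
for _ in range(20_000):
    arm = choose_arm(successes, pulls)
    pulls[arm] += 1
    successes[arm] += random.random() < true_rates[arm]

print(pulls)  # every arm retains roughly its floored share of traffic
```

The floor is what preserves the ability to run valid post-hoc comparisons (and fairness audits per demographic slice, if allocation is stratified); a pure greedy bandit can starve losing arms of data and make its final estimates for them unreliable.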
Easy · Technical
31 practiced
Explain the difference between generative and evaluative research in user-centered design. Provide concrete examples of methods typically used for each (e.g., contextual inquiry, diary studies, formative usability testing, summative benchmarking) and describe two real-world situations where one approach is clearly preferable. Finally, give one concrete example of a project that benefits from combining generative and evaluative methods and explain the sequence you would use.