InterviewStack.io

A/B Test Design Questions

Designing and running A/B (split) tests to evaluate product and feature changes. Candidates should be able to form clear null and alternative hypotheses; select appropriate primary and guardrail metrics that reflect both product goals and user safety; choose randomization and assignment strategies; and calculate sample size and test duration using power analysis and minimum-detectable-effect reasoning. They should understand applied statistical analysis concepts, including p-values, confidence intervals, one-tailed and two-tailed tests, sequential monitoring and stopping rules, and corrections for multiple comparisons. Practical abilities include diagnosing inconclusive or noisy experiments; detecting and mitigating common biases such as peeking, selection bias, novelty effects, seasonality, instrumentation errors, and network interference; and deciding when experiments are appropriate versus alternative evaluation methods. Senior candidates should reason about trade-offs between speed and statistical rigor, plan safe rollouts and ramping, define rollback plans, and communicate uncertainty and business implications to technical and non-technical stakeholders. For developer-facing products, candidates should also consider constraints such as small populations, cross-team effects, ethical concerns, and special instrumentation needs.
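For the power-analysis piece, a minimal sketch of sample-size estimation for a two-proportion test, assuming statsmodels is available and using a hypothetical baseline rate and minimum detectable effect:

```python
# Minimal sketch: sample size per arm for a two-sided two-proportion test.
# The baseline rate, MDE, and daily traffic figure below are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # hypothetical control conversion rate
mde = 0.01        # absolute minimum detectable effect (10% -> 11%)
alpha = 0.05      # two-sided significance level
power = 0.80      # desired statistical power

effect_size = proportion_effectsize(baseline + mde, baseline)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)
print(f"Required sample size per arm: {n_per_arm:.0f}")

# Test duration follows from traffic: with, say, 5,000 eligible users per day
# per arm, duration_days = n_per_arm / 5000.
```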

Medium · Technical
Your analytics team runs 5 similar A/B tests concurrently on the same product area. What statistical issues arise from multiple simultaneous experiments? Compare and contrast family-wise error rate correction (e.g., Bonferroni) with false discovery rate control (e.g., Benjamini-Hochberg) for this setting, including impact on power and typical use cases.
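For illustration, a minimal sketch contrasting the two corrections on hypothetical p-values from 5 concurrent experiments, assuming statsmodels is available:

```python
# Minimal sketch: Bonferroni (FWER) vs Benjamini-Hochberg (FDR) corrections.
# The raw p-values are hypothetical.
from statsmodels.stats.multitest import multipletests

p_values = [0.003, 0.012, 0.015, 0.040, 0.300]

reject_bonf, p_bonf, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
reject_bh, p_bh, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, b, bh in zip(p_values, reject_bonf, reject_bh):
    print(f"p={raw:.3f}  Bonferroni reject={b}  BH reject={bh}")

# Bonferroni controls the family-wise error rate but is conservative (lower power);
# BH controls the expected proportion of false discoveries and typically rejects
# more hypotheses, as it does on these example values.
```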
Hard · Technical
Design an experiment results pipeline that supports real-time monitoring, historical aggregations, and reproducible post-hoc analysis. Describe the data sources, ingestion and transformation steps, storage models (e.g., event logs and aggregated tables), key tables/views for analysts, and how to enable rerunning analyses for auditing and reproducibility.
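As one illustrative transformation step, a minimal pandas sketch that rolls a raw assignment/event log up into a per-user metrics table and a daily per-arm aggregate; the column names are hypothetical, not a prescribed schema:

```python
# Minimal sketch: event log -> per-user rollup -> daily per-arm aggregate.
# Column names (user_id, variant, event_ts, clicked) are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id":  [1, 1, 2, 3, 3, 4],
    "variant":  ["control", "control", "treatment", "treatment", "treatment", "control"],
    "event_ts": pd.to_datetime([
        "2024-05-01 10:00", "2024-05-01 11:00", "2024-05-01 12:00",
        "2024-05-02 09:00", "2024-05-02 10:00", "2024-05-02 11:00",
    ]),
    "clicked":  [1, 0, 1, 0, 1, 0],
})

# Per-user rollup: one row per experimental unit, preserving the assigned variant.
per_user = (events
            .groupby(["user_id", "variant"], as_index=False)
            .agg(events=("clicked", "size"), clicks=("clicked", "sum")))

# Daily per-arm aggregate: the kind of table a dashboard or analyst view reads.
daily = (events
         .assign(day=events["event_ts"].dt.date)
         .groupby(["day", "variant"], as_index=False)
         .agg(users=("user_id", "nunique"), clicks=("clicked", "sum")))

print(per_user)
print(daily)
# Keeping the immutable event log alongside versioned aggregation code is what
# makes post-hoc analyses rerunnable for auditing.
```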
Medium · Technical
Explain the novelty effect (initial surge in engagement when users first see a change) and how it may bias early A/B test results. Propose detection strategies and analysis techniques to distinguish short-lived novelty from sustained treatment effects, including cohort and time-based analyses.
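A minimal sketch of one such time-based analysis, computing the lift separately by days since first exposure on hypothetical data:

```python
# Minimal sketch: treatment lift by exposure-age cohort to detect novelty decay.
# Columns (variant, days_since_exposure, engagement_rate) and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "variant": ["treatment"] * 6 + ["control"] * 6,
    "days_since_exposure": [0, 1, 2, 7, 14, 21] * 2,
    "engagement_rate": [0.30, 0.27, 0.24, 0.21, 0.20, 0.20,   # treatment
                        0.20, 0.20, 0.19, 0.20, 0.20, 0.19],  # control
})

lift = (df.pivot(index="days_since_exposure", columns="variant",
                 values="engagement_rate")
          .assign(lift=lambda t: t["treatment"] - t["control"]))
print(lift)

# A lift that decays toward zero as exposure age grows is a signature of novelty;
# a sustained effect stays roughly flat across exposure-age cohorts.
```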
Hard · Technical
You ran an experiment that produced wide confidence intervals and high metric variance, yielding an inconclusive result. Create a structured diagnostic checklist to find root causes across data, metric engineering, randomization, and segmentation. For each likely cause, propose remediation steps and how you would validate the fix.
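One concrete item such a checklist usually includes is a sample ratio mismatch (SRM) test on the randomization itself; a minimal sketch with hypothetical counts, assuming scipy is available:

```python
# Minimal sketch: sample ratio mismatch (SRM) check, which catches broken
# randomization or logging before deeper metric debugging. Counts are
# hypothetical; the intended split here is 50/50.
from scipy.stats import chisquare

observed = [50_912, 49_088]          # users actually observed per arm
expected = [sum(observed) / 2] * 2   # expected under the intended 50/50 split

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2={stat:.1f}, p={p_value:.2e}")

# A very small p-value (commonly p < 0.001) indicates SRM: investigate assignment,
# bot filtering, and instrumentation before trusting any metric comparison.
```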
Hard · Technical
A short-term experiment increases click volume but preliminary data shows decreased long-term retention after 30 days. Propose an evaluation plan to measure both short-term and long-term impacts prior to shipping, including experiment length, cohort tracking, metrics to capture lifetime value, and statistical analyses to ensure long-term business value is preserved.
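For the long-term piece, a minimal sketch comparing 30-day retention between arms with a two-proportion z-test on hypothetical cohort counts, assuming statsmodels is available:

```python
# Minimal sketch: two-proportion z-test on 30-day retention per arm.
# Cohort counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

retained = [4_150, 4_420]    # users still active at day 30: [treatment, control]
enrolled = [10_000, 10_000]  # users assigned to each arm at the start

stat, p_value = proportions_ztest(count=retained, nobs=enrolled)
print(f"z={stat:.2f}, p={p_value:.4f}")
print(f"treatment retention={retained[0]/enrolled[0]:.1%}, "
      f"control retention={retained[1]/enrolled[1]:.1%}")

# Pairing this with the short-term click metric shows whether the click gain
# is paid for with a retention loss that erodes long-term value.
```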
