InterviewStack.io

A/B Test Design Questions

Designing and running A/B tests and split tests to evaluate product and feature changes. Candidates should be able to form clear null and alternative hypotheses; select appropriate primary metrics and guardrail metrics that reflect both product goals and user safety; choose randomization and assignment strategies; and calculate sample size and test duration using power analysis and minimum-detectable-effect reasoning. They should understand applied statistical analysis concepts including p-values, confidence intervals, one-tailed and two-tailed tests, sequential monitoring and stopping rules, and corrections for multiple comparisons. Practical abilities include diagnosing inconclusive or noisy experiments; detecting and mitigating common biases such as peeking, selection bias, novelty effects, seasonality, instrumentation errors, and network interference; and deciding when experiments are appropriate versus alternative evaluation methods. Senior candidates should reason about trade-offs between speed and statistical rigor, plan safe rollouts and ramping, define rollback plans, and communicate uncertainty and business implications to technical and non-technical stakeholders. For developer-facing products, candidates should also consider constraints such as small populations, cross-team effects, ethical concerns, and special instrumentation needs.
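The sample-size and minimum-detectable-effect reasoning above can be sketched in a few lines. This is a minimal sketch assuming a two-sided, two-proportion z-test under the usual normal approximation; the function name is illustrative:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(baseline_rate, mde, alpha=0.05, power=0.80):
    """Users needed per variant for a two-sided two-proportion z-test.

    baseline_rate: control conversion rate (e.g. 0.10)
    mde: minimum detectable effect as an absolute lift (e.g. 0.02)
    """
    p1, p2 = baseline_rate, baseline_rate + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)
```

Dividing twice this number by expected daily eligible traffic gives a first-cut test duration, which is typically rounded up to whole weeks to absorb day-of-week seasonality.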

Easy · Technical
Compare simple randomization and stratified randomization for assigning users to experiment variants. For a feature with known demographic differences (e.g., mobile vs desktop, region), recommend an assignment strategy, explain how you'd implement stratification practically, and discuss trade-offs such as complexity, variance reduction, and operational risk.
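One practical implementation pattern for the assignment part of this question (a sketch, not the only approach): deterministic hash-based bucketing, salted per experiment and per stratum, so assignment is reproducible without storing state. For small strata, block randomization inside each stratum is the common alternative. Names here are illustrative:

```python
import hashlib

def assign_variant(user_id, stratum, experiment,
                   variants=("control", "treatment")):
    """Deterministically bucket a user within their stratum.

    Salting the hash with both experiment and stratum keeps a user's
    assignment stable over time and independent across experiments
    and strata, with no assignment table to maintain.
    """
    key = f"{experiment}:{stratum}:{user_id}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return variants[digest % len(variants)]
```

With large strata the hash yields an approximately balanced split inside each stratum; the trade-off versus true block randomization is a small residual imbalance in exchange for statelessness and operational simplicity.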
Medium · Technical
You're rolling out a personalization algorithm across countries with different baselines and seasonality. Describe segmentation strategy, how you would choose test duration and holiday exclusions, whether to pool results or run per-country tests, and how to balance power with interpretability for global stakeholders.
Easy · Technical
In the context of product experiments, explain type I and type II errors, their business consequences (give one example each), and one practical approach a PM can use to reduce each error type. Discuss the trade-offs of reducing one error type with respect to the other.
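To make the type I error concrete, a small A/A simulation helps: with no true difference between arms, roughly alpha of the tests will still come back "significant" — that fraction is the type I error rate. A sketch under assumed parameters (function name and defaults are illustrative):

```python
import random
from statistics import NormalDist

def aa_false_positive_rate(n_sims=2000, n=1000, p=0.10, alpha=0.05, seed=7):
    """Run many A/A tests (no true effect) and count 'significant' results.

    Uses a two-sided pooled two-proportion z-test; the returned fraction
    of false positives should hover near alpha.
    """
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sims):
        a = sum(rng.random() < p for _ in range(n))  # conversions, arm A
        b = sum(rng.random() < p for _ in range(n))  # conversions, arm B
        p_pool = (a + b) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        if se > 0 and abs(a / n - b / n) / se > z_crit:
            hits += 1
    return hits / n_sims
```

The same harness makes peeking visible: checking the z-statistic repeatedly during each simulated test and stopping at the first significant result inflates this rate well above alpha.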
Easy · Technical
Describe criteria you would use to decide whether to run an A/B test versus doing an observational study, user interviews, or a pilot rollout. Provide two concrete product scenarios where an A/B test would be inappropriate, and recommend the alternative evaluation method for each scenario with a brief justification.
Medium · Technical
An experiment shows an early +8% lift in engagement for Variant B during week 1 but after two weeks the effect fades and the final result is flat. As the PM, describe the diagnostic steps you would run: data integrity checks, segment analyses, novelty effect checks, day-of-week seasonality, sample size/power review, and whether to extend the test or retire the variant.
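One of the first data-integrity checks worth codifying for a scenario like this is a sample-ratio-mismatch (SRM) test: do the observed assignment counts match the intended split? A minimal sketch using a chi-square goodness-of-fit test with one degree of freedom (function name is illustrative):

```python
import math

def srm_check(n_control, n_treatment, expected_ratio=0.5):
    """Sample-ratio-mismatch check via chi-square goodness of fit (1 df).

    A tiny p-value means the observed counts are inconsistent with the
    intended split -- a sign of broken randomization or lost events,
    and a reason to distrust any measured lift.
    """
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total - exp_c
    chi2 = (n_control - exp_c) ** 2 / exp_c + (n_treatment - exp_t) ** 2 / exp_t
    # For 1 degree of freedom the chi-square survival function reduces
    # to erfc(sqrt(x / 2)), so no external stats library is needed.
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value
```

For example, `srm_check(50500, 49500)` on an intended 50/50 split returns a p-value well below 0.01, flagging a broken assignment or logging pipeline before any engagement lift is interpreted.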
