InterviewStack.io

A/B Test Design Questions

Designing and running A/B tests (split tests) to evaluate product and feature changes. Candidates should be able to form clear null and alternative hypotheses; select appropriate primary and guardrail metrics that reflect both product goals and user safety; choose randomization and assignment strategies; and calculate sample size and test duration using power analysis and minimum-detectable-effect reasoning. They should understand applied statistical concepts including p-values, confidence intervals, one-tailed and two-tailed tests, sequential monitoring and stopping rules, and corrections for multiple comparisons. Practical abilities include diagnosing inconclusive or noisy experiments; detecting and mitigating common biases such as peeking, selection bias, novelty effects, seasonality, instrumentation errors, and network interference; and deciding when experiments are appropriate versus alternative evaluation methods. Senior candidates should reason about trade-offs between speed and statistical rigor, plan safe rollouts and ramping, define rollback plans, and communicate uncertainty and business implications to technical and non-technical stakeholders. For developer-facing products, candidates should also consider constraints such as small populations, cross-team effects, ethical concerns, and special instrumentation needs.
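The sample-size and minimum-detectable-effect reasoning above can be sketched with the standard two-proportion normal approximation. All numbers below (baseline rate, MDE, alpha, power) are illustrative, not from the question bank:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(baseline_rate, mde_abs, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sided two-proportion z-test.

    baseline_rate: control conversion rate (e.g. 0.10)
    mde_abs: minimum detectable effect as an absolute lift (e.g. 0.01)
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    p1 = baseline_rate
    p2 = baseline_rate + mde_abs
    p_bar = (p1 + p2) / 2
    # Standard normal-approximation formula for two proportions
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde_abs ** 2
    return math.ceil(n)

# Detecting a 1-point absolute lift on a 10% baseline needs on the order of
# fifteen thousand users per arm; halving the MDE roughly quadruples that.
n_needed = sample_size_per_arm(0.10, 0.01)
```

Test duration then follows from dividing the required sample by eligible daily traffic, plus buffer for weekly seasonality.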

Easy · Technical
In the context of product experiments, explain type I and type II errors, their business consequences (give one example each), and one practical approach a PM can use to reduce each error type. Discuss the trade-offs of reducing one error type with respect to the other.
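One concrete way to ground the type I error discussion is an A/A simulation: with no true difference between arms, roughly alpha of runs should still come out "significant". A minimal sketch using a pooled two-proportion z-test (all rates and sizes illustrative):

```python
import random
from statistics import NormalDist

def two_prop_z_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math_sqrt = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(0)
trials, false_positives = 2000, 0
for _ in range(trials):
    # Both "arms" draw from the same 10% conversion rate: any significant
    # result is by definition a type I error.
    a = sum(random.random() < 0.10 for _ in range(1000))
    b = sum(random.random() < 0.10 for _ in range(1000))
    if two_prop_z_pvalue(a, 1000, b, 1000) < 0.05:
        false_positives += 1
false_positive_rate = false_positives / trials  # should hover near alpha = 0.05
```

Lowering alpha reduces these false positives but, at fixed sample size, raises the type II error rate, which is the trade-off the question asks candidates to articulate.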
Easy · Technical
Explain the difference between statistical significance and practical significance with a product example where a 0.5% lift is statistically significant due to a very large sample size but not business-meaningful. How would you present this distinction and recommended next steps to the executive team?
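The scenario in this question is easy to demonstrate numerically: with tens of millions of users per arm, even a 0.5% relative lift clears significance comfortably. The figures below are hypothetical:

```python
from statistics import NormalDist

# Hypothetical experiment: 10M users per arm, 10.00% vs 10.05% conversion
# (a 0.5% relative lift).
n = 10_000_000
p_c, p_t = 0.1000, 0.1005
se = (p_c * (1 - p_c) / n + p_t * (1 - p_t) / n) ** 0.5
z = (p_t - p_c) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
# Statistically significant (p < 0.001), yet the absolute lift is 0.05
# percentage points -- the executive conversation should center on whether
# that effect size justifies the cost of shipping and maintaining the change.
```

Presenting the confidence interval in business units (incremental conversions or revenue) rather than the p-value is one way to keep the discussion on practical significance.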
Hard · Technical
You manage a developer-facing product used by small teams. A proposed feature could expose sensitive logs in a way that might raise ethical and privacy concerns. Design a safe experiment plan that addresses small population size, cross-team interference, specialized instrumentation needs (traces/log sampling), privacy/ethics checks, and concrete rollback triggers. Include trade-offs you must accept.
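The "concrete rollback triggers" part of a strong answer can be made mechanical: declare guardrail regression thresholds up front and roll back automatically when any is breached. A minimal sketch; the metric names and thresholds are purely illustrative:

```python
def should_rollback(observed_deltas, guardrails):
    """Return the guardrail breaches that should trigger rollback.

    observed_deltas: treatment-minus-control deltas, e.g. {"error_rate": 0.004}
    guardrails: maximum tolerated regression per metric, e.g. {"error_rate": 0.002}
    A non-empty result means: stop the ramp and revert.
    """
    return {metric: delta for metric, delta in observed_deltas.items()
            if metric in guardrails and delta > guardrails[metric]}

# Illustrative check: error rate regressed past its threshold, latency did not.
breaches = should_rollback(
    {"error_rate": 0.004, "latency_p95_ms": 5.0},
    {"error_rate": 0.002, "latency_p95_ms": 20.0},
)
```

For a small, developer-facing population, pre-registering these triggers also mitigates the temptation to peek and rationalize after the fact.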
Easy · Technical
List common biases and artifacts that can make experiments noisy or misleading (examples: peeking/early stopping, novelty effects, seasonality, selection bias, instrumentation drift). For each listed bias, give one practical detection technique and one mitigation strategy a PM would use before or during the experiment.
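One widely used detection technique worth knowing for this question is the sample ratio mismatch (SRM) check: if the observed control/treatment split deviates significantly from the intended assignment ratio, something upstream (assignment, logging, bot filtering) is biased. A sketch using a one-degree-of-freedom chi-square test, computed via the standard normal (since a chi-square with 1 df is a squared standard normal):

```python
from statistics import NormalDist

def srm_pvalue(n_control, n_treatment, expected_ratio=0.5):
    """Chi-square (1 df) test that the observed split matches the
    intended assignment ratio. A tiny p-value signals an SRM."""
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    # For 1 degree of freedom: P(X > chi2) = 2 * (1 - Phi(sqrt(chi2)))
    return 2 * (1 - NormalDist().cdf(chi2 ** 0.5))

# A 50,000 vs 51,000 split under an intended 50/50 assignment is a red flag;
# 50,000 vs 50,100 is within normal variation.
```

Mitigation is typically to pause the experiment and audit assignment and logging rather than analyze through the mismatch.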
Medium · Technical
During an experiment you discover that conversion events dropped by ~30% in one region due to a JavaScript release. As the PM, outline immediate remediation actions (technical and communications), how you'd decide whether to abort the experiment or salvage the data, and the steps you'd take to rebuild confidence in experiment instrumentation going forward.
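Part of "rebuilding confidence in instrumentation" is automating the detection that this question assumes happened by luck: comparing per-region event volume against a trailing baseline. A crude health-check sketch (region names and thresholds are illustrative):

```python
def flag_event_drops(baseline_counts, current_counts, drop_threshold=0.2):
    """Flag regions whose event volume fell more than drop_threshold
    versus a trailing baseline -- a simple instrumentation health check.

    baseline_counts / current_counts: {region: event count} for comparable
    time windows. Returns {region: fractional drop} for breaching regions.
    """
    alerts = {}
    for region, baseline in baseline_counts.items():
        current = current_counts.get(region, 0)
        if baseline > 0:
            drop = 1 - current / baseline
            if drop > drop_threshold:
                alerts[region] = round(drop, 3)
    return alerts

# Illustrative: EU volume fell 30% versus baseline, US is within tolerance.
alerts = flag_event_drops({"us": 1000, "eu": 1000}, {"us": 980, "eu": 700})
```

Wiring such a check into release and experiment dashboards turns a post-hoc discovery into an automatic abort-or-investigate signal.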
