InterviewStack.io

A/B Test Design Questions

Designing and running A/B tests (split tests) to evaluate product and feature changes. Candidates should be able to form clear null and alternative hypotheses; select appropriate primary and guardrail metrics that reflect both product goals and user safety; choose randomization and assignment strategies; and calculate sample size and test duration using power analysis and minimum-detectable-effect reasoning. They should understand applied statistical analysis concepts including p-values, confidence intervals, one-tailed and two-tailed tests, sequential monitoring and stopping rules, and corrections for multiple comparisons. Practical abilities include diagnosing inconclusive or noisy experiments; detecting and mitigating common biases such as peeking, selection bias, novelty effects, seasonality, instrumentation errors, and network interference; and deciding when experiments are appropriate versus alternative evaluation methods. Senior candidates should reason about trade-offs between speed and statistical rigor, plan safe rollouts and ramping, define rollback plans, and communicate uncertainty and business implications to technical and non-technical stakeholders. For developer-facing products, candidates should also consider constraints such as small populations, cross-team effects, ethical concerns, and special instrumentation needs.
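For the power-analysis portion of the description above, a minimal Python sketch of the standard normal-approximation sample-size formula for a two-proportion test may help; the function name and default parameters are illustrative, not taken from any particular library:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, mde_abs, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided two-proportion z-test
    detecting an absolute lift of mde_abs over baseline rate p_base."""
    p_treat = p_base + mde_abs
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde_abs ** 2)

# Detecting a 1-point absolute lift on a 10% baseline at alpha=0.05, power=0.8:
n = sample_size_per_arm(0.10, 0.01)
```

Note how quickly the required sample size falls as the minimum detectable effect grows: halving the MDE roughly quadruples the traffic needed.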

Easy · Technical
Compare simple (Bernoulli) randomization and stratified randomization for running an A/B test. Describe an example where you would stratify by country or device, how stratification changes variance or power, and practical implementation considerations in a production assignment service.
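To make the contrast concrete, here is a small sketch of within-stratum randomization; a production assignment service would typically use deterministic hashing rather than this batch shuffle, so treat the code as illustrative only:

```python
import random
from collections import defaultdict

def stratified_assign(users, strata, seed=42):
    """Randomize separately within each stratum (e.g. country or device)
    so treatment and control are exactly balanced on the stratification key,
    removing between-stratum variance from the treatment comparison."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for user, stratum in zip(users, strata):
        by_stratum[stratum].append(user)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)  # random order within the stratum
        for i, user in enumerate(members):
            assignment[user] = "treatment" if i % 2 == 0 else "control"
    return assignment

# Hypothetical population: 60 US users and 40 DE users
users = [f"u{i}" for i in range(100)]
strata = ["US" if i < 60 else "DE" for i in range(100)]
assignment = stratified_assign(users, strata)
```

Under simple Bernoulli randomization the country mix can differ between arms by chance; stratification forces the mix to match, which reduces variance of the estimated effect when the stratification key predicts the metric.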
Hard · System Design
Multiple teams run experiments concurrently. Propose technical and policy controls to minimize cross-experiment interference and allow valid inference: central experiment registry, traffic caps, layer-based assignment, experiment overlap constraints, and orthogonalization strategies. Discuss the impact of these controls on power and how to communicate trade-offs to product teams.
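One common building block for layer-based assignment is salted hashing: each layer gets its own salt, so a user's bucket in one layer is effectively independent of their bucket in another. The sketch below uses a hypothetical experiment schema (the `layer`/`arms` dictionary shape is invented for illustration):

```python
import hashlib

def bucket(user_id: str, layer_salt: str, n_buckets: int = 1000) -> int:
    """Deterministic bucket in [0, n_buckets); distinct layer salts make
    bucket assignments across layers effectively independent."""
    digest = hashlib.sha256(f"{layer_salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def assign(user_id, experiment):
    """experiment = {'layer': salt, 'arms': {name: (lo, hi)}} — hypothetical schema.
    Each experiment owns disjoint bucket ranges within its layer, which is
    how a registry enforces traffic caps and overlap constraints."""
    b = bucket(user_id, experiment["layer"])
    for arm, (lo, hi) in experiment["arms"].items():
        if lo <= b < hi:
            return arm
    return None  # user is outside this experiment's traffic cap

# An experiment taking 20% of its layer's traffic, split evenly across two arms:
exp = {"layer": "checkout-layer", "arms": {"control": (0, 100), "treatment": (100, 200)}}
arm = assign("user-123", exp)
```

Because experiments in the same layer occupy disjoint bucket ranges, no user sees two of them; experiments in different layers overlap, but the independent hashing keeps their assignments orthogonal in expectation.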
Easy · Technical
List the primary statistical assumptions underlying common A/B test analyses (two-sample t-test, z-test for proportions, chi-square tests). For each assumption describe a concrete check an analyst can run in BI (query, visualization, or statistical test) and an alternative analysis strategy if the assumption is violated.
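As a reference for the proportions case, here is a minimal pooled two-proportion z-test in plain Python; the usual caveat (hedged here, not asserted as the only valid check) is that the normal approximation needs enough successes and failures in each arm:

```python
import math
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided pooled z-test for a difference in conversion rates.
    The normal approximation is reasonable when successes and failures
    in each arm are all at least ~5-10; otherwise use an exact test."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 5.0% vs 6.0% conversion on 10k users per arm:
z, p = two_proportion_ztest(500, 10000, 600, 10000)
```

An analyst can sanity-check the approximation directly in BI by querying per-arm success/failure counts before trusting the p-value.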
Easy · Technical
Explain the purpose of an A/A test and what you expect to observe. Suppose you ran an A/A test and observed a 7% difference in conversion with p=0.04. List possible explanations for this surprising result and outline next steps you would take as the BI analyst.
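A quick way to build intuition for what a healthy A/A pipeline looks like is simulation: if assignment and analysis are correct, roughly alpha of A/A runs will be "significant" purely by chance. A sketch (simulation sizes chosen arbitrarily for speed):

```python
import math
import random
from statistics import NormalDist

def aa_false_positive_rate(n_sims=2000, n=1000, p=0.10, alpha=0.05, seed=7):
    """Simulate A/A tests where both arms draw from the same Bernoulli(p).
    With a correct pipeline, about alpha of runs reject the null by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        x1 = sum(rng.random() < p for _ in range(n))
        x2 = sum(rng.random() < p for _ in range(n))
        p_pool = (x1 + x2) / (2 * n)
        se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
        if se == 0:
            continue  # degenerate draw: all zeros or all ones
        z = (x1 - x2) / (n * se)
        if 2 * (1 - NormalDist().cdf(abs(z))) < alpha:
            hits += 1
    return hits / n_sims

rate = aa_false_positive_rate()
```

A single significant A/A result is therefore not alarming on its own; a false-positive rate well above alpha across many A/A runs points at SRM, instrumentation, or assignment bugs.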
Medium · Technical
An experiment that product expected to increase purchase conversion is inconclusive. Provide a concrete diagnostic checklist for the BI analyst to determine whether inconclusiveness is due to: insufficient power, high variance, instrumentation errors, contamination from other experiments, novelty effects, or genuine null effect. For each item, name specific data queries or visualizations you'd run.
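For the insufficient-power branch of that checklist, a useful first computation is the minimum detectable effect achievable with the traffic actually collected; if it dwarfs the lift the team expected, the test was underpowered. A minimal sketch, assuming equal arms and a normal approximation:

```python
import math
from statistics import NormalDist

def minimum_detectable_effect(p_base, n_per_arm, alpha=0.05, power=0.80):
    """Smallest absolute lift a two-sided two-proportion test could reliably
    detect given the samples actually collected (normal approx, equal arms)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * math.sqrt(2 * p_base * (1 - p_base) / n_per_arm)

# 10% baseline conversion, 10k users per arm actually collected:
mde = minimum_detectable_effect(0.10, 10000)
```

If the product team hoped for, say, a 0.5-point lift but the achievable MDE is more than twice that, "inconclusive" is the expected outcome regardless of whether the effect is real.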
