InterviewStack.io

A/B Test Design Questions

Designing and running A/B (split) tests to evaluate product and feature changes. Candidates should be able to form clear null and alternative hypotheses; select appropriate primary and guardrail metrics that reflect both product goals and user safety; choose randomization and assignment strategies; and calculate sample size and test duration using power analysis and minimum-detectable-effect reasoning. They should understand applied statistical analysis concepts including p-values, confidence intervals, one-tailed and two-tailed tests, sequential monitoring and stopping rules, and corrections for multiple comparisons. Practical abilities include diagnosing inconclusive or noisy experiments; detecting and mitigating common biases such as peeking, selection bias, novelty effects, seasonality, instrumentation errors, and network interference; and deciding when experiments are appropriate versus alternative evaluation methods. Senior candidates should reason about trade-offs between speed and statistical rigor, plan safe rollouts and ramping, define rollback plans, and communicate uncertainty and business implications to technical and non-technical stakeholders. For developer-facing products, candidates should also consider constraints such as small populations, cross-team effects, ethical concerns, and special instrumentation needs.
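
Since the description leans on power analysis and minimum-detectable-effect reasoning, here is a minimal sketch of a sample-size and duration calculation for a two-proportion test using statsmodels. The baseline rate, lift, and traffic figures are illustrative assumptions, not values from this page.

```python
# Sample size / duration sketch for a two-proportion A/B test.
# Baseline rate, MDE, and daily traffic below are illustrative assumptions.
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.10   # assumed control conversion rate
mde = 0.01        # minimum detectable absolute lift (10% -> 11%)
alpha = 0.05      # two-sided significance level
power = 0.80      # 1 - beta

# Cohen's h effect size for the two proportions.
effect_size = proportion_effectsize(baseline + mde, baseline)

# Required sample size per arm for a two-sided z-test.
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power,
    ratio=1.0, alternative="two-sided",
)

daily_visitors_per_arm = 5_000   # assumed traffic, split evenly across arms
days = math.ceil(n_per_arm / daily_visitors_per_arm)
print(f"~{n_per_arm:,.0f} users per arm, ~{days} days at assumed traffic")
```
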

Easy · Technical
List the primary statistical assumptions underlying common A/B test analyses (two-sample t-test, z-test for proportions, chi-square tests). For each assumption, describe a concrete check an analyst can run in a BI tool (query, visualization, or statistical test) and an alternative analysis strategy if the assumption is violated.
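
For reference, a minimal sketch of the kinds of checks the question asks for, using scipy on synthetic stand-in data; in practice the DataFrame would be a BI extract.

```python
# Minimal assumption checks for a two-sample comparison.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["control"] * 1000 + ["treatment"] * 1000,
    "metric": np.concatenate([rng.exponential(1.0, 1000),
                              rng.exponential(1.1, 1000)]),
})
a = df.loc[df["group"] == "control", "metric"].to_numpy()
b = df.loc[df["group"] == "treatment", "metric"].to_numpy()

# Independence / no double-assignment is a design property: check the
# assignment logs for users appearing in both arms rather than a test here.

# Normality of the sampling distribution: with large n the CLT usually
# suffices; for small n, inspect skew before trusting a t-test.
print("skew:", stats.skew(a), stats.skew(b))

# Equal variances (assumed by the pooled t-test): Levene's test.
print("Levene p-value:", stats.levene(a, b).pvalue)

# If variances differ, Welch's t-test is one fallback; if the metric is
# heavily skewed, a rank-based test is another.
print(stats.ttest_ind(a, b, equal_var=False))
print(stats.mannwhitneyu(a, b, alternative="two-sided"))

# Chi-square validity: expected cell counts should all be >= ~5;
# otherwise switch to Fisher's exact test. Counts here are illustrative.
table = np.array([[120, 880], [150, 850]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print("min expected cell count:", expected.min())
```
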
Easy · Technical
You must build a Tableau (or Power BI) dashboard that lets product managers monitor live experiments. Describe six essential elements or panels this dashboard should include (e.g., effect size, CI, sample size over time), explain the purpose of each, and name two pitfalls to avoid in dashboard design that commonly mislead business stakeholders.
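
The dashboard itself is assembled in the BI tool, but its panels need correctly computed inputs. Below is a minimal sketch, on synthetic data, of one panel's backing computation: cumulative sample size, effect size (difference in conversion rates), and a naive 95% Wald confidence interval over time.

```python
# Backing computation for "effect size and CI over time" panels.
# The daily extract is synthetic; real data would come from the warehouse.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2024-01-01", periods=14)
daily = pd.DataFrame({
    "day": np.repeat(days, 2),
    "group": ["control", "treatment"] * len(days),
    "users": 2000,
})
daily["conversions"] = rng.binomial(
    daily["users"], np.where(daily["group"] == "treatment", 0.105, 0.100))

# Cumulative counts per arm, then diff-in-proportions with a Wald 95% CI.
cum = daily.groupby("group")[["users", "conversions"]].cumsum()
cum["day"], cum["group"] = daily["day"], daily["group"]
wide = cum.pivot(index="day", columns="group")
p_c = wide[("conversions", "control")] / wide[("users", "control")]
p_t = wide[("conversions", "treatment")] / wide[("users", "treatment")]
diff = p_t - p_c
se = np.sqrt(p_c * (1 - p_c) / wide[("users", "control")]
             + p_t * (1 - p_t) / wide[("users", "treatment")])

# Caveat for the dashboard: a naive CI recomputed daily invites peeking;
# label it as descriptive or switch to a sequential/always-valid interval.
out = pd.DataFrame({"diff": diff, "lo": diff - 1.96 * se, "hi": diff + 1.96 * se})
print(out.round(4))
```
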
Hard · Technical
List ethical issues that can arise from running product experiments (privacy, deception, fairness, addiction, harm) and design a governance policy for experimentation. The policy should include approval flows, pre-registration, sensitive-experiment review board, data minimization rules, consent requirements, and examples of experiments that should be blocked or escalated.
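
As a concrete anchor for the policy-design part, here is a hypothetical sketch of how a pre-registration record and an escalation rule might be encoded. Every field name and category here is an illustrative assumption, not an established review-board schema.

```python
# Hypothetical pre-registration record and escalation check.
from dataclasses import dataclass, field

SENSITIVE_CATEGORIES = {"pricing", "minors", "health", "dark_patterns"}

@dataclass
class PreRegistration:
    experiment_id: str
    hypothesis: str
    primary_metric: str
    guardrail_metrics: list[str]
    data_collected: list[str]               # input to data-minimization review
    categories: set[str] = field(default_factory=set)
    consent_required: bool = False

def needs_review_board(reg: PreRegistration) -> bool:
    # Escalate anything touching a sensitive category or requiring consent.
    return bool(reg.categories & SENSITIVE_CATEGORIES) or reg.consent_required

reg = PreRegistration(
    experiment_id="exp-001",
    hypothesis="Deferring scripts increases checkout conversion",
    primary_metric="checkout_conversion",
    guardrail_metrics=["error_rate", "p75_load_time"],
    data_collected=["page_timings", "conversion_events"],
    categories={"performance"},
)
print("escalate:", needs_review_board(reg))
```
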
Easy · Technical
You will run an experiment that defers non-critical scripts to speed page load. Identify four guardrail metrics you would monitor other than conversion, justify why each matters, and propose an actionable threshold or rule for when the experiment should be paused for investigation.
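
By way of illustration, a minimal sketch of how such pause rules might be automated. The metric names and thresholds are assumptions chosen for a script-deferral experiment; a production version would also account for sampling noise, e.g., only firing when a confidence interval excludes the threshold.

```python
# Illustrative guardrail thresholds for a script-deferral experiment.
GUARDRAILS = {
    # metric: (direction of harm, max allowed relative change vs control)
    "js_error_rate":            ("increase", 0.20),  # deferred deps breaking
    "p75_time_to_interactive":  ("increase", 0.10),
    "bounce_rate":              ("increase", 0.05),
    "pages_per_session":        ("decrease", 0.05),
}

def should_pause(control: dict, treatment: dict) -> list[str]:
    """Return the guardrails breached badly enough to pause the test."""
    breaches = []
    for metric, (direction, limit) in GUARDRAILS.items():
        rel = (treatment[metric] - control[metric]) / control[metric]
        if direction == "increase" and rel > limit:
            breaches.append(metric)
        if direction == "decrease" and rel < -limit:
            breaches.append(metric)
    return breaches

control = {"js_error_rate": 0.010, "p75_time_to_interactive": 3.2,
           "bounce_rate": 0.40, "pages_per_session": 4.1}
treatment = {"js_error_rate": 0.013, "p75_time_to_interactive": 3.3,
             "bounce_rate": 0.41, "pages_per_session": 4.0}
print(should_pause(control, treatment))   # ['js_error_rate']
```
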
Medium · Technical
Design a canary and ramp plan for a feature that reduces page load time but may break older browsers. Propose staged rollout percentages and durations, list metrics to check at each stage, and define automated stop/rollback rules. Explain your reasoning for choosing the percentage steps and minimum observation windows.
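
One possible shape for an answer, sketched as code; the stage percentages, observation windows, and thresholds below are illustrative assumptions, not a prescribed plan.

```python
# A minimal sketch of a staged ramp with automated stop rules.
from dataclasses import dataclass

@dataclass
class Stage:
    traffic_pct: float
    min_hours: int          # minimum observation window before ramping up

# Small early steps bound the blast radius; longer windows early on give
# rare breakage (e.g., older browsers) time to surface before wide exposure.
RAMP = [Stage(1, 24), Stage(5, 24), Stage(25, 48), Stage(50, 48), Stage(100, 0)]

STOP_RULES = {
    "js_error_rate_rel_increase": 0.20,        # relative to control
    "p75_load_time_rel_increase": 0.05,        # regression would defeat the point
    "old_browser_render_failure_rate": 0.01,   # absolute ceiling
}

def rollback_needed(metrics: dict) -> bool:
    """Automated rollback if any stop rule fires at the current stage."""
    return any(metrics[name] > limit for name, limit in STOP_RULES.items())

print(rollback_needed({"js_error_rate_rel_increase": 0.02,
                       "p75_load_time_rel_increase": 0.01,
                       "old_browser_render_failure_rate": 0.002}))  # False
```
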
