Privacy-Preserving Experiment Design Questions
Techniques and considerations for designing experiments and data collection strategies that protect privacy. Covers methods such as differential privacy, secure aggregation, federated learning, synthetic data, data minimization, consent management, de-identification, and privacy risk assessment, with emphasis on maintaining data utility and regulatory compliance while enabling robust experimentation.
Hard · System Design
Propose an algorithm and implementation plan for large-scale hyperparameter search under a strict per-user privacy budget. Consider approaches such as aggregator-level privatized validation, public proxy datasets, transfer learning to reduce tuning cost, multi-fidelity search, and budget-aware early stopping. Explain how you will account for privacy spend per tuning run, and how orchestration can minimize total epsilon consumption across many models.
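One possible sketch of the budget accounting under naive sequential composition (the names `dp_tune` and `noisy_score` are illustrative, not from any library; real systems would use tighter composition theorems and a shared accountant):

```python
import math


def dp_tune(candidates, noisy_score, total_eps, per_eval_eps):
    """Budget-aware search: each candidate evaluation queries a privatized
    validation score and spends per_eval_eps. Cumulative spend is tracked
    via naive sequential composition (sum of per-query epsilons), and the
    search stops early once the next query would exceed total_eps."""
    spent, best, best_score = 0.0, None, -math.inf
    for cfg in candidates:
        if spent + per_eval_eps > total_eps:
            break  # budget-aware early stopping
        score = noisy_score(cfg, per_eval_eps)  # privatized validation query
        spent += per_eval_eps
        if score > best_score:
            best, best_score = cfg, score
    return best, spent
```

In a real deployment `noisy_score` would add calibrated noise at the aggregator; here it is injected so the orchestration logic stays testable.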
Easy · Technical
Explain differential privacy in the context of machine learning experiments. Define epsilon and delta, give an intuitive interpretation for a product manager, and describe how changing epsilon impacts privacy and model utility. Provide a concise example showing two neighboring datasets and how DP bounds the difference in outputs to motivate epsilon choices for experiments.
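A minimal illustration of the neighboring-datasets idea, assuming a Laplace mechanism on a counting query (the function names are ours): a count has sensitivity 1 because adding or removing one record changes it by at most 1, so Laplace noise of scale 1/epsilon makes the output distributions on the two datasets epsilon-close.

```python
import math
import random


def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def private_count(records, predicate, epsilon):
    """Counting query with sensitivity 1: Laplace noise of scale
    1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Neighboring datasets: D2 is D1 plus one extra user.
D1 = [{"age": 25}, {"age": 41}, {"age": 33}]
D2 = D1 + [{"age": 29}]
# True counts of age < 35 are 2 and 3: they differ by exactly 1,
# which is what the noise scale must mask.
```

Smaller epsilon means larger noise (stronger privacy, lower utility); larger epsilon shrinks the noise toward the true count.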
Medium · Technical
In Python, implement a simple Rényi Differential Privacy (RDP) accountant that computes the cumulative RDP for T compositions of a Gaussian mechanism without subsampling, and then converts the RDP to an (epsilon, delta) pair for a target delta. Function signature: compute_epsilon_rdp(sigma, steps, delta). Explain numeric stability choices and validate on a small example.
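One possible sketch of the requested accountant (the Rényi-order grid and the classic Mironov-style conversion are our choices; tighter conversions exist). The RDP of a Gaussian mechanism with noise multiplier sigma at order alpha is alpha / (2 * sigma**2), composition adds linearly, and the linear form is numerically stable, so the only care needed is a sufficiently dense alpha grid:

```python
import math


def compute_epsilon_rdp(sigma, steps, delta, orders=None):
    """RDP accountant for T compositions of an unsubsampled Gaussian
    mechanism. Single-step RDP at order alpha is alpha / (2 * sigma**2);
    T-fold composition multiplies it by T. The RDP-to-(epsilon, delta)
    conversion eps = rdp + log(1/delta) / (alpha - 1) is minimized over
    a grid of orders."""
    if orders is None:
        # Dense grid of Renyi orders > 1; denser grids tighten the bound.
        orders = [1 + x / 10.0 for x in range(1, 100)] + list(range(11, 64))
    best = float("inf")
    for alpha in orders:
        rdp = steps * alpha / (2 * sigma ** 2)
        eps = rdp + math.log(1 / delta) / (alpha - 1)
        best = min(best, eps)
    return best
```

For sigma = 1 and a single step at delta = 1e-5 the optimum lands near alpha ≈ 5.8, giving epsilon ≈ 5.3; more steps or smaller sigma monotonically increase epsilon.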
Medium · System Design
Design a privacy budget management system for a company where multiple product teams run experiments on shared user populations. The system must enforce a per-user epsilon cap per rolling window, allow teams to request and reserve budgets, provide immutable audit logs, and prevent accidental overspend. Describe the APIs, enforcement mechanisms, user-facing controls, and conflict-resolution policies between competing experiments.
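A toy sketch of the enforcement core (the class and method names are hypothetical; a real system would persist the ledger and audit log transactionally and key ledgers per user population):

```python
class BudgetLedger:
    """Per-population epsilon ledger with reserve/commit semantics and an
    append-only audit trail. Invariant: committed spend plus all outstanding
    reservations never exceeds the cap, so overspend is impossible."""

    def __init__(self, epsilon_cap):
        self.cap = epsilon_cap
        self.committed = 0.0
        self.reserved = {}   # reservation_id -> reserved epsilon
        self.audit_log = []  # append-only record of every operation

    def _outstanding(self):
        return self.committed + sum(self.reserved.values())

    def reserve(self, reservation_id, eps):
        if self._outstanding() + eps > self.cap:
            self.audit_log.append(("deny", reservation_id, eps))
            raise ValueError("reservation would exceed epsilon cap")
        self.reserved[reservation_id] = eps
        self.audit_log.append(("reserve", reservation_id, eps))

    def commit(self, reservation_id, spent_eps):
        reserved = self.reserved.pop(reservation_id)
        # An experiment may spend at most what it reserved.
        self.committed += min(spent_eps, reserved)
        self.audit_log.append(("commit", reservation_id, spent_eps))
```

Committing less than the reservation releases the remainder back to the pool, which is one simple answer to the conflict-resolution question: denied teams retry once earlier reservations settle.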
Medium · Technical
Describe step-by-step how to integrate DP-SGD into an existing PyTorch training pipeline using Opacus (or equivalent). Include modifications to the dataloader for privacy sampling, how Opacus handles per-sample gradients and clipping, tuning noise multiplier and clipping thresholds, and how to report and monitor cumulative epsilon during training.
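Before wiring in a library, it helps to see the aggregation step Opacus automates; a minimal NumPy sketch of the DP-SGD core (the function name and shapes are ours, not the Opacus API): clip each per-sample gradient to the threshold, sum, add Gaussian noise scaled by noise_multiplier times the clipping norm, then average.

```python
import numpy as np


def dp_sgd_grad(per_example_grads, max_grad_norm, noise_multiplier, rng):
    """Core DP-SGD aggregation: clip each per-example gradient to
    max_grad_norm in L2 norm, sum the clipped gradients, add Gaussian
    noise with std noise_multiplier * max_grad_norm, and average."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the threshold.
        clipped.append(g * min(1.0, max_grad_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * max_grad_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

In an Opacus pipeline this is what happens inside the wrapped optimizer's step; the noise multiplier and clipping threshold here are exactly the two knobs the question asks you to tune, and the cumulative epsilon is a function of the multiplier, sampling rate, and step count.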