InterviewStack.io

Performance Engineering and Cost Optimization Questions

Engineering practices and trade-offs for meeting performance objectives while controlling operational cost. Topics include setting latency and throughput targets and latency budgets; benchmarking, profiling, and tuning across application, database, and infrastructure layers; memory, compute, serialization, and batching optimizations; asynchronous processing and workload shaping; capacity estimation and right-sizing of compute and storage to reduce cost; understanding cost drivers in cloud environments, including network egress and storage tiering; trade-offs between real-time and batch processing; and monitoring to detect and prevent performance regressions. Candidates should describe measurement-driven approaches to optimization and be able to justify trade-offs between cost, complexity, and user experience.

Medium · Technical
Your product uses a large language model for customer support. Inference cost per user request is high due to long contexts and many tokens generated. Propose engineering strategies to reduce cost: prompt caching, retrieval techniques to shorten context, model routing to smaller models for common queries, quantization, and UI changes to reduce token usage. Prioritize the strategies by likely ROI and explain how you'd validate each.
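A strong answer usually leads with caching and routing, since they need no model changes. A minimal sketch of those two strategies combined, where the model names, the routing heuristic, and `call_model` are all illustrative assumptions rather than any real provider API:

```python
import hashlib

# In-memory cache of normalized prompt -> response (a real system would
# use a shared store with TTLs, and cache only deterministic answers).
CACHE = {}

def normalize(prompt: str) -> str:
    # Collapse case and whitespace so near-identical prompts share a cache entry.
    return " ".join(prompt.lower().split())

def cache_key(prompt: str) -> str:
    return hashlib.sha256(normalize(prompt).encode()).hexdigest()

def route_model(prompt: str) -> str:
    # Toy routing heuristic: short prompts without code blocks go to the
    # cheaper model; everything else to the large one.
    if len(prompt) < 200 and "```" not in prompt:
        return "small-model"
    return "large-model"

def answer(prompt: str, call_model) -> tuple[str, bool]:
    """Return (response, cache_hit); call_model(model, prompt) runs inference."""
    key = cache_key(prompt)
    if key in CACHE:
        return CACHE[key], True
    model = route_model(prompt)
    response = call_model(model, prompt)
    CACHE[key] = response
    return response, False
```

Validation for both levers is the same shape: measure cache hit rate and routed-traffic share in production, and A/B test answer quality on routed queries before widening the heuristic.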
Hard · System Design
You operate distributed data-parallel training on 128 GPUs but observe significant GPU idle time due to input pipeline and synchronization overhead. Propose a set of solutions to increase utilization and lower cost: gradient accumulation, mixed precision, overlapping compute and communication, larger micro-batches, optimized collective communication (NCCL tuning), and data-local scheduling. Explain trade-offs and measurement methods to validate improvements.
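Of the listed levers, gradient accumulation is the easiest to reason about in an interview: it trades optimizer steps per update for memory, giving effective batch = micro-batch × accumulation steps × GPU count. A framework-agnostic toy sketch of the accumulation loop (the scalar "gradients" stand in for real tensors):

```python
def train_with_accumulation(micro_grads, accum_steps):
    """Average `accum_steps` micro-batch gradients before each optimizer step.

    micro_grads: iterable of per-micro-batch gradients (scalars here,
    tensors in a real framework). Returns the averaged gradient applied
    at each optimizer step.
    """
    updates = []
    buf, count = 0.0, 0
    for g in micro_grads:
        buf += g          # accumulate instead of stepping immediately
        count += 1
        if count == accum_steps:
            updates.append(buf / accum_steps)  # one optimizer step per window
            buf, count = 0.0, 0
    return updates
```

To validate any of these changes, the key measurements are GPU utilization and step-time breakdown (data loading vs compute vs all-reduce) from a profiler, plus throughput in samples/sec, compared before and after each change in isolation.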
Medium · Technical
Scenario: your organization stores many large daily dataset snapshots and model artifacts, causing monthly storage costs to balloon. Propose an automated storage tiering policy using rules based on age, access frequency, size, and project importance. Outline the migration tools or cloud lifecycle policies you would use, monitoring you would add, and guardrails to ensure reproducibility for experiments.
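The core of such a policy is a deterministic rule that maps an object's metadata to a tier; the cloud lifecycle engine then just executes it. A sketch with hypothetical thresholds and tier names (not any specific provider's storage classes), including a pin flag as the reproducibility guardrail:

```python
from datetime import date

def choose_tier(last_access: date, size_gb: float, pinned: bool, today: date) -> str:
    """Map an artifact to a storage tier. Thresholds are illustrative."""
    if pinned:
        # Guardrail: artifacts referenced by reproducible experiments
        # never leave the fast tier, regardless of age.
        return "standard"
    idle_days = (today - last_access).days
    if idle_days > 365:
        return "archive"            # cheapest, slow retrieval
    if idle_days > 90 or size_gb > 500:
        return "infrequent-access"  # cheaper storage, per-GB retrieval fee
    return "standard"
```

In practice this logic would be expressed as provider lifecycle rules where possible, with monitoring on per-tier spend and retrieval counts to catch objects that were demoted but are still hot.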
Medium · Technical
Compare pruning, knowledge distillation, and quantization as techniques to reduce model size. For each technique, explain expected impacts on inference latency, memory footprint, training/inference complexity, and accuracy. Provide guidance on which technique to try first given a strict latency target and limited engineering budget.
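Post-training quantization is usually the "try first" answer because it needs no retraining. A minimal pure-Python sketch of symmetric per-tensor int8 quantization, illustrating both the memory win (4x vs float32) and the rounding error it introduces:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: floats -> int8 plus a scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; the gap to the originals is the
    quantization error that must be checked against accuracy targets."""
    return [v * scale for v in q]
```

Pruning and distillation, by contrast, both require (re)training: structured pruning changes which weights exist, and distillation trains a smaller student against the original model's outputs, so they cost more engineering time but can reach sizes quantization alone cannot.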
Hard · System Design
Architect a global, multi-region model inference platform to handle peak 100k QPS with p99 latency <100ms and an explicit goal to minimize operational cost. Address autoscaling, model placement (regional replicas vs centralized), caching at the edge, reducing cross-region egress, model update rollout, canaries, and hardware choices (CPU vs GPU, instance types). Describe a method for capacity estimation and main cost trade-offs.
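A capacity-estimation method for this question is mostly arithmetic: measure per-replica throughput at the latency target, add headroom for failover and traffic spikes, then multiply out by regions and instance price. A back-of-envelope sketch where the throughput, headroom, and hourly rate are illustrative inputs, not benchmarks:

```python
import math

def replicas_needed(peak_qps: float, per_replica_qps: float,
                    headroom: float = 0.3) -> int:
    """Replicas to serve peak_qps while keeping `headroom` fraction of
    capacity spare (for failover and p99 protection)."""
    return math.ceil(peak_qps / (per_replica_qps * (1 - headroom)))

def monthly_cost(replicas: int, regions: int, hourly_rate: float) -> float:
    """Rough instance spend, assuming full regional replication
    and ~730 hours per month."""
    return replicas * regions * hourly_rate * 730
```

Running the numbers for the stated 100k QPS at an assumed 500 QPS per replica with 30% headroom gives 286 replicas per region, which frames the key cost trade-off: full regional replication multiplies that figure, while centralizing saves instances but adds cross-region latency and egress.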
