InterviewStack.io

Hands-On Projects and Problem-Solving Questions

Discussion of practical projects and side work you have built or contributed to across domains. Candidates should be prepared to explain their role, architecture and design decisions, services and libraries chosen, alternatives considered, trade-offs made, challenges encountered, debugging and troubleshooting approaches, performance optimization, testing strategies, and lessons learned. This includes independent side projects, security labs and capture-the-flag practice, bug bounty work, coursework projects, and other hands-on exercises. Interviewers may probe how you identified requirements, prioritized tasks, collaborated with others, measured impact, and what you would do differently in hindsight.

Easy / Technical
You must deliver a model to production. Explain how you'd manage Python and system dependencies to ensure reproducible builds across developer machines, CI runners, and production containers. Discuss pip vs conda, lockfiles (pip-compile, conda-lock), OS-level packages, and how you'd surface dependency drift.
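A strong answer could include a concrete drift check. The sketch below compares installed package versions against a pinned lockfile (e.g. one produced by pip-compile); the "name==version" line format and the function names are illustrative assumptions, not a real tool's API.

```python
# Sketch: detect dependency drift by diffing installed package versions
# against a pinned lockfile (e.g. requirements.txt from pip-compile).
# Lockfile format assumed: one "name==version" per line, '#' comments.
from importlib import metadata

def parse_lockfile(lines):
    """Parse 'name==version' lines, skipping comments and blanks."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "==" in line:
            name, _, version = line.partition("==")
            pins[name.strip().lower()] = version.strip()
    return pins

def find_drift(pins):
    """Return {name: (pinned, installed_or_None)} for every mismatch."""
    drift = {}
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None  # pinned but not installed at all
        if installed != pinned:
            drift[name] = (pinned, installed)
    return drift
```

Running a check like this in CI (and failing the build on non-empty drift) is one way to surface environments that have silently diverged from the lockfile.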
Easy / Technical
Design unit and integration tests for a data preprocessing function that normalizes timestamps, imputes missing values, and encodes categorical features. Describe, in prose, the pytest test cases you would write: deterministic fixtures, edge cases (all values missing, unseen categories), which components to mock, and what to assert in CI to prevent regressions.
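To make the edge cases concrete, here is a toy version of such tests. The `preprocess` helper below is a hypothetical stand-in written for illustration (mean imputation plus index encoding), not a real library function; the edge-case tests mirror the cases the question names.

```python
# Sketch: pytest-style tests around a hypothetical preprocessing helper.
# `preprocess` is an illustrative stand-in, not a real API.

def preprocess(rows, known_categories):
    """Impute missing numeric values with the column mean and map
    categories to indices; unseen categories are encoded as -1."""
    values = [r["value"] for r in rows if r["value"] is not None]
    mean = sum(values) / len(values) if values else 0.0
    out = []
    for r in rows:
        v = r["value"] if r["value"] is not None else mean
        c = known_categories.index(r["cat"]) if r["cat"] in known_categories else -1
        out.append({"value": v, "cat": c})
    return out

# --- tests (collected and run by pytest) ---
def test_imputes_missing_with_mean():
    rows = [{"value": 1.0, "cat": "a"}, {"value": None, "cat": "a"},
            {"value": 3.0, "cat": "b"}]
    out = preprocess(rows, ["a", "b"])
    assert out[1]["value"] == 2.0  # mean of 1.0 and 3.0

def test_all_missing_falls_back_to_default():
    out = preprocess([{"value": None, "cat": "a"}], ["a"])
    assert out[0]["value"] == 0.0  # defined behavior, not a crash

def test_unseen_category_encoded_as_minus_one():
    out = preprocess([{"value": 1.0, "cat": "zzz"}], ["a", "b"])
    assert out[0]["cat"] == -1
```

In CI you would run these deterministically (fixed fixtures, no network or clock dependence) so any behavioral change in the preprocessing step fails the build.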
Medium / Technical
You need to fine-tune a 350M-parameter transformer on 10k labeled examples. Describe a practical implementation plan using Hugging Face Transformers and accelerate/PEFT: data preprocessing, batch sizing, gradient accumulation, LR schedule, checkpointing strategy, mixed precision, evaluation frequency, and early-stopping criteria to avoid overfitting.
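One way to ground the answer is to write out a starting configuration. The sketch below is a plain dict mirroring Hugging Face `TrainingArguments` field names; all values are illustrative starting points for a ~350M model on ~10k examples, not tuned results.

```python
# Sketch: illustrative fine-tuning hyperparameters, expressed as a plain
# dict mirroring Hugging Face TrainingArguments field names. Values are
# assumptions (reasonable defaults), not benchmarked settings.
def make_training_config(per_device_batch=8, target_effective_batch=32):
    # Gradient accumulation bridges the gap between what fits in GPU
    # memory and the effective batch size wanted for stable optimization.
    accum = max(1, target_effective_batch // per_device_batch)
    return {
        "per_device_train_batch_size": per_device_batch,
        "gradient_accumulation_steps": accum,
        "learning_rate": 2e-5,
        "lr_scheduler_type": "linear",
        "warmup_ratio": 0.06,
        "num_train_epochs": 5,
        "fp16": True,                    # mixed precision
        "eval_strategy": "steps",
        "eval_steps": 100,               # evaluate often on a small dataset
        "save_strategy": "steps",
        "save_steps": 100,               # checkpoint at the same cadence
        "load_best_model_at_end": True,  # pairs with early stopping on
        "metric_for_best_model": "eval_loss",
    }
```

With only 10k examples, frequent evaluation plus `load_best_model_at_end` (combined with an early-stopping callback) is what actually guards against overfitting; the LR and warmup numbers are the usual knobs to sweep.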
Medium / System Design
Given production requirements of 2000 QPS for small NLP models with variable request sizes and occasional batching, compare serving options: FastAPI+Gunicorn, TorchServe, NVIDIA Triton, and KFServing. Evaluate them for latency, dynamic batching support, model lifecycle management, scalability, and operational complexity.
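Dynamic batching is often the deciding criterion in this comparison, so it helps to show you understand the mechanism. The toy below reduces the batching logic that servers like Triton and TorchServe implement internally to a synchronous sketch: requests queue up and are flushed when the batch fills or a time window expires. Class name and thresholds are illustrative.

```python
# Sketch: the core dynamic-batching idea, as a synchronous toy.
# Real servers (Triton, TorchServe) do this asynchronously per model.
import time

class MicroBatcher:
    def __init__(self, max_batch_size=8, max_wait_s=0.005):
        self.max_batch_size = max_batch_size  # flush when batch is full...
        self.max_wait_s = max_wait_s          # ...or when the window expires
        self._pending = []
        self._deadline = None

    def submit(self, request, now=None):
        """Enqueue a request; return a flushed batch, or None if still waiting."""
        now = time.monotonic() if now is None else now
        if not self._pending:
            # First request in a batch starts the wait window.
            self._deadline = now + self.max_wait_s
        self._pending.append(request)
        if len(self._pending) >= self.max_batch_size or now >= self._deadline:
            return self.flush()
        return None

    def flush(self):
        batch, self._pending, self._deadline = self._pending, [], None
        return batch
```

The trade-off this makes explicit: a longer wait window raises throughput (bigger batches) at the cost of added tail latency, which is exactly the tension to discuss at 2000 QPS with variable request sizes.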
Hard / Technical
You have a BERT-base-like model and must achieve p95 inference latency under 10ms on commodity CPUs. Propose a prioritized engineering plan covering model distillation, int8 quantization, operator fusion and pruning, conversion to ONNX and serving with ONNX Runtime, CPU threading and affinity, and a benchmarking methodology. Explain the expected accuracy and latency trade-offs at each step.
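Whatever optimizations you propose, the plan needs a measurement harness to validate each step against the 10ms p95 target. A minimal sketch, with `infer_fn` standing in for the model call and the warmup/iteration counts as assumptions:

```python
# Sketch: a minimal p95 latency harness to validate each optimization
# step (distillation, int8, ONNX Runtime). `infer_fn` is a stand-in
# for the actual model call.
import time

def percentile(samples, pct):
    """Nearest-rank percentile over a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100.0 * len(ordered))) - 1))
    return ordered[k]

def benchmark(infer_fn, payload, iterations=200, warmup=20):
    for _ in range(warmup):            # warm caches / lazy init before timing
        infer_fn(payload)
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer_fn(payload)
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    return {"p50": percentile(samples, 50), "p95": percentile(samples, 95)}
```

Reporting p50 alongside p95 matters: several of the listed techniques (thread pinning especially) mainly shrink the gap between the two rather than the median itself.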
