InterviewStack.io

Technical Tools and Stack Proficiency Questions

Assessment of a candidate's practical proficiency across the technology stack and tools relevant to their role. This includes the ability to list and explain hands-on experience with programming languages, frameworks, libraries, cloud platforms, data and machine-learning tooling, analytics and visualization tools, and design and prototyping software. Candidates should demonstrate depth, not just familiarity, by describing specific problems they solved with each tool, trade-offs between alternatives, integration points, deployment and operational considerations, and examples of end-to-end workflows. The topic covers developer and data-scientist stacks such as Python and C++, machine-learning frameworks like TensorFlow and PyTorch, cloud providers such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure, as well as design and research tools such as Figma and Adobe Creative Suite. Interviewers may probe for evidence of hands-on tasks, configuration and troubleshooting, performance or cost trade-offs, versioning and collaboration practices, and how the candidate keeps their skills current.

Easy · Technical · 48 practiced
List three experiment tracking tools you have used, for example MLflow, Weights & Biases, Neptune. For each tool describe a concrete problem it helped solve in production, one integration step that was required, and one limitation you encountered.
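A strong answer usually names what these tools actually record: parameters, per-step metrics, and artifacts, keyed by a run ID. As a minimal stdlib-only sketch of that core loop (the `RunTracker` class and its file layout are hypothetical illustrations, not any real tool's API), the pattern MLflow or Weights & Biases automates looks roughly like this:

```python
import json
import tempfile
import time
import uuid
from pathlib import Path

class RunTracker:
    """Hypothetical file-based experiment tracker: records params, metrics,
    and a run ID, i.e. the bookkeeping MLflow / W&B / Neptune automate."""

    def __init__(self, root):
        self.run_id = uuid.uuid4().hex[:8]
        self.dir = Path(root) / self.run_id
        self.dir.mkdir(parents=True, exist_ok=True)
        self.metrics = []

    def log_params(self, **params):
        # one immutable params file per run, written once at start
        (self.dir / "params.json").write_text(json.dumps(params))

    def log_metric(self, name, value, step):
        # append-only metric log with timestamps for later comparison
        self.metrics.append({"name": name, "value": value,
                             "step": step, "ts": time.time()})
        (self.dir / "metrics.json").write_text(json.dumps(self.metrics))

tracker = RunTracker(root=tempfile.mkdtemp())
tracker.log_params(lr=3e-4, batch_size=64)
for step in range(3):
    tracker.log_metric("loss", 1.0 / (step + 1), step)
```

The real tools add what this sketch lacks, which is where the "integration step" and "limitation" parts of the answer live: a central server, UI, artifact storage, and framework autologging hooks.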
Medium · Technical · 49 practiced
Explain how you profile GPU memory usage during a long PyTorch training run. Include commands and tools such as nvidia-smi, torch.cuda.memory_summary, and torch.autograd.profiler, and describe actions you would take based on findings like reducing batch size, using gradient accumulation, or enabling mixed precision.
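The common pattern behind all three tools named in the question is the same: snapshot allocations around one training step, then rank allocation sites to find the culprit before deciding on batch-size reduction, gradient accumulation, or mixed precision. Since a GPU is not assumed here, the following is a stdlib-only analogue of that snapshot-and-rank workflow using `tracemalloc` (the `fake_training_step` allocations stand in for cached activations); on a real run you would read the equivalent report from `torch.cuda.memory_summary()`:

```python
import tracemalloc

def find_memory_hotspots(fn, top=3):
    """Snapshot heap allocations around one step and return the largest
    allocation sites, ranked (a CPU-side analogue of the per-allocator
    breakdown torch.cuda.memory_summary() prints for GPU memory)."""
    tracemalloc.start()
    keep = fn()  # hold the return value so its allocations appear in the snapshot
    snapshot = tracemalloc.take_snapshot()
    tracemalloc.stop()
    del keep
    return snapshot.statistics("lineno")[:top]

def fake_training_step():
    # stand-in for cached activations whose footprint grows with batch size
    return [bytearray(256_000) for _ in range(20)]

stats = find_memory_hotspots(fake_training_step)
```

The decision logic the question asks for follows from the ranking: if the top site is activation storage, gradient accumulation or a smaller batch helps; if it is parameter/optimizer state, mixed precision or a sharded optimizer is the better lever.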
Hard · System Design · 61 practiced
Architect a global, multi-region ML serving system that provides low-latency inference under 100 ms for users worldwide. Consider model replication, feature store locality, consistency and freshness of features, model update rollouts, A/B testing, and cost trade-offs between centralized and localized serving.
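One building block that ties the A/B-testing and rollout requirements together is deterministic user bucketing: if assignment is a pure function of user ID and experiment name, every region computes the same answer with zero cross-region coordination, which matters under a 100 ms budget. A minimal sketch (function name and the `canary_pct` parameter are illustrative, not from any specific system):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, canary_pct: float) -> str:
    """Deterministically bucket a user into 'canary' or 'stable'.
    Same inputs always give the same output, so a gradual model rollout
    stays sticky per user and consistent across regions without any
    shared state or coordination service."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "canary" if bucket < canary_pct else "stable"

variant = assign_variant("user-42", "reco-v2", 0.1)
```

Ramping the rollout is then just raising `canary_pct` in config; users already in the canary stay there, which keeps metrics comparable across the ramp.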
Hard · Technical · 59 practiced
A model's accuracy dropped significantly when moved from dev to prod. Training used GPUs while production inference uses CPUs. Create a systematic debugging plan covering data sampling, preprocessing parity, numeric precision differences, serialization formats, and performance benchmarks you would run to isolate the issue.
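For the numeric-precision step of that plan, a concrete parity check is to round-trip values through the lower precision and measure the drift against a tolerance. A stdlib-only sketch (the helper names and the example logits are made up for illustration), mimicking float64-trained values served through a float32 runtime:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (IEEE-754 float64) through float32,
    reproducing the precision loss of a lower-precision serving runtime."""
    return struct.unpack("f", struct.pack("f", x))[0]

def precision_parity(values, tol=1e-6):
    """Compare full-precision outputs to their float32-rounded counterparts;
    return the worst absolute drift and whether it stays within tolerance."""
    worst = max(abs(v - to_float32(v)) for v in values)
    return worst, worst <= tol

# large-magnitude values are where float32 drift bites hardest
logits = [0.1234567890123, 1e7 + 0.125, -3.33333333333]
worst, ok = precision_parity(logits, tol=1e-6)
```

In a real debugging session you would run the same check tensor-by-tensor between the GPU and CPU pipelines (and between serialization formats), which localizes whether the drop comes from precision or from a preprocessing mismatch.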
Medium · Technical · 63 practiced
An Airflow DAG that retrains a model daily has intermittent failures due to S3 connection errors. Outline how you would diagnose the root cause and harden the DAG: include retries and backoff strategy, idempotency, alerting, and how to test the changes before rolling out.
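The retry-and-backoff part of a good answer is a standard policy: exponential backoff with jitter, capped attempts, re-raise on exhaustion. In Airflow itself this maps to task-level settings (`retries`, `retry_delay`, and the exponential-backoff option), but the policy is easy to show standalone; a minimal sketch with a simulated flaky S3 call (`flaky_upload` and its failure count are fabricated for the demo):

```python
import functools
import random
import time

def retry(max_attempts=4, base_delay=0.5, exc=(ConnectionError,)):
    """Retry decorator: exponential backoff with jitter on the listed
    exception types; the final failure is re-raised so alerting still fires."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except exc:
                    if attempt == max_attempts - 1:
                        raise  # exhausted: surface the error to the scheduler
                    # jittered exponential backoff avoids synchronized retries
                    delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
                    time.sleep(delay)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(max_attempts=4, base_delay=0.01)
def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:  # simulate two transient S3 connection errors
        raise ConnectionError("simulated S3 timeout")
    return "uploaded"
```

Idempotency is the complementary half: each retried task run must write to a deterministic, run-scoped key so a retry after a partial upload cannot duplicate or corrupt data.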
