InterviewStack.io

Technology Stack Knowledge Questions

Assess a candidate's practical and conceptual understanding of technology stacks, spanning major programming languages, application frameworks, databases, infrastructure, and supporting tools. Candidates should be able to explain common use cases and trade-offs for languages such as Python, Java, Go, Rust, C++, and JavaScript, including the differences between compiled and interpreted languages, static and dynamic type systems, and their performance characteristics. They should be able to discuss frontend and backend frameworks and libraries, common web stacks, service architectures such as monoliths and microservices, and API design.

Evaluate understanding of data storage options, the trade-offs between relational and non-relational databases, and the role of SQL. Candidates should be familiar with cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure; infrastructure components, including containerization and orchestration tools such as Docker and Kubernetes; and development workflows, including version control, continuous integration and continuous delivery (CI/CD) pipelines, testing frameworks, automation, and infrastructure as code.

Assess operational concerns such as logging, monitoring and observability, deployment strategies, scalability, reliability, fault tolerance, security, and common failure modes and their mitigations. Interviewers may probe both awareness of specific tools and the candidate's depth of hands-on experience, their ability to justify technology choices by weighing trade-offs, constraints, and risk, and their willingness to learn and evaluate new technologies rather than claiming mastery of everything.

Hard · Technical
69 practiced
Discuss strategies to optimize a transformer-based model to meet a strict 1 ms p95 inference target on CPU-only edge devices. Cover algorithmic approaches such as distillation and pruning, deployment techniques like operator fusion and model compilation with TVM or ONNX Runtime, and quantization trade-offs, including per-channel quantization.
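One trade-off this question targets can be shown numerically: with symmetric int8 quantization, a single per-tensor scale is dominated by the largest channel, while per-channel scales adapt to each channel's range. A minimal pure-Python sketch (the toy weight values are illustrative, not from any real model):

```python
# Toy comparison of per-tensor vs per-channel symmetric int8 quantization.
# Values are quantized to [-127, 127] and dequantized; we compare the
# worst-case round-trip error under each scaling scheme.

def quantize_dequantize(values, scale):
    """Symmetric int8 quantize then dequantize a list of floats."""
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return [x * scale for x in q]

def max_abs_error(orig, deq):
    return max(abs(a - b) for a, b in zip(orig, deq))

# Hypothetical 2-channel weight matrix: channel 0 small, channel 1 large.
weights = [
    [0.01, -0.02, 0.015],   # channel 0: tiny weights
    [5.0, -4.5, 3.8],       # channel 1: large weights
]

# Per-tensor: one scale for everything, set by the global max magnitude.
flat = [v for row in weights for v in row]
tensor_scale = max(abs(v) for v in flat) / 127
per_tensor_err = max(
    max_abs_error(row, quantize_dequantize(row, tensor_scale))
    for row in weights
)

# Per-channel: each channel gets its own scale from its own max magnitude.
per_channel_err = max(
    max_abs_error(row, quantize_dequantize(row, max(abs(v) for v in row) / 127))
    for row in weights
)

print(per_channel_err <= per_tensor_err)  # per-channel is never worse here
```

The cost of per-channel quantization is storing one scale per channel and slightly more complex kernels, which is why some runtimes default to per-tensor scales for activations.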
Medium · Technical
88 practiced
Design a structured logging schema for inference requests that supports debugging, auditing, and analysis. Include fields such as correlation ID, request ID, model version, timestamp, input summary or hash, per-stage latency breakdown, and error codes. Discuss how to avoid logging PII, and strategies for sampling and redaction.
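A strong answer might sketch one concrete record. Below is a minimal stdlib-only example; the field names (`correlation_id`, stage names like `preprocess`) and the model version string are illustrative assumptions, not a mandated schema. Note the raw input is hashed rather than logged, one simple way to keep PII out of the log stream:

```python
# Sketch of a JSON-serializable structured log record for one inference
# request. The raw input is replaced by a SHA-256 digest to avoid PII.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_log_record(raw_input, model_version, stage_latencies_ms,
                     error_code=None):
    """Build one structured record; raw_input is hashed, never stored."""
    return {
        "correlation_id": str(uuid.uuid4()),   # ties together related calls
        "request_id": str(uuid.uuid4()),       # unique per request
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(raw_input.encode()).hexdigest(),
        "latency_ms": stage_latencies_ms,      # per-stage breakdown
        "latency_total_ms": sum(stage_latencies_ms.values()),
        "error_code": error_code,              # None on success
    }

record = build_log_record(
    raw_input="user text that may contain sensitive data",
    model_version="sentiment-v3.2",  # hypothetical version tag
    stage_latencies_ms={"preprocess": 1.2, "inference": 8.7, "postprocess": 0.4},
)
print(json.dumps(record, indent=2))
```

Sampling and redaction would sit above this: e.g. emit full records for errors, a sampled fraction for successes, and run field-level redaction before records leave the service boundary.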
Hard · System Design
79 practiced
Design an experiment tracking and provenance system that records data versions, code commits, environment snapshots, hyperparameters, metrics, and model artifacts. Decide whether to extend an existing tool such as MLflow or build a custom solution for a company with strict audit requirements, and describe integration points with CI/CD and a model registry.
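Whatever tool is chosen, the core artifact is a run record tying data, code, and environment together. A stdlib-only sketch of such a record (the field names and the placeholder commit hash are illustrative, not MLflow's schema):

```python
# Sketch of a provenance record for one training run: enough context to
# re-run the job and answer an auditor's "what produced this model?"
import hashlib
import json
import platform
import sys

def provenance_record(dataset_bytes, git_commit, hyperparams, metrics):
    """Snapshot data version, code version, environment, params, metrics."""
    return {
        "data_version": hashlib.sha256(dataset_bytes).hexdigest(),
        "git_commit": git_commit,
        "environment": {
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
        "hyperparameters": hyperparams,
        "metrics": metrics,
    }

run = provenance_record(
    dataset_bytes=b"contents of the training snapshot",  # stand-in data
    git_commit="3f2a1bc",                                # placeholder hash
    hyperparams={"lr": 3e-4, "batch_size": 64},
    metrics={"val_auc": 0.91},
)
print(json.dumps(run, indent=2))
```

In a real system the record would be written immutably (append-only store, signed entries) and linked from the model registry entry, which is where the CI/CD integration point lives.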
Easy · Technical
68 practiced
Explain core Kubernetes concepts relevant to ML workloads: Pod, Deployment, DaemonSet, StatefulSet, Service, Ingress, Namespace, and node affinity. Describe when you would use Kubernetes to serve ML models and which concerns are unique to GPU workloads.
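Several of these concepts meet in a single model-serving Deployment. Here is a sketch of such a manifest expressed as a Python dict (as you might build it before serializing to YAML); the names `model-server`, `ml-serving`, the image tag, and the `gpu-type` node label are assumptions, while `nvidia.com/gpu` is the standard NVIDIA device-plugin resource key:

```python
# Deployment manifest for a GPU model server, as a plain dict: shows
# Deployment, Namespace, label selectors, node affinity, and GPU limits.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "model-server", "namespace": "ml-serving"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "model-server"}},
        "template": {
            "metadata": {"labels": {"app": "model-server"}},
            "spec": {
                # Node affinity: only schedule onto nodes labeled as GPU
                # nodes (label key/value are hypothetical).
                "affinity": {
                    "nodeAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": {
                            "nodeSelectorTerms": [{
                                "matchExpressions": [{
                                    "key": "gpu-type",
                                    "operator": "In",
                                    "values": ["a100"],
                                }]
                            }]
                        }
                    }
                },
                "containers": [{
                    "name": "server",
                    "image": "registry.example.com/model-server:1.4",
                    # GPUs are requested as whole units via the device plugin.
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }],
            },
        },
    },
}
```

The GPU-specific concerns the question hints at follow directly: GPUs are scheduled as indivisible integer resources, so bin-packing, node labeling, and driver/device-plugin versioning become operational issues that CPU-only services never face.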
Hard · System Design
65 practiced
Design an end-to-end MLOps architecture to support daily training of many models on petabyte-scale datasets. Include components for data ingestion, a feature store, distributed training with cluster autoscaling, experiment tracking, a model registry, artifact storage, serving, monitoring, and rollback, and specify which cloud-managed or open source tools you would use and why.
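Of the components listed, the registry's promote/rollback contract is the one most often glossed over in answers. A tiny in-memory sketch of that contract (real registries such as MLflow's persist this durably; the model name and version strings are illustrative):

```python
# Minimal model registry keeping a promotion history per model, so that
# rollback means "revert live traffic to the previously promoted version".
class ModelRegistry:
    def __init__(self):
        self._history = {}  # model name -> promoted versions, newest last

    def promote(self, name, version):
        """Point live traffic at a new version, keeping history."""
        self._history.setdefault(name, []).append(version)

    def live(self, name):
        """Version currently serving traffic."""
        return self._history[name][-1]

    def rollback(self, name):
        """Revert to the previously promoted version."""
        history = self._history[name]
        if len(history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        history.pop()
        return history[-1]

reg = ModelRegistry()
reg.promote("churn-model", "v1")
reg.promote("churn-model", "v2")
assert reg.live("churn-model") == "v2"
reg.rollback("churn-model")
assert reg.live("churn-model") == "v1"
```

The design choice worth defending in an interview is that rollback is a pointer flip in the registry, not a redeploy of artifacts, which is what makes it fast enough to be a credible incident response.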
