
Machine Learning Algorithms and Theory Questions

Core supervised and unsupervised machine learning algorithms and the theoretical principles that guide their selection and use. Covers linear regression, logistic regression, decision trees, random forests, gradient boosting, support vector machines, k-means clustering, hierarchical clustering, principal component analysis, and anomaly detection. Topics include model selection, the bias-variance trade-off, regularization, overfitting and underfitting, ensemble methods and why they reduce variance, computational complexity and scaling considerations, interpretability versus predictive power, common hyperparameters and tuning strategies, and practical guidance on when each algorithm is appropriate given data size, feature types, noise, and explainability requirements.

Easy · Technical
Why is diversity among base learners important for ensemble performance? Give mechanisms to increase diversity (data sampling, feature subspace sampling, different model families) and describe metrics that quantify ensemble diversity. How does diversity relate to ensemble error decomposition?
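One common way to quantify the diversity this question asks about is the pairwise disagreement measure: the average fraction of examples on which each pair of base learners predicts differently. A minimal sketch (the function name and array layout are illustrative choices, not part of the question):

```python
import numpy as np

def pairwise_disagreement(preds):
    """Mean fraction of samples on which each pair of classifiers disagrees.

    preds: array of shape (n_learners, n_samples) holding class labels.
    Returns a value in [0, 1]; higher means a more diverse ensemble.
    """
    n_learners = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(n_learners):
        for j in range(i + 1, n_learners):
            total += float(np.mean(preds[i] != preds[j]))
            pairs += 1
    return total / pairs

# Three classifiers' label predictions on four samples
preds = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 1],
                  [1, 1, 1, 0]])
score = pairwise_disagreement(preds)  # averages disagreement over all 3 pairs
```

Other measures seen in the literature (Q-statistic, kappa-error diagrams) follow the same spirit: compare base learners pairwise and aggregate.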
Medium · Technical
Explain why bagging (e.g., random forest) tends to reduce variance whereas boosting (e.g., gradient boosting) focuses on reducing bias. Provide a concise mathematical intuition and discuss practical implications in production, including latency, model complexity, and risk of overfitting.
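A standard starting point for the mathematical intuition: for B identically distributed estimators with variance sigma^2 and pairwise correlation rho, the variance of their average is rho*sigma^2 + (1 - rho)*sigma^2 / B. A tiny helper (my own illustration, not part of the question) makes the two limits concrete: with rho = 0 averaging drives variance toward zero, while with rho = 1 averaging does nothing — which is why random forests work to decorrelate their trees.

```python
def averaged_variance(sigma2, rho, B):
    """Variance of the mean of B identically distributed estimators,
    each with variance sigma2 and pairwise correlation rho."""
    return rho * sigma2 + (1 - rho) * sigma2 / B

# Uncorrelated trees: averaging 100 of them cuts variance by 100x
v_uncorr = averaged_variance(1.0, rho=0.0, B=100)
# Perfectly correlated trees: averaging gains nothing
v_corr = averaged_variance(1.0, rho=1.0, B=100)
```

Boosting, by contrast, fits each new learner to the current residuals, so it directly attacks bias; the practical cost is a sequential (higher-latency) model that can overfit if the number of rounds or learning rate is not controlled.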
Hard · System Design
Design a feature store that supports both online low-latency serving and batch training with strong consistency guarantees to avoid training/serving skew. Requirements: 1M feature writes/day, 10k online reads/sec, versioning and lineage, TTLs, batch materialization, and CI for feature changes. Outline architecture, storage choices, APIs, and data quality controls.
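To fix ideas for the online-serving side of this design, here is a toy in-memory sketch showing two of the listed requirements — versioned reads and TTL expiry. All class, method, and field names are hypothetical; a real system would back this with a low-latency store and add lineage, batch materialization, and CI on feature definitions.

```python
import time

class InMemoryFeatureStore:
    """Toy sketch of online feature serving with per-write version and TTL.
    Illustrative only; not a production design."""

    def __init__(self):
        # (entity_id, feature_name) -> (value, version, expires_at)
        self._data = {}

    def write(self, entity_id, feature_name, value, version, ttl_seconds):
        expires_at = time.time() + ttl_seconds
        self._data[(entity_id, feature_name)] = (value, version, expires_at)

    def read(self, entity_id, feature_name):
        rec = self._data.get((entity_id, feature_name))
        if rec is None or rec[2] < time.time():
            return None  # missing or expired past its TTL
        value, version, _ = rec
        return {"value": value, "version": version}

store = InMemoryFeatureStore()
store.write("user_42", "avg_order_value", 37.5, version=3, ttl_seconds=3600)
fresh = store.read("user_42", "avg_order_value")
```

Serving the same versioned records to both the online path and batch training materialization is one way to attack the training/serving skew the question highlights.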
Medium · Technical
Explain the kernel trick used by support vector machines (SVM). Compare linear and RBF kernels: what effects do they have on decision boundaries? When would you choose an SVM over logistic regression in practice? Discuss computational and memory complexity of kernel SVMs and strategies to scale them to large datasets.
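The heart of the kernel trick is that the Gram matrix K[i, j] = k(x_i, x_j) can be computed directly, without ever materializing the (for RBF, infinite-dimensional) feature map. A minimal NumPy sketch of the RBF kernel the question mentions, assuming samples are stored row-wise:

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x1_i - x2_j||^2).

    Expands the squared distance as ||a||^2 + ||b||^2 - 2 a.b so the
    whole matrix is built from one matrix product, never the feature map.
    """
    sq_dists = (np.sum(X1 ** 2, axis=1)[:, None]
                + np.sum(X2 ** 2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    # Clamp tiny negative values caused by floating-point cancellation
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, gamma=0.5)  # 2x2 symmetric matrix with ones on the diagonal
```

The O(n^2) size of this matrix is exactly the memory bottleneck the question asks about, motivating approximations such as random Fourier features or the Nystroem method.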
Hard · Technical
Implement a simplified gradient boosting regressor from scratch in Python using decision stumps as base learners. Support `n_estimators`, `learning_rate`, `subsample` (row sampling), and squared error loss. Provide `fit(X, y)` and `predict(X)` methods and document any numerical stability considerations.
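One possible solution sketch for this exercise (class names and defaults are my own choices, not prescribed by the question). Each stump is fit to the residuals, which for squared-error loss equal the negative gradient of the current ensemble prediction; `subsample` draws rows without replacement each round.

```python
import numpy as np

class Stump:
    """Depth-1 regression tree fit by exhaustive threshold search (squared error)."""

    def fit(self, X, y):
        # (error, feature, threshold, left_value, right_value); fallback = mean
        best = (np.inf, 0, 0.0, float(y.mean()), float(y.mean()))
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j])[:-1]:
                left = X[:, j] <= t
                lm, rm = float(y[left].mean()), float(y[~left].mean())
                err = float(((y[left] - lm) ** 2).sum() + ((y[~left] - rm) ** 2).sum())
                if err < best[0]:
                    best = (err, j, t, lm, rm)
        _, self.j, self.t, self.left_val, self.right_val = best
        return self

    def predict(self, X):
        return np.where(X[:, self.j] <= self.t, self.left_val, self.right_val)


class GradientBoostingStumps:
    """Gradient boosting with squared-error loss and stump base learners."""

    def __init__(self, n_estimators=100, learning_rate=0.1, subsample=1.0, seed=0):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.subsample = subsample
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.init_ = float(np.mean(y))  # constant model minimizing squared error
        self.stumps_ = []
        pred = np.full(len(y), self.init_)
        for _ in range(self.n_estimators):
            resid = y - pred  # negative gradient of 0.5 * (y - pred)^2
            n = max(1, int(self.subsample * len(y)))
            idx = self.rng.choice(len(y), size=n, replace=False)
            stump = Stump().fit(X[idx], resid[idx])
            pred = pred + self.learning_rate * stump.predict(X)
            self.stumps_.append(stump)
        return self

    def predict(self, X):
        pred = np.full(X.shape[0], self.init_)
        for s in self.stumps_:
            pred = pred + self.learning_rate * s.predict(X)
        return pred
```

Numerical-stability notes the question asks for: initializing with the mean keeps residuals centered, the stump falls back to a constant when no split improves the error, and a small `learning_rate` with many rounds is more stable than a large step.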
