121 MLQA – Study Guide Edition Reading List
This page is unlisted and intended for customers of 121 Essential Machine Learning Questions – Study Guide Edition.
Chapter 1 – The Big Picture
- The Bias-Variance Tradeoff
- Bayesian Prior and Posterior Distributions
- Conjugate Priors
- Supervised vs. Unsupervised Learning
- Parametric vs. Nonparametric Models
- What is a Generative Model?
- Generative vs. Discriminative Models
- Curse of Dimensionality
- End-to-End Predictive Modeling Tutorial
- No Free Lunch Theorem
Chapter 2 – Optimization
- Gradient Descent
- Gradient Descent vs. Stochastic Gradient Descent
- How to Select a Loss Function
- Intuition Behind Log-Loss Function
- Types of Optimization Algorithms
Chapter 3 – Data Preprocessing
- Box-Cox Power Transformation
- Statistical Dimensionality Reduction Techniques
- Is Multicollinearity Really a Problem?
- One-Hot vs. Label Encoding
- One-Hot Encoding for Machine Learning
Chapter 4 – Sampling & Splitting
- Train/Test vs. Train/Validation/Test Split
- How to Divide a Dataset into Training and Validation
- Cross Validation in Plain English
- Why Bootstrapping Works
- Class Imbalance in Supervised Machine Learning
Chapter 5 – Supervised Learning
- Advantages of Different Classification Algorithms
- What’s Random in a Random Forest?
- Regularization in Machine Learning
- L1 vs. L2 Regularization
- Ways to Make a Predictive Model More Robust to Outliers
- When it’s Better to Include Fewer Features
- Naive Bayes Intuition
- Logistic Regression vs. Decision Trees
- How a Decision Tree Decides on Splits
- How Does an SVM Work
Chapter 6 – Unsupervised Learning
- Factor vs. Cluster Analysis
- Principal Component Analysis (PCA)
- PCA or Normalization First?
- Independent Component Analysis (ICA)
- K-Means Algorithm
- Latent Dirichlet Allocation (LDA)
- K-Means vs. Hierarchical Clustering
Chapter 7 – Model Evaluation
- Precision vs. Recall
- Type I vs. Type II Error
- More Data vs. Better Algorithms
- Area Under ROC Curve (AUC)
- AUC vs. Standard Accuracy
- Classifier Calibration in Machine Learning
- Identifying Underfitting
- Why Correlation Doesn’t Imply Causation
- Correlation vs. Covariance
Chapter 8 – Ensemble Learning
- Introduction to Ensemble Learning
- How Do Ensembles Work?
- Bagging vs. Boosting vs. Stacking
- Definition of Weak Learner
- How Does Boosting Work?