- Accuracy
- Activation Function
- Active Learning
- AdaBoost
- AdaBoost vs. Gradient Boosting vs. XGBoost
- AdaDelta
- Area Under the Precision-Recall Curve (AUPRC)
- AUC Score
- Averaging in Ensemble Learning
- Backward Feature Elimination
- Bagging
- Bayesian Optimization for Hyperparameter Tuning
- Bias & Variance
- Binary Cross Entropy
- Binning or Bucketing
- Boosting
- Collinearity
- Confusion Matrix
- Connections: Log Likelihood, Cross Entropy, KL Divergence, Logistic Regression, and Neural Networks
- Cosine Similarity
- Cross Entropy
- Cross Validation
- Curse of Dimensionality
- Data Augmentation
- Data Imputation
- DBSCAN Clustering
- Decision Boundary
- Decision Tree
- Decision Tree (Classification)
- Decision Tree (Regression)
- Dense vs. Sparse Data
- Dependent Variable
- Derivative
- Differentiation
- Dimensionality Reduction
- Dying ReLU
- Elastic Net Regression
- Ensemble Learning
- Entropy
- Entropy and Information Gain
- F-Beta Score
- F1 Score
- False Negative Error
- False Positive Rate
- Feature Engineering
- Feature Extraction
- Feature Selection
- Forward Feature Selection
- Gaussian Distribution
- GBM
- Genetic Algorithms for Hyperparameter Tuning
- Gini Impurity
- Global Minima
- Gradient
- Gradient Boost (Classification)
- Gradient Boost (Regression)
- Gradient Boosting
- Gradient Descent
- Grid Search for Hyperparameter Tuning
- Handling Imbalanced Datasets
- Handling Missing Data
- Handling Outliers
- Hierarchical Clustering
- Hinge Loss
- How to Choose a Kernel in SVM
- How to Combine Models in Ensemble Learning
- Huber Loss
- Independent Variable
- K-Fold Cross Validation
- K-means Clustering
- K-means vs. Hierarchical
- K-Nearest Neighbors (KNN)
- Kernel in SVM
- Kernel Regression
- Kernel Trick
- KL Divergence
- L1 or Lasso Regression
- L1 vs. L2 Regression
- L2 or Ridge Regression
- Label Encoding
- Learning Rate Scheduler
- LightGBM
- Linear Regression
- Local Minima
- Log-cosh Loss
- Logistic Regression
- Logistic Regression vs. Neural Network
- Loss vs. Cost
- Machine Learning Algorithm Selection
- Machine Learning vs. Deep Learning
- Majority Vote in Ensemble Learning
- Margin in SVM
- Maximal Margin Classifier
- Mean Absolute Error (MAE)
- Mean Absolute Percentage Error (MAPE)
- Mean Squared Error (MSE)
- Mean Squared Logarithmic Error (MSLE)
- Mini-Batch SGD
- ML Interview
- ML System Design
- Model-Based vs. Instance-Based Learning
- Multi-Class Cross Entropy
- Multi-Label Cross Entropy
- Multi-Layer Perceptron
- Multicollinearity
- Multivariable Linear Regression
- Multivariate Linear Regression
- Naive Bayes
- Normalization
- One-Class Classification
- One-Class Gaussian
- One-vs-One Multi-Class Classification
- One-vs-Rest or One-vs-All Multi-Class Classification
- Overfitting
- Oversampling
- Parameter vs. Hyperparameter
- PCA vs. Autoencoder
- Perceptron
- Polynomial Regression
- Precision
- Precision-Recall Curve (PRC)
- Principal Component Analysis (PCA)
- Pruning in Decision Trees
- PyTorch Loss Functions
- Random Forest
- Recall
- Reinforcement Learning
- ReLU
- ROC Curve
- Root Mean Squared Error (RMSE)
- Root Mean Squared Logarithmic Error (RMSLE)
- Saddle Points
- Semi-supervised Learning
- Sensitivity
- Sigmoid Function
- Simple Linear Regression
- Soft Margin in SVM
- Softmax
- Specificity
- Splitting Nodes in a Decision Tree
- Stacking or Meta-Model in Ensemble Learning
- Standardization
- Standardization or Normalization
- Stochastic Gradient Descent (SGD)
- Stump
- Supervised Learning
- Support Vector
- Support Vector Machine (SVM)
- Surprise
- SVC
- Shallow vs. Deep Learning
- TF-IDF
- Time Complexity of ML Algorithms
- Time Complexity of ML Models
- True Negative Rate
- True Positive Rate
- Type 1 Error vs. Type 2 Error
- Undersampling
- Unsupervised Learning
- XGBoost