Table of Contents
Why?
Binary classifiers
Metrics
Confusion Matrix
Point metrics: Accuracy, Precision, Recall / Sensitivity, Specificity, F-score
Summary metrics: AU-ROC, AU-PRC, Log-loss
Choosing Metrics
Class Imbalance
Multi-class
Why are metrics important?
Binary Classification
Score based models
Point metrics: Confusion Matrix
Point metrics: True Positives
Point metrics: True Negatives
Point metrics: False Positives
Point metrics: False Negatives
Point metrics: Accuracy
Point metrics: Precision
Point metrics: Positive Recall (Sensitivity)
Point metrics: Negative Recall (Specificity)
Point metrics: F score
Point metrics: Changing threshold
Threshold
Threshold Scanning
Summary metrics: ROC (rotated version)
Summary metrics: PRC
Summary metrics: Log-Loss motivation
Summary metrics: Log-Loss
Calibration
Unsupervised Learning
Class Imbalance: Problems
Class Imbalance: Metrics (pathological cases)
Multi-class (few remarks)
Choosing Metrics
Thank You!
Summary
This presentation covers why evaluation metrics matter in machine learning: they capture real-world business goals, organize team efforts, and measure progress over time. Focusing on binary classifiers, it builds up from the confusion matrix to point metrics (accuracy, precision, recall/sensitivity, specificity, F-score), shows how the choice of decision threshold changes them, and then moves to summary metrics (AU-ROC, AU-PRC, log-loss) that aggregate performance across all thresholds, along with calibration of predicted scores. It also addresses the pitfalls of class imbalance, where some metrics become pathological, offers a few remarks on multi-class settings, and closes with guidance on choosing metrics suited to one's specific objectives.
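
To make the distinction between point metrics and summary metrics concrete, here is a minimal sketch, not taken from the original deck, that computes the metrics named above. It assumes scikit-learn and NumPy are available; the labels and scores are toy values chosen purely for illustration.

    # Sketch: point metrics (threshold-dependent) vs. summary metrics
    # (threshold-free) for a binary classifier. Toy data, not from the deck.
    import numpy as np
    from sklearn.metrics import (
        confusion_matrix, accuracy_score, precision_score, recall_score,
        f1_score, roc_auc_score, average_precision_score, log_loss,
    )

    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # ground-truth labels
    y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])  # model scores

    # Point metrics require a threshold to turn scores into hard predictions.
    threshold = 0.5
    y_pred = (y_score >= threshold).astype(int)

    # For binary input, confusion_matrix(...).ravel() yields tn, fp, fn, tp.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("TP, TN, FP, FN:", tp, tn, fp, fn)
    print("Accuracy:   ", accuracy_score(y_true, y_pred))
    print("Precision:  ", precision_score(y_true, y_pred))
    print("Recall:     ", recall_score(y_true, y_pred))   # positive recall / sensitivity
    print("Specificity:", tn / (tn + fp))                 # negative recall
    print("F1 score:   ", f1_score(y_true, y_pred))

    # Summary metrics consume the raw scores: AU-ROC and AU-PRC integrate
    # over all thresholds, while log-loss scores the probabilities directly.
    print("AU-ROC:     ", roc_auc_score(y_true, y_score))
    print("AU-PRC:     ", average_precision_score(y_true, y_score))
    print("Log-loss:   ", log_loss(y_true, y_score))

Note that average_precision_score is scikit-learn's step-wise approximation of the area under the precision-recall curve; rerunning the point-metric block with a different threshold changes accuracy, precision, recall, specificity, and F1, while the three summary metrics stay fixed, which is exactly the threshold-scanning idea the slides walk through.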