The Foundations of Cost-Sensitive Learning

By Charles Elkan

Table of Contents

Abstract
1 Making decisions based on a cost matrix
1.1 Cost matrix properties
1.2 Costs versus benefits
1.3 Making optimal decisions
2 Achieving cost-sensitivity by rebalancing
3 New probabilities given a new base rate
4 Effects of changing base rates
4.1 Changing base rates and Bayesian learning
4.2 Decision tree growing
4.3 Decision tree pruning
5 Conclusions

Summary

This paper revisits the problem of learning and decision-making when different misclassification errors incur different costs. The author characterizes, precisely but intuitively, when a cost matrix is economically reasonable, shows how to avoid defining an incoherent one, and explains how to make optimal decisions given a cost matrix and calibrated probability estimates. The paper then examines how Bayesian and decision tree learning methods behave when the class distribution of the training data changes, concluding that rebalancing the training examples has little effect on the classifiers learned. The recommended approach is therefore to learn a classifier from the unmodified training set and apply cost-sensitive decision-making at prediction time, which makes it important to obtain well-calibrated probability estimates from decision trees and naive Bayesian classifiers.
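The two core computations the summary alludes to can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the paper's code: it uses the convention that `costs[i][j]` is the cost of predicting class `i` when the true class is `j` (classes 0 and 1), derives the probability threshold above which predicting positive minimizes expected cost, and applies the standard Bayes prior-shift adjustment for converting a probability estimated under one base rate to a new base rate.

```python
def optimal_threshold(c00, c01, c10, c11):
    """Threshold p* such that predicting positive is optimal iff P(class=1) >= p*.

    Predicting positive is optimal when its expected cost is no greater than
    that of predicting negative:
        p*c11 + (1-p)*c10  <=  p*c01 + (1-p)*c00
    Solving for p gives the expression below. With cij = cost of predicting
    class i when the true class is j, c10 is the false-positive cost and
    c01 the false-negative cost.
    """
    return (c10 - c00) / ((c10 - c00) + (c01 - c11))


def decide(p, costs):
    """Return 1 (positive) iff the estimated P(class=1) meets the threshold."""
    (c00, c01), (c10, c11) = costs
    return 1 if p >= optimal_threshold(c00, c01, c10, c11) else 0


def adjust_for_base_rate(p, b, b_new):
    """Adjust a probability p estimated under base rate b to base rate b_new.

    Standard Bayes-rule reweighting of the positive-class prior; equivalent
    to the closed-form adjustment derived in the paper.
    """
    num = (b_new / b) * p
    return num / (num + ((1 - b_new) / (1 - b)) * (1 - p))


# With unit error costs the threshold is the familiar 0.5; making false
# negatives nine times as costly as false positives lowers it to 0.1,
# so a weak positive signal (p = 0.2) already triggers a positive decision.
balanced = [[0.0, 1.0], [1.0, 0.0]]
skewed = [[0.0, 9.0], [1.0, 0.0]]
print(optimal_threshold(0.0, 1.0, 1.0, 0.0))  # 0.5
print(decide(0.2, balanced))                  # 0
print(decide(0.2, skewed))                    # 1
```

Note the design point this illustrates: the classifier itself is untouched; only the decision threshold (or the probability estimate, via the base-rate adjustment) changes, which is exactly the paper's recommended alternative to rebalancing the training set.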