Classification

Computer says no

February 20, 2017 — September 13, 2024

classification
metrics
statistics

Distinguishing whether a thing was generated from distribution A or from distribution B. This is a learning-theory, loss-minimization framing; we might also consider predicting probability distributions over categories.

Mostly this page is a list of target losses and metrics for assessing classifiers.

1 Classification loss zoo

Surprisingly subtle. ROC, AUC, precision/recall, confusion…

One of the less abstruse summaries of these is the scikit-learn classifier loss page, which includes both formulae and verbal descriptions. The Pirates guide to various scores provides an easy introduction.

1.1 Expected cost

But actually, do we need most of this zoo of metrics? Couldn’t we just use expected cost? That usually provides a more principled way to compare classifiers, via decision theory. See Ferrer (2023), Dyrland, Lundervold, and Mana (2023b) and Suzuki (2022). (Why is this literature so recent?) TBC
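
To make the recipe concrete, here is a minimal numpy sketch, with a made-up cost matrix: given per-example class probabilities from any probabilistic classifier, the Bayes-optimal decision minimizes posterior expected cost, and the reported metric is simply the average cost incurred on a labelled test set.

```python
import numpy as np

# Hypothetical cost matrix: cost[i, j] = cost of deciding class j when truth is i.
# Here a false negative (deciding 0 when the truth is 1) is 5x worse than a false positive.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])

def bayes_decisions(probs, cost):
    """Pick, per example, the decision minimizing posterior expected cost.

    probs: (n, K) array of class probabilities from any probabilistic classifier.
    cost:  (K, K) array, cost[true, decided].
    """
    # Expected cost of each possible decision: probs @ cost has shape (n, K).
    return np.argmin(probs @ cost, axis=1)

def empirical_expected_cost(y_true, decisions, cost):
    """Average cost actually incurred on a labelled test set."""
    return cost[y_true, decisions].mean()

probs = np.array([[0.7, 0.3], [0.4, 0.6], [0.9, 0.1]])
y_true = np.array([1, 1, 0])
d = bayes_decisions(probs, cost)
print(d, empirical_expected_cost(y_true, d, cost))
```

With this cost matrix the induced threshold on the positive-class probability is 1/6 rather than 0.5, which is the whole point: the costs, not an ad hoc metric, determine the operating point.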

1.2 Matthews correlation coefficient

Due to Matthews (1975). This is the first choice for seamlessly handling multiclass problems within a single metric, since its behaviour is reasonable for 2-class or multiclass, balanced or unbalanced problems, and it’s computationally cheap. Unless your classes have vastly different importance, this is a good default.

However, it is not differentiable with respect to classification certainties, so we can’t directly use it as, e.g., a target loss in neural nets; instead we use surrogate losses which are differentiable, and intermittently check that optimising them actually improves the MCC.

I tell ya what, though, it looks like it could be made differentiable via a relaxation, and variationally distributional if we interpreted it in a likelihood context. Hmm.
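
For what it’s worth, one way that relaxation might look (my sketch, not an established reference implementation): replace the hard confusion counts with their expectations under the predicted probabilities, which makes the MCC expression smooth in the classifier output.

```python
import numpy as np

def soft_mcc(p, z, eps=1e-8):
    """Differentiable surrogate for binary MCC.

    p: (n,) predicted probabilities of class 1 (e.g. sigmoid outputs).
    z: (n,) binary labels in {0, 1}.
    Replaces hard confusion counts with their expectations under p, so the
    expression is smooth in p. Written in numpy for clarity; in practice the
    same expression goes inside an autodiff framework as a training objective.
    """
    tp = np.sum(p * z)
    fp = np.sum(p * (1 - z))
    fn = np.sum((1 - p) * z)
    tn = np.sum((1 - p) * (1 - z))
    num = tp * tn - fp * fn
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    return num / den  # maximize this (or minimize its negation)

p = np.array([0.9, 0.2, 0.7, 0.1])
z = np.array([1, 0, 1, 0])
print(soft_mcc(p, z))
```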

1.2.1 2-class case

Take your \(2 \times 2\) confusion matrix of true positives, false positives, etc.

\[ {\text{MCC}}={\frac {TP\times TN-FP\times FN}{{\sqrt {(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}}} \]

\[ |{\text{MCC}}|={\sqrt {{\frac {\chi ^{2}}{n}}}} \]
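
Transcribed directly into code (the zero-denominator convention, returning 0 when a marginal is empty, is a common choice, e.g. scikit-learn’s, rather than part of the definition):

```python
import numpy as np

def mcc_binary(tp, tn, fp, fn):
    """MCC straight from the 2x2 confusion counts."""
    num = tp * tn - fp * fn
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den > 0 else 0.0  # convention: 0 when a margin is empty

print(mcc_binary(tp=90, tn=85, fp=15, fn=10))
```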

1.2.2 Multiclass case

Take your \(K \times K\) confusion matrix \(C\), then

\[ \text{MCC}=\frac{\sum_{k}\sum_{l}\sum_{m}\left(C_{kk}C_{lm}-C_{kl}C_{mk}\right)}{\sqrt{\sum_{k}\bigl(\sum_{l}C_{kl}\bigr)\bigl(\sum_{k'\neq k}\sum_{l'}C_{k'l'}\bigr)}\,\sqrt{\sum_{k}\bigl(\sum_{l}C_{lk}\bigr)\bigl(\sum_{k'\neq k}\sum_{l'}C_{l'k'}\bigr)}} \]
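
In code, the triple sum collapses to row and column marginals (this is the equivalent form in Gorodkin 2004); a minimal numpy sketch:

```python
import numpy as np

def mcc_multiclass(C):
    """MCC from a KxK confusion matrix C (rows = true class, cols = predicted)."""
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)   # per-class true counts (row sums)
    p = C.sum(axis=0)   # per-class predicted counts (column sums)
    s = C.sum()         # total number of samples
    c = np.trace(C)     # correctly classified
    num = c * s - t @ p
    den = np.sqrt(s**2 - p @ p) * np.sqrt(s**2 - t @ t)
    return num / den if den > 0 else 0.0

C = np.array([[50,  3,  2],
              [ 4, 40,  6],
              [ 1,  5, 44]])
print(mcc_multiclass(C))  # ~0.797
```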

1.3 ROC/AUC

Receiver Operating Characteristic/Area Under the Curve. Supposedly dates back to radar operators in WWII. It is the graph of the false positive rate versus the true positive rate as the decision threshold changes. Hanley and McNeil (1983) discuss the AUC in radiology; supposedly Spackman (1989) introduced it to machine learning, but I haven’t read the article in question. It allows us to trade off the importance of false positives against false negatives.
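
Rather than tracing the curve, the AUC can be computed from the rank statistic it equals: the probability that a uniformly random positive example outscores a uniformly random negative one, with ties counted half. A minimal O(n²) pairwise sketch for clarity (production code, e.g. sklearn.metrics.roc_auc_score, sorts instead):

```python
import numpy as np

def auc(scores, labels):
    """AUC as P(random positive outscores random negative), ties counted half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

scores = np.array([0.9, 0.8, 0.3, 0.2, 0.6])
labels = np.array([1, 1, 0, 0, 1])
print(auc(scores, labels))  # 1.0: every positive outscores every negative
```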

1.4 Cross entropy

I’d better write down an explicit form for this, since most ML toolkits are curiously shy about giving it even though it’s the default.

Let \(x\) be the estimated probability and \(z\) be the supervised class label. Then the binary cross entropy loss is

\[ \ell(x,z) = -z\log(x) - (1-z)\log(1-x) \]

If \(y=\operatorname{logit}(x)\) is a logit rather than a probability, then the numerically stable version is

\[ \ell(y,z) = \max\{y,0\} - yz + \log(1+\exp(-|y|)) \]
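
This is the form implemented by, e.g., TensorFlow’s sigmoid_cross_entropy_with_logits and PyTorch’s BCEWithLogitsLoss. As a sanity-check sketch:

```python
import numpy as np

def bce_with_logits(y, z):
    """Numerically stable binary cross entropy taking logits y, not probabilities:
    max(y, 0) - y*z + log1p(exp(-|y|)).
    Never forms sigmoid(y) explicitly, so it never takes log of 0."""
    return np.maximum(y, 0) - y * z + np.log1p(np.exp(-np.abs(y)))

y = np.array([10.0, -10.0, 0.0])
z = np.array([1.0, 0.0, 1.0])
print(bce_with_logits(y, z))  # ~[4.5e-5, 4.5e-5, log 2]
```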

1.5 F-measures

🏗

2 Use of softmax

TBC.

3 Gumbel-max

See Gumbel-max tricks.
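
In brief, the trick samples from a categorical distribution by perturbing the logits with i.i.d. Gumbel(0,1) noise and taking the argmax; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(logits):
    """Draw from Categorical(softmax(logits)) without computing softmax."""
    g = rng.gumbel(size=logits.shape)
    return int(np.argmax(logits + g))

logits = np.log(np.array([0.5, 0.3, 0.2]))
draws = [gumbel_max_sample(logits) for _ in range(10_000)]
print(np.bincount(draws) / len(draws))  # ≈ [0.5, 0.3, 0.2]
```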

4 Pólya-Gamma augmentation

See Pólya-Gamma.

5 Unbalanced class problems

🏗

6 Analog Bits

Fuchi et al. (2023):

The one-hot vector has long been widely used in machine learning as a simple and generic method for representing discrete data. However, this method increases the number of dimensions linearly with the categorical data to be represented, which is problematic from the viewpoint of spatial computational complexity in deep learning, which requires a large amount of data. Recently, Analog Bits (Chen, Zhang, and Hinton 2022), a method for representing discrete data as a sequence of bits, was proposed on the basis of the high expressiveness of diffusion models. However, since the number of category types to be represented in a generation task is not necessarily at a power of two, there is a discrepancy between the range that Analog Bits can represent and the range represented as category data. If such a value is generated, the problem is that the original category value cannot be restored. To address this issue, we propose Residual Bit Vector (ResBit), which is a hierarchical bit representation. Although it is a general-purpose representation method, in this paper, we treat it as numerical data and show that it can be used as an extension of Analog Bits using Table Residual Bit Diffusion (TRBD), which is incorporated into TabDDPM, a tabular data generation method. We experimentally confirmed that TRBD can generate diverse and high-quality data from small-scale table data to table data containing diverse category values faster than TabDDPM. Furthermore, we show that ResBit can also serve as an alternative to the one-hot vector by utilizing ResBit for conditioning in GANs and as a label expression in image classification.

7 Hierarchical Multi-class classifiers

Read et al. (2021) discuss how to create multi-class classifiers by stacking layers of binary classifiers, using each one’s output as a feature input to the next, which is an elegant solution IMO.
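
scikit-learn ships a multi-label version of this idea as sklearn.multioutput.ClassifierChain; a minimal sketch on synthetic data (the hyperparameters here are arbitrary):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

# Synthetic multi-label problem: each sample can carry several of 4 labels.
X, Y = make_multilabel_classification(n_samples=500, n_classes=4, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# Each binary classifier in the chain sees the raw features plus the
# predictions of all earlier classifiers in the chain.
chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X_train, Y_train)
print(chain.score(X_test, Y_test))  # exact-match accuracy over label vectors
```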

8 Philosophical connection to semantics

Since semantics is what humans call classifiers.

9 Connection to legibility

I do think there is something interesting happening with legibility. States need to classify, apparently. Adversarial classification is my point of entry into that concept.

10 References

Arya, Schauer, Schäfer, et al. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Baldi, Brunak, Chauvin, et al. 2000. “Assessing the Accuracy of Prediction Algorithms for Classification: An Overview.” Bioinformatics.
Brodersen, Ong, Stephan, et al. 2010. “The Balanced Accuracy and Its Posterior Distribution.” In Proceedings of the 2010 20th International Conference on Pattern Recognition. ICPR ’10.
Chen, Zhang, and Hinton. 2022. “Analog Bits: Generating Discrete Data Using Diffusion Models with Self-Conditioning.” In.
Che, Zhang, Sohl-Dickstein, et al. 2020. “Your GAN Is Secretly an Energy-Based Model and You Should Use Discriminator Driven Latent Sampling.” arXiv:2003.06060 [Cs, Stat].
Dyrland, Lundervold, and Mana. 2023a. “Does the Evaluation Stand up to Evaluation? A First-Principle Approach to the Evaluation of Classifiers.”
———. 2023b. “Don’t Guess What’s True: Choose What’s Optimal. A Probability Transducer for Machine-Learning Classifiers.”
Ferrer. 2023. “Analysis and Comparison of Classification Metrics.”
Flach, Hernández-Orallo, and Ferri. 2011. “A Coherent Interpretation of AUC as a Measure of Aggregated Classification Performance.” In Proceedings of the 28th International Conference on Machine Learning (ICML-11).
Fuchi, Zanashir, Minami, et al. 2023. “ResBit: Residual Bit Vector for Categorical Values.”
Gneiting, and Raftery. 2007. “Strictly Proper Scoring Rules, Prediction, and Estimation.” Journal of the American Statistical Association.
Gorodkin. 2004. “Comparing two K-category assignments by a K-category correlation coefficient.” Computational Biology and Chemistry.
Gozli. 2023. “Principles of Categorization: A Synthesis.” Seeds of Science.
Grathwohl, Swersky, Hashemi, et al. 2021. “Oops I Took A Gradient: Scalable Sampling for Discrete Distributions.”
Hand. 2009. “Measuring Classifier Performance: A Coherent Alternative to the Area Under the ROC Curve.” Machine Learning.
Hanley, and McNeil. 1983. “A Method of Comparing the Areas Under Receiver Operating Characteristic Curves Derived from the Same Cases.” Radiology.
Huang, Li, Macheret, et al. 2020. “A Tutorial on Calibration Measurements and Calibration Models for Clinical Prediction Models.” Journal of the American Medical Informatics Association : JAMIA.
Jung, Hero III, Mara, et al. 2016. “Semi-Supervised Learning via Sparse Label Propagation.” arXiv:1612.01414 [Cs, Stat].
Kim, Ramdas, Singh, et al. 2021. “Classification Accuracy as a Proxy for Two-Sample Testing.” The Annals of Statistics.
Lobo, Jiménez-Valverde, and Real. 2008. “AUC: A Misleading Measure of the Performance of Predictive Distribution Models.” Global Ecology and Biogeography.
Matthews. 1975. “Comparison of the Predicted and Observed Secondary Structure of T4 Phage Lysozyme.” Biochimica Et Biophysica Acta (BBA) - Protein Structure.
Menon, and Williamson. 2016. “Bipartite Ranking: A Risk-Theoretic Perspective.” Journal of Machine Learning Research.
“No Need for Ad-Hoc Substitutes: The Expected Cost Is a Principled All-Purpose Classification Metric.” 2024. Transactions on Machine Learning Research.
Nock, Menon, and Ong. 2016. “A Scaled Bregman Theorem with Applications.” arXiv:1607.00360 [Cs, Stat].
Polson, Scott, and Windle. 2013. “Bayesian Inference for Logistic Models Using Pólya–Gamma Latent Variables.” Journal of the American Statistical Association.
Powers. 2007. “Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness and Correlation.”
Provost, and Fawcett. 2001. “Robust Classification for Imprecise Environments.” Machine Learning.
Read, Pfahringer, Holmes, et al. 2021. “Classifier Chains: A Review and Perspectives.” Journal of Artificial Intelligence Research.
Reid, and Williamson. 2011. “Information, Divergence and Risk for Binary Experiments.” Journal of Machine Learning Research.
Spackman. 1989. “Signal Detection Theory: Valuable Tools for Evaluating Inductive Learning.” In Proceedings of the Sixth International Workshop on Machine Learning.
Suzuki. 2022. “Policy Implications of Statistical Estimates: A General Bayesian Decision-Theoretic Model for Binary Outcomes.” Statistics and Public Policy.
Tiao, Bonilla, and Ramos. 2018. “Cycle-Consistent Adversarial Learning as Approximate Bayesian Inference.”
Tiao, Klein, Seeger, et al. 2021. “BORE: Bayesian Optimization by Density-Ratio Estimation.” In Proceedings of the 38th International Conference on Machine Learning.
van den Goorbergh, van Smeden, Timmerman, et al. 2022. “The Harm of Class Imbalance Corrections for Risk Prediction Models: Illustration and Simulation Using Logistic Regression.” Journal of the American Medical Informatics Association.