# Clustering

May 23, 2015 — June 7, 2016

Getting a bunch of data points and approximating them (in some sense) by their membership (possibly fuzzy) in some groups, or regions of feature space. Quantization, in other words.

For certain definitions this can be the same thing as non-negative and/or low-rank matrix factorisations if you use mixture models, and is only really different in *emphasis* from dimensionality reduction. If you start with a list of features and then think about “distances” between observations, you have implicitly induced a weighted graph from your hitherto non-graphy data and are now looking at a networks problem.
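To make that implicit graph explicit: a minimal sketch (my own illustration, not from any particular paper) that turns a feature matrix into a weighted graph by passing pairwise Euclidean distances through a Gaussian kernel; the bandwidth `sigma` is an arbitrary assumed parameter.

```python
import numpy as np

def similarity_graph(X, sigma=1.0):
    """Turn an (n, d) feature matrix into a dense weighted graph:
    edge weight = Gaussian kernel of the pairwise Euclidean distance."""
    sq_norms = (X ** 2).sum(axis=1)
    # squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    d2 = np.maximum(d2, 0.0)            # clip tiny negatives from rounding
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)            # no self-loops
    return W

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W = similarity_graph(X)
# nearby points get edge weight near 1; distant pairs get weight near 0
```

Everything graph-flavoured below (spectral clustering, MCL, Laplacian eigenmaps) can start from a matrix like this `W`.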

If you care about clustering as such, spectral clustering feels like a nice entry point, maybe via Chris Ding’s tutorial on spectral clustering.

`CONCOR`

:   induces a cute similarity measure.

`MCL`

:   Markov Cluster Algorithm, a fast and scalable unsupervised clustering algorithm for graphs (also known as networks), based on simulation of (stochastic) flow in graphs.
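The flow simulation in MCL boils down to alternating two matrix operations. A minimal sketch (parameter names and the toy graph are my own; a real implementation would use sparse matrices and pruning):

```python
import numpy as np

def mcl(A, expansion=2, inflation=2.0, iters=50, tol=1e-8):
    """Minimal Markov Cluster sketch: alternate expansion (matrix power,
    flow spreads along paths) and inflation (elementwise power plus column
    renormalisation, strong flow wins) on a column-stochastic matrix."""
    M = A + np.eye(A.shape[0])                     # self-loops stabilise it
    M = M / M.sum(axis=0, keepdims=True)
    for _ in range(iters):
        M_prev = M
        M = np.linalg.matrix_power(M, expansion)   # expansion step
        M = M ** inflation                         # inflation step
        M = M / M.sum(axis=0, keepdims=True)
        if np.abs(M - M_prev).max() < tol:
            break
    # attractor rows (rows retaining flow) collect their columns as clusters
    clusters = {}
    for i in np.flatnonzero(M.max(axis=1) > 1e-6):
        clusters[i] = set(np.flatnonzero(M[i] > 1e-6))
    return clusters

# two triangles joined by a single bridge edge: nodes 0-2 and 3-5
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
clusters = mcl(A)
# flow concentrates within each triangle; the bridge is severed
```

Inflation is the clustering knob: higher values cut the graph into more, tighter clusters.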

There are many useful tricks in here. E.g. Belkin and Niyogi (2003) show how to use a graph Laplacian (possibly a contrived or arbitrary one) to construct “natural” Euclidean coordinates for your data, such that nodes that have much traffic between them in the Laplacian representation have a small Euclidean distance (the “Urban Traffic Planner Fantasy Transformation”). This quickly gives you a similarity measure on non-Euclidean data. Questions: under which metrics is this equivalent to multidimensional scaling? Is it worthwhile going the other way and constructing density estimates from induced flow graphs?
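A sketch of that coordinate construction, in the Belkin–Niyogi spirit (assuming the simplest variant: the unnormalised Laplacian `L = D - W` and its bottom nontrivial eigenvectors; the toy graph is my own):

```python
import numpy as np

def laplacian_eigenmap(W, dim=2):
    """Embed a weighted graph in Euclidean space: eigenvectors of the
    unnormalised graph Laplacian L = D - W for the smallest nonzero
    eigenvalues give coordinates in which well-connected nodes land
    close together."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    # skip the trivial constant eigenvector (eigenvalue ~ 0)
    return vecs[:, 1:1 + dim]

# two 3-cliques joined by one weak edge
W = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[u, v] = W[v, u] = 1.0
W[2, 3] = W[3, 2] = 0.1
Y = laplacian_eigenmap(W, dim=1)
# within-clique embedded distances are small; across the weak edge, large
```

With `dim=1` this is just the Fiedler vector, i.e. the workhorse of spectral bi-partitioning, which is one way of seeing why spectral clustering and Laplacian eigenmaps are near neighbours.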

## 1 Clustering as matrix factorisation

If I know me, I might be looking at this page trying to remember which papers situate k-means-type clustering in the matrix factorisation literature.

The single-serve paper doing that is Bauckhage (2015), but there are broader versions (Singh and Gordon 2008; Türkmen 2015), some computer science connections in Mixon, Villar, and Ward (2016), and an older one in Zass and Shashua (2005).
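The core identity is compact enough to show directly: Lloyd's k-means is alternating minimisation of $\|X - WH\|_F^2$ where $W$ has one-hot rows (memberships) and $H$ holds centroids. A minimal sketch of that framing (my own illustration of the idea in papers like Bauckhage (2015), not their code; the toy data is assumed):

```python
import numpy as np

def kmeans_as_factorisation(X, k, iters=20, seed=0):
    """Lloyd's algorithm as alternating least squares on ||X - W H||_F^2,
    where W (n x k) has one-hot rows (cluster memberships) and
    H (k x d) holds the centroids."""
    rng = np.random.default_rng(seed)
    H = X[rng.choice(len(X), size=k, replace=False)]   # centroid init
    for _ in range(iters):
        # fix H, solve for one-hot W: assign each row to nearest centroid
        d2 = ((X[:, None, :] - H[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        W = np.eye(k)[labels]
        # fix W, solve for H: least squares gives the cluster means
        counts = W.sum(axis=0)
        H = (W.T @ X) / np.maximum(counts, 1)[:, None]
    return labels, H

# two well-separated blobs of distinct points
X = np.vstack([np.arange(10).reshape(5, 2) * 0.01,
               10.0 + np.arange(10).reshape(5, 2) * 0.01])
labels, H = kmeans_as_factorisation(X, k=2)
```

Relaxing the one-hot constraint on $W$ to non-negativity is one road from here to the NMF and mixture-model connections cited above.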

Further things I might discuss here are the graph-flow/Laplacian notions of clustering and the density/centroid approach; I discuss the latter under mixture models.

## 2 References

*arXiv:1507.05910 [Cs, Stat]*.

*Journal of Machine Learning Research*.

*arXiv:0808.0163 [Cs]*.

*arXiv:1512.07548 [Stat]*.

*Neural Computation*.

*Physical Review E*.

*Proceedings of the 2005 SIAM International Conference on Data Mining*. Proceedings.

*Proceedings of the National Academy of Sciences*.

*Bioinformatics*.

*IEEE Transactions on Pattern Analysis and Machine Intelligence*.

*Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing*. STOC ’11.

*arXiv:1507.00280 [Cs, Math, Stat]*.

*Proceedings of the 16th International Conference on Neural Information Processing Systems*. NIPS’03.

*2013 European Conference on Mobile Robots (ECMR)*.

*Knowledge and Information Systems*.

*arXiv:1612.03281 [Cond-Mat, Physics:physics]*.

*arXiv:1602.06612 [Cs, Math, Stat]*.

*The Annals of Applied Statistics*.

*The European Physical Journal B - Condensed Matter and Complex Systems*.

*SIAM Journal on Optimization*.

*arXiv:1608.07597 [Stat]*.

*arXiv:1612.06470 [Cs, Stat]*.

*Computer Science Review*.

*Mustererkennung 1998*. Informatik Aktuell.

*Journal of Machine Learning Research*.

*Machine Learning and Knowledge Discovery in Databases*.

*Proceedings of the National Academy of Sciences of the United States of America*.

*SIAM Journal on Computing*.

*Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing*. STOC ’04.

*arXiv:0809.3232 [Cs]*.

*Cognitive Science*.

*arXiv:1612.03450 [Cs, Math, Stat]*.

*arXiv:1507.03194 [Cs, Stat]*.

*Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining*. KDD ’09.

*Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 - Volume 01*. ICCV ’05.

*Seventh IEEE International Conference on Data Mining, 2007. ICDM 2007*.