Getting a bunch of data points and approximating them (in some sense) as membership (possibly fuzzy) in some groups, or regions of feature space.
For certain definitions this can be the same thing as non-negative and/or low-rank matrix factorisation if you use mixture models, and is only really different in emphasis from dimensionality reduction. If you start with a list of features and then think about “distances” between observations, you have just implicitly induced a weighted graph from your hitherto non-graphy data and are now looking at a networks problem.
If you really care about clustering as such, spectral clustering feels least inelegant, if not fastest. Here is Chris Ding’s tutorial on spectral clustering.
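A minimal spectral bi-partition, to fix ideas: build a Gaussian affinity matrix from pairwise distances, form the unnormalised graph Laplacian, and threshold the sign of its Fiedler vector. The data, bandwidth, and two-cluster setup are all made up for illustration; real spectral clustering would run k-means on several eigenvectors.

```python
import numpy as np

# two well-separated synthetic blobs (hypothetical data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),
               rng.normal(3, 0.3, (20, 2))])

# Gaussian (RBF) affinity from squared pairwise distances; bandwidth 1 is arbitrary
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W          # unnormalised Laplacian L = D - W
vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)   # sign of the Fiedler vector splits the graph
```

For two clusters the sign pattern of the second-smallest eigenvector is already the partition; with k clusters one embeds into the first k eigenvectors and clusters there.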
CONCOR induces a cute similarity measure.
MCL: Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.
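The MCL loop is simple enough to sketch in a few lines of NumPy: alternate expansion (matrix squaring, letting flow spread) with inflation (entrywise powering plus renormalisation, favouring strong currents) until flow across weak links dies out. The toy graph, loop count, and inflation power here are illustrative choices, not the mcl tool's defaults, and the real implementation adds pruning and sparse storage.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2),      # triangle one
         (3, 4), (3, 5), (4, 5),      # triangle two
         (2, 3)]                      # a single bridge edge
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

M = A + np.eye(n)                     # self-loops keep the walk aperiodic
M = M / M.sum(axis=0)                 # column-stochastic flow matrix
for _ in range(20):
    M = M @ M                         # expansion: let flow spread
    M = M ** 2                        # inflation with power r = 2
    M = M / M.sum(axis=0)             # renormalise columns

cross_flow = M[:3, 3:].sum() + M[3:, :3].sum()   # residual flow over the bridge
```

After a handful of iterations the flow across the bridge edge is numerically zero, so the two triangles fall out as separate clusters.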
There are many useful tricks in here. For example, Belkin and Niyogi (2003) show how to use a graph Laplacian (possibly a contrived or arbitrary one) to construct “natural” Euclidean coordinates for your data, such that nodes with much traffic between them in the Laplacian representation have a small Euclidean distance (the “Urban Traffic Planner Fantasy Transformation”). This quickly gives you a similarity measure on really non-Euclidean data. Questions: under which metrics is this equivalent to multidimensional scaling? Is it worthwhile going the other way, constructing density estimates from induced flow graphs?
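The Belkin–Niyogi idea in its crudest form, on a hand-built path graph: take the smallest non-trivial eigenvectors of the unnormalised graph Laplacian as Euclidean coordinates, so well-connected nodes land close together. (The paper actually solves the generalised problem Lv = λDv and builds the graph from data neighbourhoods; this toy graph is an assumption for illustration.)

```python
import numpy as np

n = 10                                # nodes of a path graph 0-1-...-9
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0   # unit weight on each path edge

L = np.diag(W.sum(axis=1)) - W        # unnormalised Laplacian L = D - W
vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
coords = vecs[:, 1:3]                 # 2-D embedding; skip the constant eigenvector

# adjacent nodes should sit closer together than the two path endpoints
near = np.linalg.norm(coords[0] - coords[1])
far = np.linalg.norm(coords[0] - coords[9])
```

The embedding unrolls the path: Euclidean distance in `coords` tracks how much diffusive traffic connects two nodes, which is exactly the advertised similarity measure.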
Clustering as matrix factorisation
If I know me, I might be looking at this page trying to remember which papers situate k-means-type clustering in the matrix factorisation literature.
The single-serve paper doing that is Bauckhage (2015), but there are broader versions in (Singh and Gordon 2008; Türkmen 2015), some computer science connections in Mixon, Villar, and Ward (2016), and an older one in Zass and Shashua (2005).
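Bauckhage's observation can be checked numerically in a few lines: the k-means objective is a constrained factorisation X ≈ ZM, where Z is a binary cluster-indicator matrix and M the centroid matrix, so the within-cluster sum of squares equals the Frobenius residual of the factorisation. Data and labels below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))           # toy data, six points in the plane
labels = np.array([0, 0, 0, 1, 1, 1]) # an assumed cluster assignment
k = 2

Z = np.eye(k)[labels]                 # n-by-k one-hot indicator matrix
M = np.linalg.pinv(Z) @ X             # row c is the mean of cluster c

# classic within-cluster sum of squares...
wcss = sum(((X[labels == c] - M[c]) ** 2).sum() for c in range(k))
# ...equals the factorisation residual
frob = ((X - Z @ M) ** 2).sum()
```

Since Z has orthogonal columns, its pseudoinverse averages within clusters, so minimising over M recovers the centroids and minimising over binary Z is exactly the k-means assignment step.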
Further things I might discuss here are the graph-flow/Laplacian notions of clustering and the density/centroid approach. I will discuss those under mixture models.
Auvolat, Alex, and Pascal Vincent. 2015. “Clustering Is Efficient for Approximate Maximum Inner Product Search,” July. http://arxiv.org/abs/1507.05910.
Bach, Francis R., and Michael I. Jordan. 2006. “Learning Spectral Clustering, with Application to Speech Separation.” Journal of Machine Learning Research 7 (Oct): 1963–2001. http://www.jmlr.org/papers/v7/bach06b.html.
Batson, Joshua, Daniel A. Spielman, and Nikhil Srivastava. 2008. “Twice-Ramanujan Sparsifiers,” August. http://arxiv.org/abs/0808.0163.
Bauckhage, Christian. 2015. “K-Means Clustering Is Matrix Factorization,” December. http://arxiv.org/abs/1512.07548.
Belkin, Mikhail, and Partha Niyogi. 2003. “Laplacian Eigenmaps for Dimensionality Reduction and Data Representation.” Neural Computation 15 (6): 1373–96. https://doi.org/10.1162/089976603321780317.
Clauset, Aaron. 2005. “Finding Local Community Structure in Networks.”
Clauset, Aaron, Mark E J Newman, and Cristopher Moore. 2004. “Finding Community Structure in Very Large Networks.” Physical Review E 70 (6): 066111. https://doi.org/10.1103/PhysRevE.70.066111.
Ding, C., X. He, and H. Simon. 2005. “On the Equivalence of Nonnegative Matrix Factorization and Spectral Clustering.” In Proceedings of the 2005 SIAM International Conference on Data Mining, 606–10. Proceedings. Society for Industrial and Applied Mathematics. http://ranger.uta.edu/~chqding/papers/NMF-SDM2005.pdf.
Donoho, David L., and Carrie Grimes. 2003. “Hessian Eigenmaps: Locally Linear Embedding Techniques for High-Dimensional Data.” Proceedings of the National Academy of Sciences 100 (10): 5591–6. https://doi.org/10.1073/pnas.1031596100.
Dueck, Delbert, Quaid D. Morris, and Brendan J. Frey. 2005. “Multi-Way Clustering of Microarray Data Using Probabilistic Sparse Matrix Factorization.” Bioinformatics 21 (suppl 1): i144–i151. https://doi.org/10.1093/bioinformatics/bti1041.
Elhamifar, E., and R. Vidal. 2013. “Sparse Subspace Clustering: Algorithm, Theory, and Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (11): 2765–81. https://doi.org/10.1109/TPAMI.2013.57.
Fung, Wai Shing, Ramesh Hariharan, Nicholas J. A. Harvey, and Debmalya Panigrahi. 2011. “A General Framework for Graph Sparsification.” In Proceedings of the Forty-Third Annual ACM Symposium on Theory of Computing, 71–80. STOC ’11. New York, NY, USA: ACM. https://doi.org/10.1145/1993636.1993647.
Hallac, David, Jure Leskovec, and Stephen Boyd. 2015. “Network Lasso: Clustering and Optimization in Large Graphs,” July. https://doi.org/10.1145/2783258.2783313.
He, Xiaofei, and Partha Niyogi. 2003. “Locality Preserving Projections.” In Proceedings of the 16th International Conference on Neural Information Processing Systems, 16:153–60. NIPS’03. Cambridge, MA, USA: MIT Press. https://papers.nips.cc/paper/2359-locality-preserving-projections.pdf.
Huang, G., M. Kaess, and J. J. Leonard. 2013. “Consistent Sparsification for Graph Optimization.” In 2013 European Conference on Mobile Robots (ECMR), 150–57. https://doi.org/10.1109/ECMR.2013.6698835.
Keogh, Eamonn, and Jessica Lin. 2004. “Clustering of Time-Series Subsequences Is Meaningless: Implications for Previous and Future Research.” Knowledge and Information Systems 8 (2): 154–77. https://doi.org/10.1007/s10115-004-0172-7.
Luxburg, Ulrike von. 2007. “A Tutorial on Spectral Clustering.” Statistics and Computing 17 (4): 395–416. https://doi.org/10.1007/s11222-007-9033-z.
Masuda, Naoki, Mason A. Porter, and Renaud Lambiotte. 2016. “Random Walks and Diffusion on Networks,” December. http://arxiv.org/abs/1612.03281.
Mixon, Dustin G., Soledad Villar, and Rachel Ward. 2016. “Clustering Subgaussian Mixtures by Semidefinite Programming,” February. http://arxiv.org/abs/1602.06612.
Mohler, George. 2013. “Modeling and Estimation of Multi-Source Clustering in Crime and Security Data.” The Annals of Applied Statistics 7 (3): 1525–39. https://doi.org/10.1214/13-AOAS647.
Newman, Mark E J. 2004. “Detecting Community Structure in Networks.” The European Physical Journal B - Condensed Matter and Complex Systems 38 (2): 321–30. https://doi.org/10.1140/epjb/e2004-00124-y.
Peng, J., and Y. Wei. 2007. “Approximating K‐means‐type Clustering via Semidefinite Programming.” SIAM Journal on Optimization 18 (1): 186–205. https://doi.org/10.1137/050641983.
Pourkamali-Anaraki, Farhad, and Stephen Becker. 2016a. “A Randomized Approach to Efficient Kernel Clustering,” August. http://arxiv.org/abs/1608.07597.
———. 2016b. “Randomized Clustered Nystrom for Large-Scale Kernel Machines,” December. http://arxiv.org/abs/1612.06470.
Schaeffer, S E. 2007. “Graph Clustering.” Computer Science Review 1 (1): 27–64. https://doi.org/10.1016/j.cosrev.2007.05.001.
Schölkopf, Bernhard, Phil Knirsch, Alex Smola, and Chris Burges. 1998. “Fast Approximation of Support Vector Kernel Expansions, and an Interpretation of Clustering as Approximation in Feature Spaces.” In Mustererkennung 1998, edited by Paul Levi, Michael Schanz, Rolf-Jürgen Ahlers, and Franz May, 125–32. Informatik Aktuell. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-72282-0_12.
Shamir, Ohad, and Naftali Tishby. 2011. “Spectral Clustering on a Budget.” Journal of Machine Learning Research. http://jmlr.org/proceedings/papers/v15/shamir11a.html.
Singh, Ajit P., and Geoffrey J. Gordon. 2008. “A Unified View of Matrix Factorization Models.” In Machine Learning and Knowledge Discovery in Databases, 358–73. Springer. https://www.select.cs.cmu.edu/publications/paperdir/ecml2008-singh-gordon.pdf.
Slonim, Noam, Gurinder S Atwal, Gašper Tkačik, and William Bialek. 2005. “Information-Based Clustering.” Proceedings of the National Academy of Sciences of the United States of America 102: 18297–18302. https://doi.org/10.1073/pnas.0507432102.
Spielman, Daniel A., and Shang-Hua Teng. 2004. “Nearly-Linear Time Algorithms for Graph Partitioning, Graph Sparsification, and Solving Linear Systems.” In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, 81–90. STOC ’04. New York, NY, USA: ACM. https://doi.org/10.1145/1007352.1007372.
———. 2008. “A Local Clustering Algorithm for Massive Graphs and Its Application to Nearly-Linear Time Graph Partitioning,” September. http://arxiv.org/abs/0809.3232.
Spielman, D., and N. Srivastava. 2011. “Graph Sparsification by Effective Resistances.” SIAM Journal on Computing 40 (6): 1913–26. https://doi.org/10.1137/080734029.
Steyvers, Mark, and Joshua B. Tenenbaum. 2005. “The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth.” Cognitive Science 29 (1): 41–78. https://doi.org/10.1207/s15516709cog2901_3.
Tschannen, Michael, and Helmut Bölcskei. 2016. “Noisy Subspace Clustering via Matching Pursuits,” December. http://arxiv.org/abs/1612.03450.
Türkmen, Ali Caner. 2015. “A Review of Nonnegative Matrix Factorization Methods for Clustering,” July. http://arxiv.org/abs/1507.03194.
Yan, Donghui, Ling Huang, and Michael I. Jordan. 2009. “Fast Approximate Spectral Clustering.” In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 907–16. KDD ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1557019.1557118.
Zass, Ron, and Amnon Shashua. 2005. “A Unifying Approach to Hard and Probabilistic Clustering.” In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 - Volume 01, 294–301. ICCV ’05. Washington, DC, USA: IEEE Computer Society. https://doi.org/10.1109/ICCV.2005.27.
Zhang, Zhongyuan, Chris Ding, Tao Li, and Xiangsun Zhang. 2007. “Binary Matrix Factorization with Applications.” In Seventh IEEE International Conference on Data Mining, 2007. ICDM 2007, 391–400. IEEE. https://doi.org/10.1109/ICDM.2007.99.