Learning of manifolds

Also topological data analysis; other hip names to follow



πŸ—πŸ—πŸ—πŸ—πŸ—

I will restructure learning on manifolds and dimensionality reduction into a more useful distinction.

Berger, Daniels and Yu on manifolds in Genome search

As in β€” handling your high-dimensional, or graphical, data by trying to discover a low(er)-dimensional manifold that contains it. That is, inferring a hidden constraint that happens to have the form of a smooth surface of some low-ish dimension. Related: learning on manifolds, and, if you squint at it, learnable indexes.

There are a million different versions of this. Multidimensional scaling seems to be the oldest.
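Classical multidimensional scaling is simple enough to sketch from scratch: double-centre the squared-distance matrix and eigendecompose it. A minimal numpy sketch (the `classical_mds` helper is my own, not a library function):

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed points in k dimensions from an n x n matrix of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centred Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the k largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# For genuinely Euclidean distances, cMDS recovers the configuration
# exactly, up to rotation and reflection.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
Y = classical_mds(D, k=1)
```

For Euclidean input distances this is exact; for non-Euclidean dissimilarities, the discarded negative eigenvalues measure how non-Euclidean they were.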

Tangential aside: in dynamical systems we talk about creating a high-dimensional Takens embedding for state-space reconstruction of arbitrary nonlinear dynamics. I imagine there are some connections between learning the lower-dimensional manifold upon which your data lies, and the higher-dimensional manifold in which your data’s state space is naturally expressed. But I would not be the first person to notice this, so hopefully it’s been done for me somewhere?
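For concreteness, a delay embedding is just lagged copies of the series stacked as coordinates; a minimal sketch (the `delay_embed` helper here is hypothetical, not a library function):

```python
import numpy as np

def delay_embed(x, dim=3, lag=1):
    """Takens-style delay embedding: map a scalar series x to the points
    (x[t], x[t+lag], ..., x[t+(dim-1)*lag]) in R^dim."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# A sine wave embedded in 2-D traces out a closed loop: the 1-D
# manifold of the underlying oscillator's state.
t = np.linspace(0, 8 * np.pi, 1000)
emb = delay_embed(np.sin(t), dim=2, lag=25)
```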

See also kernel methods, and functional regression, which connects via diffusion maps (Ronald R. Coifman and Lafon 2006; R. R. Coifman et al. 2005).

See also information geometry, which uses manifolds also but one implied by the parameterisation of a parametric model.

To look at: ISOMAP, Locally linear embedding, spectral embeddings, diffusion maps, multidimensional scaling…

Bioinformatics is leading to some weird use of data manifolds; see for example Berger, Daniels, and Yu (2016) for the performance implications of knowing the manifold shape for *-omics search, using compressive manifold storage based on both fractal dimension and metric entropy concepts. Also suggestive connection with fitness landscape in evolution.
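The fractal-dimension side of that can at least be gestured at in code, via a crude Grassberger–Procaccia correlation-dimension estimate (a sketch only, not the authors' method; `correlation_dimension` is my own helper):

```python
import numpy as np

def correlation_dimension(X, r1, r2):
    """Crude correlation-dimension estimate: slope of log C(r) vs log r
    between two radii, where C(r) is the fraction of point pairs within r."""
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    n = len(X)
    def C(r):
        return (np.sum(D < r) - n) / (n * (n - 1))  # exclude self-pairs
    return (np.log(C(r2)) - np.log(C(r1))) / (np.log(r2) - np.log(r1))

# Points on a line embedded in 3-D: intrinsic dimension is 1, not 3.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 800)
X = np.column_stack([t, 2 * t, -t])
d_hat = correlation_dimension(X, 0.05, 0.2)  # close to 1
```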

Neural networks have some implicit manifolds, if you squint right; see Christopher Olah’s visual explanation of how, whose diagrams should be stolen by someone trying to explain VC dimension.

Berger, Daniels, and Yu (2016) argue:

Manifold learning algorithms have recently played a crucial role in unsupervised learning tasks such as clustering and nonlinear dimensionality reduction […] Many such algorithms have been shown to be equivalent to Kernel PCA (KPCA) with data dependent kernels, itself equivalent to performing classical multidimensional scaling (cMDS) in a high dimensional feature space (SchΓΆlkopf et al., 1998; Williams, 2002; Bengio et al., 2004). […] Recently, it has been observed that the majority of manifold learning algorithms can be expressed as a regularized loss minimization of a reconstruction matrix, followed by a singular value truncation (Neufeld et al., 2012)
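The cMDS end of that equivalence is easy to verify numerically: for Euclidean distances, double-centring the squared-distance matrix recovers the centred Gram matrix, so the cMDS coordinates coincide with PCA scores up to per-column sign flips. A quick sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
Xc = X - X.mean(axis=0)

# PCA scores via SVD of the centred data matrix
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
pca_scores = U[:, :2] * s[:2]

# cMDS: eigendecompose the double-centred squared-distance matrix
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J          # equals Xc @ Xc.T for Euclidean distances
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
mds_scores = V[:, idx] * np.sqrt(w[idx])

# The two embeddings agree up to per-column sign flips.
```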

Implementations

TTK

The Topology ToolKit (TTK) is an open-source library and software collection for topological data analysis in scientific visualization.

TTK can handle scalar data defined either on regular grids or triangulations, either in 2D or in 3D. It provides a substantial collection of generic, efficient and robust implementations of key algorithms in topological data analysis. It includes:

  • For scalar data: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, topological simplification;

  • For bivariate scalar data: fibers, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces;

  • For uncertain scalar data: mandatory critical points;

scikit-learn

scikit-learn implements a grab-bag of manifold-learning algorithms in its `sklearn.manifold` module, including Isomap, locally linear embedding, spectral embedding, MDS and t-SNE.
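For instance, a minimal Isomap run on the classic swiss roll (parameter values here are arbitrary):

```python
from sklearn import datasets, manifold

# 3-D points sampled from a rolled-up 2-D sheet
X, _ = datasets.make_swiss_roll(n_samples=500, random_state=0)

# Isomap: geodesic distances over a k-NN graph, then classical MDS
emb = manifold.Isomap(n_neighbors=10, n_components=2).fit_transform(X)
```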

tapkee

C++: Tapkee. Pro-tip β€” even without writing any code, tapkee performs a long list of nice dimensionality reductions from the CLI, some of which are explicitly manifold learners (and the rest are matrix factorisations, which is not so different):

  • Locally Linear Embedding and Kernel Locally Linear Embedding (LLE/KLLE)
  • Neighborhood Preserving Embedding (NPE)
  • Local Tangent Space Alignment (LTSA)
  • Linear Local Tangent Space Alignment (LLTSA)
  • Hessian Locally Linear Embedding (HLLE)
  • Laplacian eigenmaps
  • Locality Preserving Projections
  • Diffusion map
  • Isomap and landmark Isomap
  • Multidimensional scaling and landmark Multidimensional scaling (MDS/lMDS)
  • Stochastic Proximity Embedding (SPE)
  • PCA and randomized PCA
  • Kernel PCA (kPCA)
  • t-SNE
  • Barnes-Hut-SNE

References

Arjovsky, Martin, Soumith Chintala, and LΓ©on Bottou. 2017. β€œWasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23.
Aste, Tomaso, Ruggero Gramatica, and T Di Matteo. 2012. β€œExploring Complex Networks via Topological Embedding on Surfaces.” Physical Review E 86 (3): 036109.
Aswani, Anil, Peter Bickel, and Claire Tomlin. 2011. β€œRegression on Manifolds: Estimation of the Exterior Derivative.” The Annals of Statistics 39 (1): 48–81.
Belkin, Mikhail, and Partha Niyogi. 2003. β€œLaplacian Eigenmaps for Dimensionality Reduction and Data Representation.” Neural Computation 15 (6): 1373–96.
Bengio, Yoshua, Aaron Courville, and Pascal Vincent. 2013. β€œRepresentation Learning: A Review and New Perspectives.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35: 1798–828.
Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. β€œComputational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM 59 (8): 72–80.
Carlsson, Gunnar, Tigran Ishkhanov, Vin de Silva, and Afra Zomorodian. 2008. β€œOn the Local Behavior of Spaces of Natural Images.” International Journal of Computer Vision 76 (1): 1–12.
Chen, Boyuan, Kuang Huang, Sunand Raghupathi, Ishaan Chandratreya, Qiang Du, and Hod Lipson. 2022. β€œAutomated Discovery of Fundamental Variables Hidden in Experimental Data.” Nature Computational Science 2 (7): 433–42.
Chen, Minhua, J. Silva, J. Paisley, Chunping Wang, D. Dunson, and L. Carin. 2010. β€œCompressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds.” IEEE Transactions on Signal Processing 58 (12): 6140–55.
Chiu, Chih-Yi, Amorntip Prayoonwong, and Yin-Chih Liao. 2020. β€œLearning to Index for Nearest Neighbor Search.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (8): 1942–56.
Coifman, R. R., S. Lafon, A. B. Lee, M. Maggioni, B. Nadler, F. Warner, and S. W. Zucker. 2005. β€œGeometric Diffusions as a Tool for Harmonic Analysis and Structure Definition of Data: Diffusion Maps.” Proceedings of the National Academy of Sciences 102 (21): 7426–31.
Coifman, Ronald R., and StΓ©phane Lafon. 2006. β€œDiffusion Maps.” Applied and Computational Harmonic Analysis, Special Issue: Diffusion Maps and Wavelets, 21 (1): 5–30.
DeVore, Ronald A. 1998. β€œNonlinear Approximation.” Acta Numerica 7 (January): 51–150.
Diaconis, Persi, and David Freedman. 1984. β€œAsymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815.
β€”β€”β€”. 1986. β€œOn the Consistency of Bayes Estimates.” The Annals of Statistics 14 (1): 1–26.
Donoho, David L., and Carrie Grimes. 2003. β€œHessian Eigenmaps: Locally Linear Embedding Techniques for High-Dimensional Data.” Proceedings of the National Academy of Sciences 100 (10): 5591–96.
Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. β€œLearning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems, 473–80.
Gashler, Mike, and Tony Martinez. 2012. β€œRobust Manifold Learning with CycleCut.” Connection Science 24 (1): 57–69.
Hadsell, R., S. Chopra, and Y. LeCun. 2006. β€œDimensionality Reduction by Learning an Invariant Mapping.” In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2:1735–42.
Hall, Peter, and Ker-Chau Li. 1993. β€œOn Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics 21 (2): 867–89.
Hawe, S., M. Kleinsteuber, and K. Diepold. 2013. β€œAnalysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing 22 (6): 2138–50.
He, Xiaofei, and Partha Niyogi. 2003. β€œLocality Preserving Projections.” In Proceedings of the 16th International Conference on Neural Information Processing Systems, 16:153–60. NIPS’03. Cambridge, MA, USA: MIT Press.
Huckemann, Stephan F., Peter T. Kim, Ja-Yong Koo, and Axel Munk. 2010. β€œMΓΆbius Deconvolution on the Hyperbolic Plane with Application to Impedance Density Estimation.” The Annals of Statistics 38 (4): 2465–98.
Kemp, Charles, and Joshua B Tenenbaum. 2008. β€œThe Discovery of Structural Form.” Proceedings of the National Academy of Sciences 105 (31): 10687–92.
Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. β€œRandom Projections of Random Manifolds.” arXiv:1607.04331 [Cs, q-Bio, Stat], July.
Maaten, Laurens van der, and Geoffrey Hinton. 2008. β€œVisualizing Data Using t-SNE.” Journal of Machine Learning Research 9 (Nov): 2579–2605.
Moustafa, Karim Abou-, Dale Schuurmans, and Frank Ferrie. 2013. β€œLearning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning.” In Journal of Machine Learning Research, 341–56.
Mukherjee, Sayan, Qiang Wu, and Ding-Xuan Zhou. 2010. β€œLearning Gradients on Manifolds.” Bernoulli 16 (1): 181–207.
Roweis, Sam T., and Lawrence K. Saul. 2000. β€œNonlinear Dimensionality Reduction by Locally Linear Embedding.” Science 290 (5500): 2323–26.
Saul, Lawrence K., and Sam T. Roweis. 2003. β€œThink Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds.” The Journal of Machine Learning Research 4 (December): 119–55.
SchΓΆlkopf, Bernhard, Alexander Smola, and Klaus-Robert MΓΌller. 1997. β€œKernel Principal Component Analysis.” In Artificial Neural Networks β€” ICANN’97, edited by Wulfram Gerstner, Alain Germond, Martin Hasler, and Jean-Daniel Nicoud, 583–88. Lecture Notes in Computer Science. Springer Berlin Heidelberg.
β€”β€”β€”. 1998. β€œNonlinear Component Analysis as a Kernel Eigenvalue Problem.” Neural Computation 10 (5): 1299–319.
Shaw, Blake, and Tony Jebara. 2009. β€œStructure Preserving Embedding.” In Proceedings of the 26th Annual International Conference on Machine Learning, 937–44. ICML ’09. New York, NY, USA: ACM.
Shieh, Albert D., Tatsunori B. Hashimoto, and Edoardo M. Airoldi. 2011. β€œTree Preserving Embedding.” Proceedings of the National Academy of Sciences 108 (41): 16916–21.
Smola, Alex J., Robert C. Williamson, Sebastian Mika, and Bernhard SchΓΆlkopf. 1999. β€œRegularized Principal Manifolds.” In Computational Learning Theory, edited by Paul Fischer and Hans Ulrich Simon, 214–29. Lecture Notes in Computer Science 1572. Springer Berlin Heidelberg.
Song, Dongjin, and Dacheng Tao. 2010. β€œBiologically Inspired Feature Manifold for Scene Classification.” IEEE Transactions on Image Processing 19 (1): 174–84.
Steinke, Florian, and Matthias Hein. 2009. β€œNon-Parametric Regression Between Manifolds.” In Advances in Neural Information Processing Systems 21, 1561–68. Curran Associates, Inc.
Tenenbaum, Joshua B, Vin de Silva, and John C Langford. 2000. β€œA Global Geometric Framework for Nonlinear Dimensionality Reduction.” Science 290 (5500): 2319–23.
Wang, Boyue, Yongli Hu, Junbin Gao, Yanfeng Sun, Haoran Chen, and Baocai Yin. 2017. β€œLocality Preserving Projections for Grassmann Manifold.” In Proceedings of IJCAI 2017.
Weinberger, Kilian Q., Fei Sha, and Lawrence K. Saul. 2004. β€œLearning a Kernel Matrix for Nonlinear Dimensionality Reduction.” In Proceedings of the Twenty-First International Conference on Machine Learning, 106–6. ICML ’04. New York, NY, USA: ACM.
Williams, Christopher K. I. 2001. β€œOn a Connection Between Kernel PCA and Metric Multidimensional Scaling.” In Advances in Neural Information Processing Systems 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 46:675–81. MIT Press.
Wu, Qiang, Justin Guinney, Mauro Maggioni, and Sayan Mukherjee. 2010. β€œLearning Gradients: Predictive Models That Infer Geometry and Statistical Dependence.” The Journal of Machine Learning Research 11: 2175–98.
Yin, M., J. Gao, and Z. Lin. 2016. β€œLaplacian Regularized Low-Rank Representation and Its Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (3): 504–17.
Yu, Yaoliang, James Neufeld, Ryan Kiros, Xinhua Zhang, and Dale Schuurmans. 2012. β€œRegularizers Versus Losses for Nonlinear Dimensionality Reduction: A Factored View with New Convex Relaxations.” In ICML 2012.
Zhou, Tianyi, Dacheng Tao, and Xindong Wu. 2011. β€œManifold Elastic Net: A Unified Framework for Sparse Dimension Reduction.” Data Mining and Knowledge Discovery 22 (3): 340–71.
Zhu, Jun-Yan, Philipp KrΓ€henbΓΌhl, Eli Shechtman, and Alexei A. Efros. 2016. β€œGenerative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision.
