Learning of manifolds

Also topological data analysis; other hip names to follow

August 19, 2014 β€” June 23, 2020

πŸ—πŸ—πŸ—πŸ—πŸ—

I will restructure the learning-on-manifolds and dimensionality-reduction notes around a more useful distinction.

Figure 1: Berger, Daniels and Yu on manifolds in Genome search

As in: handling your high-dimensional, or graphical, data by trying to discover a low(er)-dimensional manifold that contains it. That is, inferring a hidden constraint that happens to have the form of a smooth surface of some low-ish dimension. Related: learning on manifolds and, if you squint at it, learnable indexes.

There are a million different versions of this. Multidimensional scaling seems to be the oldest.

Tangential aside: in dynamical systems we talk about constructing a high-dimensional Takens embedding for state-space reconstruction of arbitrary nonlinear dynamics. I imagine there are connections between learning the lower-dimensional manifold upon which your data lies and the higher-dimensional manifold in which your data’s state space is naturally expressed. But I would not be the first person to notice this, so hopefully it has been done for me somewhere?
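For concreteness, here is a minimal delay-embedding sketch in Python (the function name and parameters are my own, purely illustrative):

```python
import numpy as np

def delay_embed(x, dim=3, lag=1):
    """Takens-style delay embedding of a scalar time series.

    Row t of the output is (x[t], x[t + lag], ..., x[t + (dim - 1) * lag]),
    i.e. the reconstructed state-space coordinates at time t.
    """
    x = np.asarray(x)
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("series too short for this (dim, lag)")
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])

# e.g. unroll a noisy sine wave into a 3-d reconstructed state space
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.01 * np.random.randn(t.size)
X = delay_embed(x, dim=3, lag=25)  # X.shape == (1950, 3)
```

The reconstructed points then lie (approximately) on a low-dimensional attractor inside the embedding space, which is exactly the kind of object the manifold learners below try to recover.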

See also kernel methods and functional regression, which connect to this area via diffusion maps (Ronald R. Coifman and Lafon 2006; R. R. Coifman et al. 2005).
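A minimal diffusion-map sketch, following the construction in Coifman and Lafon (2006) up to normalisation conventions (names and defaults here are mine):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Gaussian affinities -> Markov normalisation -> spectral embedding."""
    D2 = squareform(pdist(X, "sqeuclidean"))
    K = np.exp(-D2 / eps)                     # Gaussian kernel
    d = K.sum(axis=1)                         # degree / density estimate
    # Symmetric conjugate of the transition matrix P = D^-1 K,
    # so a symmetric eigensolver can be used: A = D^-1/2 K D^-1/2.
    A = K / np.sqrt(np.outer(d, d))
    vals, vecs = eigh(A)                      # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]    # largest first
    psi = vecs / np.sqrt(d)[:, None]          # right eigenvectors of P
    # Drop the trivial constant eigenvector, scale by eigenvalue^t.
    return psi[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```

In practice the bandwidth eps and the density-normalisation exponent (the α in Coifman and Lafon) matter a lot for which geometry you recover; this sketch fixes α = 0.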

See also information geometry, which also uses manifolds, but ones implied by the parameterisation of a parametric model.

To look at: ISOMAP, Locally linear embedding, spectral embeddings, diffusion maps, multidimensional scaling…

Bioinformatics is leading to some weird uses of data manifolds; see, for example, Berger, Daniels, and Yu (2016) on the performance implications of knowing the manifold shape for *-omics search, using compressive manifold storage based on both fractal-dimension and metric-entropy concepts. There is also a suggestive connection with fitness landscapes in evolution.

Neural networks have some implicit manifolds, if you squint right; see Christopher Olah’s visual explanation of how, whose diagrams should be stolen by someone trying to explain VC dimension.

Berger, Daniels, and Yu (2016) argue:

Manifold learning algorithms have recently played a crucial role in unsupervised learning tasks such as clustering and nonlinear dimensionality reduction […] Many such algorithms have been shown to be equivalent to Kernel PCA (KPCA) with data dependent kernels, itself equivalent to performing classical multidimensional scaling (cMDS) in a high dimensional feature space (SchΓΆlkopf et al., 1998; Williams, 2002; Bengio et al., 2004). […] Recently, it has been observed that the majority of manifold learning algorithms can be expressed as a regularized loss minimization of a reconstruction matrix, followed by a singular value truncation (Neufeld et al., 2012)
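The KPCA/cMDS equivalence they mention is, concretely, just double-centring a kernel (Gram) matrix and truncating its eigendecomposition; a toy sketch (my own naming):

```python
import numpy as np

def cmds_from_gram(K, n_components=2):
    """Classical MDS on a Gram/kernel matrix: double-centre, keep the top eigenpairs."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = J @ K @ J                            # double-centred kernel
    vals, vecs = np.linalg.eigh(B)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # largest eigenvalues first
    pos = np.clip(vals[:n_components], 0.0, None)
    return vecs[:, :n_components] * np.sqrt(pos)

# With a linear kernel K = X X^T this recovers ordinary PCA scores (up to sign);
# swapping in a data-dependent kernel gives the various manifold learners.
X = np.random.randn(100, 5)
Y = cmds_from_gram(X @ X.T, n_components=2)
```

The "regularized loss minimization plus singular value truncation" view (Yu et al. 2012) generalises the same recipe: build a data-dependent reconstruction matrix, then truncate its spectrum.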

1 Implementations

1.1 TTK


The Topology ToolKit (TTK) is an open-source library and software collection for topological data analysis in scientific visualization.

TTK can handle scalar data defined either on regular grids or triangulations, either in 2D or in 3D. It provides a substantial collection of generic, efficient and robust implementations of key algorithms in topological data analysis. It includes:

  • For scalar data: critical points, integral lines, persistence diagrams, persistence curves, merge trees, contour trees, Morse-Smale complexes, topological simplification;

  • For bivariate scalar data: fibers, fiber surfaces, continuous scatterplots, Jacobi sets, Reeb spaces;

  • For uncertain scalar data: mandatory critical points;

1.2 scikit-learn

scikit-learn implements a grab-bag of these algorithms (Isomap, locally linear embedding variants, spectral embedding, MDS, t-SNE) in its manifold module.
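For example, a minimal run over scikit-learn’s built-in S-curve dataset (standard sklearn.manifold API):

```python
from sklearn import datasets, manifold

# A 2-d manifold (an S-shaped sheet) embedded in 3-d.
X, color = datasets.make_s_curve(n_samples=1500, noise=0.05, random_state=0)

estimators = {
    "isomap": manifold.Isomap(n_neighbors=10, n_components=2),
    "lle": manifold.LocallyLinearEmbedding(n_neighbors=10, n_components=2),
    "spectral": manifold.SpectralEmbedding(n_neighbors=10, n_components=2),
    "tsne": manifold.TSNE(n_components=2, perplexity=30, random_state=0),
}
# Each fit_transform returns an (n_samples, 2) array of manifold coordinates.
embeddings = {name: est.fit_transform(X) for name, est in estimators.items()}
```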

1.3 tapkee

C++: Tapkee. Pro-tip: even without coding, tapkee performs a long list of nice dimensionality reductions from the CLI, some of which are explicitly manifold learners (and the rest are matrix factorisations, which is not so different):

  • Locally Linear Embedding and Kernel Locally Linear Embedding (LLE/KLLE)
  • Neighborhood Preserving Embedding (NPE)
  • Local Tangent Space Alignment (LTSA)
  • Linear Local Tangent Space Alignment (LLTSA)
  • Hessian Locally Linear Embedding (HLLE)
  • Laplacian eigenmaps
  • Locality Preserving Projections
  • Diffusion map
  • Isomap and landmark Isomap
  • Multidimensional scaling and landmark Multidimensional scaling (MDS/lMDS)
  • Stochastic Proximity Embedding (SPE)
  • PCA and randomized PCA
  • Kernel PCA (kPCA)
  • t-SNE
  • Barnes-Hut-SNE

2 References

Arjovsky, Chintala, and Bottou. 2017. β€œWasserstein Generative Adversarial Networks.” In International Conference on Machine Learning.
Aste, Gramatica, and Di Matteo. 2012. β€œExploring Complex Networks via Topological Embedding on Surfaces.” Physical Review E.
Aswani, Bickel, and Tomlin. 2011. β€œRegression on Manifolds: Estimation of the Exterior Derivative.” The Annals of Statistics.
Belkin, and Niyogi. 2003. β€œLaplacian Eigenmaps for Dimensionality Reduction and Data Representation.” Neural Computation.
Bengio, Courville, and Vincent. 2013. β€œRepresentation Learning: A Review and New Perspectives.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Berger, Daniels, and Yu. 2016. β€œComputational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM.
Carlsson, Ishkhanov, Silva, et al. 2008. β€œOn the Local Behavior of Spaces of Natural Images.” International Journal of Computer Vision.
Chen, Boyuan, Huang, Raghupathi, et al. 2022. β€œAutomated Discovery of Fundamental Variables Hidden in Experimental Data.” Nature Computational Science.
Chen, Minhua, Silva, Paisley, et al. 2010. β€œCompressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds.” IEEE Transactions on Signal Processing.
Chiu, Prayoonwong, and Liao. 2020. β€œLearning to Index for Nearest Neighbor Search.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Coifman, Ronald R., and Lafon. 2006. β€œDiffusion Maps.” Applied and Computational Harmonic Analysis, Special Issue: Diffusion Maps and Wavelets.
Coifman, R. R., Lafon, Lee, et al. 2005. β€œGeometric Diffusions as a Tool for Harmonic Analysis and Structure Definition of Data: Diffusion Maps.” Proceedings of the National Academy of Sciences.
DeVore. 1998. β€œNonlinear Approximation.” Acta Numerica.
Diaconis, and Freedman. 1984. β€œAsymptotics of Graphical Projection Pursuit.” The Annals of Statistics.
β€”β€”β€”. 1986. β€œOn the Consistency of Bayes Estimates.” The Annals of Statistics.
Donoho, and Grimes. 2003. β€œHessian Eigenmaps: Locally Linear Embedding Techniques for High-Dimensional Data.” Proceedings of the National Academy of Sciences.
Freund, Dasgupta, Kabra, et al. 2007. β€œLearning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems.
Gashler, and Martinez. 2012. β€œRobust Manifold Learning with CycleCut.” Connection Science.
Hadsell, Chopra, and LeCun. 2006. β€œDimensionality Reduction by Learning an Invariant Mapping.” In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Hall, and Li. 1993. β€œOn Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics.
Hawe, Kleinsteuber, and Diepold. 2013. β€œAnalysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing.
He, and Niyogi. 2003. β€œLocality Preserving Projections.” In Proceedings of the 16th International Conference on Neural Information Processing Systems. NIPS’03.
Huckemann, Kim, Koo, et al. 2010. β€œMΓΆbius Deconvolution on the Hyperbolic Plane with Application to Impedance Density Estimation.” The Annals of Statistics.
Kemp, and Tenenbaum. 2008. β€œThe Discovery of Structural Form.” Proceedings of the National Academy of Sciences.
Lahiri, Gao, and Ganguli. 2016. β€œRandom Projections of Random Manifolds.” arXiv:1607.04331 [Cs, q-Bio, Stat].
Moustafa, Schuurmans, and Ferrie. 2013. β€œLearning a Metric Space for Neighbourhood Topology Estimation: Application to Manifold Learning.” In Journal of Machine Learning Research.
Mukherjee, Wu, and Zhou. 2010. β€œLearning Gradients on Manifolds.” Bernoulli.
Roweis, and Saul. 2000. β€œNonlinear Dimensionality Reduction by Locally Linear Embedding.” Science.
Saul, and Roweis. 2003. β€œThink Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds.” The Journal of Machine Learning Research.
SchΓΆlkopf, Smola, and MΓΌller. 1997. β€œKernel Principal Component Analysis.” In Artificial Neural Networks β€” ICANN’97. Lecture Notes in Computer Science.
β€”β€”β€”. 1998. β€œNonlinear Component Analysis as a Kernel Eigenvalue Problem.” Neural Computation.
Shaw, and Jebara. 2009. β€œStructure Preserving Embedding.” In Proceedings of the 26th Annual International Conference on Machine Learning. ICML ’09.
Shieh, Hashimoto, and Airoldi. 2011. β€œTree Preserving Embedding.” Proceedings of the National Academy of Sciences.
Smola, Williamson, Mika, et al. 1999. β€œRegularized Principal Manifolds.” In Computational Learning Theory. Lecture Notes in Computer Science 1572.
Song, and Tao. 2010. β€œBiologically Inspired Feature Manifold for Scene Classification.” IEEE Transactions on Image Processing.
Steinke, and Hein. 2009. β€œNon-Parametric Regression Between Manifolds.” In Advances in Neural Information Processing Systems 21.
Tenenbaum, de Silva, and Langford. 2000. β€œA Global Geometric Framework for Nonlinear Dimensionality Reduction.” Science.
van der Maaten, and Hinton. 2008. β€œVisualizing Data Using t-SNE.” Journal of Machine Learning Research.
Wang, Hu, Gao, et al. 2017. β€œLocality Preserving Projections for Grassmann Manifold.” In Proceedings of IJCAI 2017.
Weinberger, Sha, and Saul. 2004. β€œLearning a Kernel Matrix for Nonlinear Dimensionality Reduction.” In Proceedings of the Twenty-First International Conference on Machine Learning. ICML ’04.
Williams. 2001. β€œOn a Connection Between Kernel PCA and Metric Multidimensional Scaling.” In Advances in Neural Information Processing Systems 13.
Wu, Guinney, Maggioni, et al. 2010. β€œLearning Gradients: Predictive Models That Infer Geometry and Statistical Dependence.” The Journal of Machine Learning Research.
Yin, Gao, and Lin. 2016. β€œLaplacian Regularized Low-Rank Representation and Its Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Yu, Neufeld, Kiros, et al. 2012. β€œRegularizers Versus Losses for Nonlinear Dimensionality Reduction: A Factored View with New Convex Relaxations.” In ICML 2012.
Zhou, Tao, and Wu. 2011. β€œManifold Elastic Net: A Unified Framework for Sparse Dimension Reduction.” Data Mining and Knowledge Discovery.
Zhu, KrΓ€henbΓΌhl, Shechtman, et al. 2016. β€œGenerative Visual Manipulation on the Natural Image Manifold.” In Proceedings of European Conference on Computer Vision.