🏗🏗🏗🏗🏗

I will restructure learning on manifolds and dimensionality reduction into a more useful distinction.

You have lots of predictors in your regression model. Too many predictors! You want fewer predictors; maybe then the model would be faster, or at least more compact. Can you throw some out, or summarise them in some sense? This theme connects to the notion of similarity as seen in kernel tricks, to what you might do to learn an index, and to inducing a differential metric, as well as to matrix factorisations, random features, random projections, and high-dimensional statistics. Ultimately, this is always (at least implicitly) learning a manifold. A good dimension reduction can produce a nearly sufficient statistic for indirect inference.

## Bayes

Throwing out data in a classical Bayes context is a subtle matter, but it can be done. See Bayesian model selection.

## Learning a summary statistic

See learning summary statistics, as seen in approximate Bayes. Note this is not at all the same thing as discarding predictors; rather, it is about learning a useful statistic with which to make inferences in place of more intractable ones.

## Feature selection

Deciding whether to include or discard predictors.
This problem is old and has been part of regression modelling for a long time.
Model selection is the classic approach, and regularised sparse model selection is its surprisingly effective recent evolution.
But it continues!
FOCI is an application of an interesting new independence test (Azadkia and Chatterjee 2019) that is very much *en vogue* despite being in an area that we all thought was thoroughly mined out.
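For a flavour of the machinery, here is a minimal numpy sketch of Chatterjee's rank correlation coefficient, the unconditional dependence measure underlying FOCI. The full FOCI procedure wraps a *conditional* version of this statistic in a forward stepwise selection loop, which is not shown here; the function name and toy data are just illustrative.

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi_n (assuming no ties in y).

    Near 0 for independent variables, near 1 when y is a
    (possibly very nonlinear) function of x.
    """
    n = len(x)
    order = np.argsort(x)                          # sort the pairs by x
    ranks = np.argsort(np.argsort(y[order]))       # ranks of y in that order
    return 1 - 3 * np.abs(np.diff(ranks)).sum() / (n**2 - 1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)
print(chatterjee_xi(x, x**2))                      # nonlinear but strong dependence
print(chatterjee_xi(x, rng.normal(size=1000)))     # roughly zero
```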

## PCA and cousins

The classic. Kernel PCA, linear algebra and probabilistic formulations. Has a nice probabilistic interpretation “for free” via the Karhunen-Loève theorem.
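As a reference point, a minimal numpy sketch of plain linear PCA via the SVD; nothing here is specific to any particular library, and the toy data is a stand-in.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                        # centre the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                            # principal directions
    scores = Xc @ components.T                     # low-dimensional coordinates
    explained = s[:k] ** 2 / (s ** 2).sum()        # fraction of variance captured
    return scores, components, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))  # data near a 2-d subspace
scores, comps, ev = pca(X, 2)
print(ev.sum())                                    # close to 1.0: two components suffice
```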

Matrix factorisations are a generalisation here, from rank 1 operators to higher rank operators. 🏗

There are various extensions such as additive component analysis:

> We propose Additive Component Analysis (ACA), a novel nonlinear extension of PCA. Inspired by multivariate nonparametric regression with additive models, ACA fits a smooth manifold to data by learning an explicit mapping from a low-dimensional latent space to the input space, which trivially enables applications like denoising.

More interesting to me is Exponential Family PCA, which is a generalisation of PCA to non-Gaussian distributions (and I presume to non-additive relations). How does this even work? (Collins, Dasgupta, and Schapire 2001; Jun Li and Dacheng Tao 2013; Liu, Dobriban, and Singer 2017; Mohamed, Ghahramani, and Heller 2008).
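As far as I can tell, the trick is to put the low-rank structure on the *natural parameter* of the exponential family rather than on the mean. Here is a hedged sketch for Poisson counts, fit by naive gradient descent; Collins et al. use alternating Bregman-style updates, so this only illustrates the objective, not their algorithm.

```python
import numpy as np

def poisson_pca(X, k, lr=1e-3, n_iter=500):
    """Exponential-family PCA sketch for count data.

    Model: X_ij ~ Poisson(exp(Theta_ij)), with the natural-parameter
    matrix Theta = A @ B.T constrained to rank k. We minimise the
    negative log-likelihood sum(exp(Theta) - X * Theta) by gradient descent.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    A = rng.normal(scale=0.1, size=(n, k))
    B = rng.normal(scale=0.1, size=(d, k))
    for _ in range(n_iter):
        Theta = A @ B.T
        grad = np.exp(Theta) - X           # d NLL / d Theta for the Poisson family
        gA, gB = grad @ B, grad.T @ A
        A -= lr * gA
        B -= lr * gB
    return A, B

# toy counts with latent rank-2 structure
rng = np.random.default_rng(1)
Theta_true = 0.5 * rng.normal(size=(200, 2)) @ rng.normal(size=(2, 30))
X = rng.poisson(np.exp(Theta_true))
A, B = poisson_pca(X, k=2)
```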

## Learning a distance metric

A related notion is to learn a simpler way of quantifying, in some sense, how *similar* two datapoints are.
This usually involves learning an embedding in some low dimensional ambient space as a by-product.

### UMAP

*Uniform Manifold approximation and projection for dimension reduction* (McInnes, Healy, and Melville 2018).
Apparently super hot right now. (HT James Nichols).
Nikolay Oskolkov’s introduction is neat.
John Baez discusses the category-theoretic underpinning.
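In practice the `umap-learn` package does the work; a minimal usage sketch, with random data as a stand-in for anything real:

```python
# pip install umap-learn
import numpy as np
import umap

X = np.random.default_rng(0).normal(size=(1000, 50))   # stand-in for real features
embedding = umap.UMAP(
    n_neighbors=15,   # neighbourhood size used to build the fuzzy simplicial graph
    min_dist=0.1,     # how tightly points may pack together in the embedding
    n_components=2,   # target dimension
).fit_transform(X)
print(embedding.shape)                                   # (1000, 2)
```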

### For indexing my database

See learnable indexes.

### Locality Preserving projections

Try to preserve the nearness of points that are connected on some (weighted) graph, by choosing a projection \(y_i = a^\top x_i\) that minimises

\[\sum_{i,j}(y_i-y_j)^2 w_{i,j}\]

So we seek an optimal projection vector \(a\).

(requirement for sparse similarity matrix?)
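Minimising that objective subject to the usual scale constraint reduces to a generalised eigenvalue problem involving the graph Laplacian. A dense, hedged numpy/scipy sketch follows; this is a toy version for small data, whereas a real implementation would use sparse neighbourhood graphs, and the function name and defaults are mine.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, k, n_neighbors=10, sigma=1.0):
    """Locality Preserving Projections, dense toy version.

    Builds a heat-kernel weight graph on nearest neighbours, then solves
    X^T L X a = lambda X^T D X a for the k smallest eigenvalues.
    """
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / (2 * sigma**2))                   # heat-kernel weights
    idx = np.argsort(D2, axis=1)[:, 1:n_neighbors + 1]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.arange(n)[:, None], idx] = True
    W = np.where(mask | mask.T, W, 0.0)                # keep symmetrised kNN edges
    D = np.diag(W.sum(axis=1))
    L = D - W                                          # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])        # tiny ridge for stability
    vals, vecs = eigh(A, B)                            # generalised symmetric eigenproblem
    a = vecs[:, :k]                                    # projection vectors
    return X @ a, a
```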

### Diffusion maps

This manifold-learning technique seemed fashionable for a while. (Ronald R. Coifman and Lafon 2006; R. R. Coifman et al. 2005)

Mikhail Belkin connects this to the graph Laplacian literature.
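A bare-bones sketch of the construction (Gaussian affinities, row-normalised into a Markov operator, embedded via its leading non-trivial eigenvectors), omitting the density-normalisation step and any principled choice of the bandwidth; the function name is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(X, k=2, epsilon=1.0, t=1):
    """Toy diffusion map: Gaussian affinities -> Markov matrix -> spectral embedding."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / epsilon)   # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector; scale coordinates by lambda^t
    return vecs[:, 1:k + 1] * vals[1:k + 1] ** t
```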

### As manifold learning

Same thing, with some different emphases and history, over at manifold learning.

### Multidimensional scaling

TBD.
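In the meantime, here is a minimal sketch of classical (Torgerson) MDS, which is just double-centring the squared distances and taking an eigendecomposition.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS from an (n, n) matrix of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]             # take the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))
```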

### Random projection

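No learning at all: project onto random directions and invoke Johnson-Lindenstrauss to argue that pairwise distances survive. A minimal sketch, with the function name my own:

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project onto k random Gaussian directions (Johnson-Lindenstrauss style).

    With k on the order of log(n) / eps^2, pairwise distances are preserved
    to within a factor (1 +/- eps) with high probability.
    """
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)   # scaled Gaussian matrix
    return X @ R
```
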
### Stochastic neighbour embedding and other visualisation-oriented methods

These methods are designed to make high-dimensional data sets look comprehensible in low-dimensional representation.

Probabilistically preserving closeness. The best-known of these techniques is the famous t-SNE, although as far as I understand it has been superseded by UMAP.
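For completeness, the scikit-learn incantation looks something like this, with random data standing in for anything real:

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(1000, 50))   # stand-in for real features
emb = TSNE(
    n_components=2,
    perplexity=30,    # effective neighbourhood size, the main knob to fiddle with
    init="pca",       # PCA initialisation tends to be more stable than random
).fit_transform(X)
print(emb.shape)      # (1000, 2)
```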

My colleague Ben Harwood advises:

> Instead of reducing and visualising higher dimensional data with t-SNE or PCA, here are three relatively recent non-linear dimension reduction techniques that are designed for visualising high dimensional data in 2D or 3D:
>
> - https://github.com/eamid/trimap
> - https://github.com/lferry007/LargeVis
> - https://github.com/lmcinnes/umap
>
> Trimap and LargeVis are learned mappings that I would expect to be more representative of the original data than what t-SNE provides. UMAP assumes connectedness of the manifold so it’s probably less suitable for data that contains distinct clusters but otherwise still a great option.

## Autoencoder and word2vec

I just heard the “nonlinear PCA” interpretation of word2vec from Junbin Gao.

\[L(x, x') = \|x-x'\|^2 = \|x-\sigma'(U\sigma(W^T x+b) + b')\|^2\]
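A hedged PyTorch sketch matching that loss: a single hidden layer, with the class and training loop here being illustrative rather than anyone's canonical implementation. With a purely linear activation, such an autoencoder recovers the principal subspace, which is the “nonlinear PCA” reading.

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    """x_hat = sigma'(U sigma(W^T x + b) + b'), matching the loss above."""
    def __init__(self, d_in, d_latent):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_latent), nn.Tanh())
        self.decoder = nn.Linear(d_latent, d_in)         # linear output layer

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(512, 3) @ torch.randn(3, 20)             # data near a 3-d subspace
model = Autoencoder(d_in=20, d_latent=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(X) - X) ** 2).mean()                   # reconstruction error
    loss.backward()
    opt.step()
print(loss.item())
```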

TBC.

## References

*arXiv:1910.12327 [Cs, Math, Stat]*, December.

*Journal of Machine Learning Research* 3 (July): 48.

*Computer Physics Communications* 244 (November): 170–79.

*European Journal of Operational Research*, February.

*Proceedings of the National Academy of Sciences* 102 (21): 7426–31.

*Applied and Computational Harmonic Analysis*, Special Issue: Diffusion Maps and Wavelets, 21 (1): 5–30.

*Advances in Neural Information Processing Systems*. Vol. 14. MIT Press.

*Annual Review of Statistics and Its Application* 5 (1): 533–59.

*Advances in Neural Information Processing Systems*, 451–58. NIPS’05. Cambridge, MA, USA: MIT Press.

*Proceedings of the 15th International Conference on Neural Information Processing Systems*, 593–600. NIPS’02. Cambridge, MA, USA: MIT Press.

*arXiv:1412.6056 [Cs]*, December.

*2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, 2:1735–42.

*Science* 313 (5786): 504–7.

*Proceedings of the 15th International Conference on Neural Information Processing Systems*, 857–64. NIPS’02. Cambridge, MA, USA: MIT Press.

*Neural Networks* 13 (4–5): 411–30.

*IEEE Transactions on Neural Networks and Learning Systems* 24 (3): 485–97.

*Journal of Computational and Graphical Statistics* 30 (1): 204–19.

*IEEE Transactions on Pattern Analysis and Machine Intelligence* 42 (8): 1842–55.

*Journal of Machine Learning Research* 6 (Nov): 1783–1816.

*arXiv:1402.0119 [Cs, Stat]*, February.

*Journal of Machine Learning Research* 9 (Nov): 2579–2605.

*arXiv:1802.03426 [Cs, Stat]*, December.

*Advances in Neural Information Processing Systems*. Vol. 21. Curran Associates, Inc.

*Conference on Computer Vision and Pattern Recognition (CVPR)*.

*arXiv:1511.09433 [Cs, Math, Stat]*, November.

*Advances in Self-Organizing Maps and Learning Vector Quantization*, 65–74. Springer.

*arXiv:2004.05387 [Math, Stat]*, April.

*Journal of Artificial Intelligence Research* 23 (1): 1–40.

*PMLR*, 412–19.

*Computational Learning Theory*, edited by Paul Fischer and Hans Ulrich Simon, 214–29. Lecture Notes in Computer Science 1572. Springer Berlin Heidelberg.

*Proceedings of the 29th International Conference on Machine Learning (ICML-12)*, 1311–18.

*arXiv:1403.2877 [Cs, q-Bio, Stat]*, March.

*Proceedings of IJCAI, 2017*.

*Annual Review of Statistics and Its Application* 5 (1): 501–32.

*Proceedings of the 26th Annual International Conference on Machine Learning*, 1113–20. ICML ’09. New York, NY, USA: ACM.
