The intersection of reproducing-kernel methods, dependence tests, and probability metrics, where you use a clever RKHS embedding to measure differences between probability distributions.

A mere placeholder for now.

This abstract by Zoltán Szabó might serve to highlight some keywords:

> Maximum mean discrepancy (MMD) and Hilbert-Schmidt independence criterion (HSIC) are among the most popular and successful approaches in applied mathematics to measure the difference and the independence of random variables, respectively. Thanks to their kernel-based foundations, MMD and HSIC are applicable on a large variety of domains such as documents, images, trees, graphs, time series, dynamical systems, sets or permutations. Despite their tremendous practical success, quite little is known about when HSIC characterizes independence and MMD with tensor kernel can discriminate probability distributions, in terms of the contributing kernel components. In this talk, I am going to provide a complete answer to this question, with conditions which are often easy to verify in practice.
>
> Joint work with Bharath K. Sriperumbudur (PSU).
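To make the MMD concrete, here is a minimal numpy sketch of the standard unbiased estimator of squared MMD between two samples, using a Gaussian kernel (the bandwidth `sigma=1.0` is an arbitrary illustrative choice, not a recommendation):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    # Drop the diagonal within-sample terms for unbiasedness.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()
```

Two samples from the same distribution give an estimate near zero; samples from well-separated distributions give a clearly positive one. A practical test would calibrate the statistic with a permutation null, which is omitted here.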

Gaël Varoquaux’s introduction is friendly and illustrated: *Comparing distributions: kernels estimate good representations, ℓ1 distances give good tests*, based on (Scetbon and Varoquaux 2019).

See the ITE toolbox (estimators).
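As a sketch of what one such estimator computes (this is a generic textbook construction, not the ITE toolbox's API), here is the biased V-statistic estimate of HSIC with Gaussian kernels on both variables; the bandwidth is again an arbitrary choice:

```python
import numpy as np

def hsic_biased(X, Y, sigma=1.0):
    """Biased (V-statistic) HSIC estimate with Gaussian kernels on X and Y."""
    n = len(X)

    def gram(Z):
        sq = (Z**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * Z @ Z.T
        return np.exp(-sq / (2 * sigma**2))

    K, L = gram(X), gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n   # centring matrix
    # HSIC = <HKH, HLH>_F / (n-1)^2; nonnegative since both factors are PSD.
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Independent samples yield a value near zero, while dependent ones (e.g. Y a noisy function of X) yield a clearly larger one; as with MMD, a permutation test supplies the null distribution in practice.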

Husain (2020)’s results connect IPMs to transport metrics, regularisation theory, and classification.

## References

*arXiv:2202.04744 [Cs, Stat]*, February.

*Advances in Neural Information Processing Systems 20: Proceedings of the 2007 Conference*. Cambridge, MA: MIT Press.

*arXiv:2006.04349 [Cs, Stat]*, June.

*Linear Integral Equations*. Third edition. Applied Mathematical Sciences, volume 82. New York: Springer.

*arXiv:1405.5505 [Cs, Stat]*, May.

*Foundations and Trends® in Machine Learning* 10 (1–2): 1–141.

*The Journal of Machine Learning Research* 17 (1): 6240–67.

*Handbook of Integral Equations*. Boca Raton, Fla: CRC Press.

*arXiv:0906.1244 [Cs, Math]*.

*Journal of Machine Learning Research* 12 (Mar): 731–817.

*arXiv:1901.03227 [Cs, Stat]*, January.

*Advances in Neural Information Processing Systems 32*, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett, 12306–16. Curran Associates, Inc.

*arXiv:1501.06794 [Cs, Stat]*, January.

*The Annals of Statistics* 41 (5): 2263–91.

*Algorithmic Learning Theory*, edited by Marcus Hutter, Rocco A. Servedio, and Eiji Takimoto, 13–31. Lecture Notes in Computer Science 4754. Springer Berlin Heidelberg.

*Proceedings of the 26th Annual International Conference on Machine Learning*, 961–68. ICML ’09. New York, NY, USA: ACM.

*Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008)*.

*Electronic Journal of Statistics* 6: 1550–99.

*Journal of Machine Learning Research* 11 (April): 1517–61.

*arXiv:1702.03877 [Stat]*, February.

*arXiv:1708.08157 [Cs, Math, Stat]*, August.

*Advances in Neural Information Processing Systems 29*, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1930–38. Curran Associates, Inc.

*Integral Equations*. New York: Dover Publications.

*arXiv:1202.3775 [Cs, Stat]*, February.

*arXiv:1606.07892 [Stat]*, June.
