Learning summary statistics



Neural learning of exchangeable functions: that is, networks constrained to respect a permutation symmetry in their inputs, and ideally also projectivity across sample sizes. Why might we want this? For one, a learnable summary statistic of i.i.d. observations is necessarily an exchangeable function of the data, so exchangeability is the natural inductive bias to bake in.
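
Writing out the standard definition, since the rest of this page leans on it: a function $f$ is permutation invariant (exchangeable) when

$$
f(x_1, \dots, x_n) = f(x_{\pi(1)}, \dots, x_{\pi(n)}) \quad \text{for every permutation } \pi \text{ of } \{1, \dots, n\},
$$

which the sample mean, the sample variance, and indeed any symmetric statistic all satisfy.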

TBD

Permutation-invariant observations

See Murphy et al. (2022); Sainsbury-Dale, Zammit-Mangion, and Huser (2022); Zaheer et al. (2017).

A great explainer: “DeepSets: Modeling Permutation Invariance”.
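
For concreteness, here is a minimal sketch of the sum-decomposition $\rho\left(\sum_i \phi(x_i)\right)$ from Zaheer et al. (2017), written in PyTorch. The class name, layer widths, and sanity check are my own illustrative choices, not taken from any of the cited papers.

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Permutation-invariant set function in the Deep Sets form rho(sum_i phi(x_i))."""

    def __init__(self, in_dim: int, hidden_dim: int = 64, out_dim: int = 1):
        super().__init__()
        # phi: applied independently to each set element
        self.phi = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # rho: applied to the pooled (order-free) representation
        self.rho = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, set_size, in_dim); summing over the set axis
        # is what makes the output invariant to element order.
        return self.rho(self.phi(x).sum(dim=1))

# Sanity check: shuffling the set changes the output only by float rounding.
net = DeepSet(in_dim=3)
x = torch.randn(2, 10, 3)
perm = torch.randperm(10)
assert torch.allclose(net(x), net(x[:, perm]), atol=1e-5)
```

Mean pooling in place of the sum is a common variant that behaves better across varying set sizes. Note also the caveat of Wagstaff et al. (2019): for sum-decompositions to represent all continuous set functions, the pooled latent dimension must grow with the maximum set size, so `hidden_dim` is not an innocuous hyperparameter.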

References

Murphy, Ryan L., Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. 2022. “Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs.”
Sainsbury-Dale, Matthew, Andrew Zammit-Mangion, and Raphaël Huser. 2022. “Fast Optimal Estimation with Intractable Models Using Permutation-Invariant Neural Networks.” arXiv.
Wagstaff, Edward, Fabian B. Fuchs, Martin Engelcke, Ingmar Posner, and Michael Osborne. 2019. “On the Limitations of Representing Functions on Sets.” arXiv.
Zaheer, Manzil, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R. Salakhutdinov, and Alexander J. Smola. 2017. “Deep Sets.” In Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc.
