# (Kernelized) Stein variational gradient descent

## KSD, SVGD

Stein’s method meets variational inference via kernels and probability measures. The result is a method of inference that maintains an ensemble of particles which, collectively, notionally sample from some target distribution. I should learn about this, as it is one of the methods I might use for low-assumption Bayes inference.

Let us examine the computable kernelized Stein discrepancy, invented in Q. Liu, Lee, and Jordan (2016) and summarised in Xu and Matsuda (2021):

Let $$q$$ be a smooth probability density on $$\mathbb{R}^{d}$$. For a smooth function $$\mathbf{f}=\left(f_{1}, \ldots, f_{d}\right): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$$, the Stein operator $$\mathcal{T}_{q}$$ is defined by $\mathcal{T}_{q} \mathbf{f}(x)=\sum_{i=1}^{d}\left(f_{i}(x) \frac{\partial}{\partial x^{i}} \log q(x)+\frac{\partial}{\partial x^{i}} f_{i}(x)\right)$
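
Two things make this operator useful. First, it touches $$q$$ only through the score $$\nabla \log q$$, so the normalizing constant of $$q$$ never appears. Second, it satisfies the Stein identity: for sufficiently regular $$\mathbf{f}$$ (decaying fast enough that boundary terms vanish), $\mathbb{E}_{q}\left[\mathcal{T}_{q} \mathbf{f}(x)\right]=\int \sum_{i=1}^{d} \frac{\partial}{\partial x^{i}}\left(q(x) f_{i}(x)\right) \mathrm{d} x=0,$ since $$\mathcal{T}_{q} \mathbf{f}=\frac{1}{q} \nabla \cdot(q \mathbf{f})$$ wherever $$q>0$$. Any deviation of $$\mathbb{E}_{p}\left[\mathcal{T}_{q} \mathbf{f}\right]$$ from zero therefore witnesses a discrepancy between $$p$$ and $$q$$.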

…Let $$\mathcal{H}$$ be a reproducing kernel Hilbert space (RKHS) on $$\mathbb{R}^{d}$$ and $$\mathcal{H}^{d}$$ be its product. Using the Stein operator, the kernel Stein discrepancy (KSD) between two densities $$p$$ and $$q$$ is defined as $\operatorname{KSD}(p \| q)=\sup _{\|\mathbf{f}\|_{\mathcal{H}^{d}} \leq 1} \mathbb{E}_{p}\left[\mathcal{T}_{q} \mathbf{f}\right].$ It can be shown that $$\operatorname{KSD}(p \| q) \geq 0$$, and that $$\operatorname{KSD}(p \| q)=0$$ if and only if $$p=q$$, under mild regularity conditions. Thus, KSD is a proper discrepancy measure between densities. After some calculation, $$\operatorname{KSD}(p \| q)$$ can be rewritten as $\operatorname{KSD}^{2}(p \| q)=\mathbb{E}_{x, \tilde{x} \sim p}\left[h_{q}(x, \tilde{x})\right],$ where $$h_{q}$$ does not involve $$p$$.
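
Concretely, for a kernel $$k$$ with RKHS $$\mathcal{H}$$ and score $$s_{q}(x)=\nabla_{x} \log q(x)$$, the Stein kernel is $h_{q}(x, \tilde{x})=s_{q}(x)^{\top} s_{q}(\tilde{x}) k(x, \tilde{x})+s_{q}(x)^{\top} \nabla_{\tilde{x}} k(x, \tilde{x})+s_{q}(\tilde{x})^{\top} \nabla_{x} k(x, \tilde{x})+\operatorname{tr}\left(\nabla_{x} \nabla_{\tilde{x}} k(x, \tilde{x})\right),$ so given samples $$x_{1}, \ldots, x_{n} \sim p$$ and the (possibly unnormalized) score of $$q$$, a U- or V-statistic over $$h_{q}$$ estimates $$\operatorname{KSD}^{2}(p \| q)$$ directly.

Here is a minimal numpy sketch of that estimator with an RBF kernel; the function names, the fixed bandwidth and the toy check are illustrative choices of mine, not from any of the cited papers.

```python
import numpy as np

def rbf_kernel_terms(X, bandwidth):
    """Gram matrix plus the kernel-derivative pieces needed by h_q.

    For k(x, y) = exp(-||x - y||^2 / (2 h^2)):
      grad_x k(x, y)             = -(x - y) / h^2 * k(x, y)
      tr(grad_x grad_y k(x, y))  = (d / h^2 - ||x - y||^2 / h^4) * k(x, y)
    """
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]              # diff[i, j] = x_i - x_j, shape (n, n, d)
    sqdist = np.sum(diff ** 2, axis=-1)               # (n, n)
    K = np.exp(-sqdist / (2 * bandwidth ** 2))
    gradx_K = -diff / bandwidth ** 2 * K[..., None]   # grad_{x_i} k(x_i, x_j)
    trace_term = (d / bandwidth ** 2 - sqdist / bandwidth ** 4) * K
    return K, gradx_K, trace_term

def ksd_squared(X, score_q, bandwidth=1.0, unbiased=True):
    """U-/V-statistic estimate of KSD^2(p || q) from samples X ~ p, shape (n, d).

    score_q(X) must return grad_x log q at each row of X, also shape (n, d);
    q enters only through its score, so it may be unnormalized.
    """
    n = X.shape[0]
    S = score_q(X)                                    # scores of the *target* q
    K, gradx_K, trace_term = rbf_kernel_terms(X, bandwidth)
    # h_q(x_i, x_j) for all pairs at once:
    H = (
        K * (S @ S.T)                                 # s(x_i)^T s(x_j) k(x_i, x_j)
        + np.einsum("ijd,id->ij", -gradx_K, S)        # s(x_i)^T grad_{x_j} k(x_i, x_j)
        + np.einsum("ijd,jd->ij", gradx_K, S)         # s(x_j)^T grad_{x_i} k(x_i, x_j)
        + trace_term                                  # tr(grad_{x_i} grad_{x_j} k)
    )
    if unbiased:                                      # U-statistic: drop the diagonal
        return (H.sum() - np.trace(H)) / (n * (n - 1))
    return H.mean()                                   # V-statistic

# Toy check: samples drawn from the target itself should give KSD^2 near zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
print(ksd_squared(X, score_q=lambda x: -x))           # score of a standard Gaussian
```

The unbiased (U-statistic) variant drops the diagonal and can go slightly negative; the biased (V-statistic) variant keeps it and is always nonnegative, being a squared RKHS norm of the empirical Stein embedding.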

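SVGD runs the same ingredients in the opposite direction: rather than testing whether samples from $$p$$ match $$q$$, it maintains a particle ensemble and repeatedly nudges it along the direction in $$\mathcal{H}^{d}$$ that most steeply decreases the KL divergence to the target, which works out to be $\phi^{\star}(x)=\mathbb{E}_{\tilde{x} \sim p}\left[k(\tilde{x}, x) \nabla_{\tilde{x}} \log q(\tilde{x})+\nabla_{\tilde{x}} k(\tilde{x}, x)\right],$ with the expectation taken over the current particles themselves. The first term drags particles toward high-density regions of the target; the second pushes them apart so they do not all collapse onto a mode. A minimal numpy sketch of the resulting update, with an RBF kernel and a median-heuristic bandwidth (standard choices, but the details here are mine rather than lifted from any particular paper):

```python
import numpy as np

def svgd_step(X, score_q, step_size=0.1):
    """One SVGD update of the particle ensemble X, shape (n, d).

    score_q(X) returns grad log q at each particle; q may be unnormalized.
    RBF kernel with a median-heuristic squared bandwidth (one common variant).
    """
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]               # diff[a, b] = x_a - x_b
    sqdist = np.sum(diff ** 2, axis=-1)                # (n, n)
    h2 = np.median(sqdist) / np.log(n + 1) + 1e-8      # squared bandwidth
    K = np.exp(-sqdist / h2)                           # k(x_a, x_b)
    grad_K = -2.0 / h2 * diff * K[..., None]           # grad_K[a, b] = grad_{x_a} k(x_a, x_b)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K @ score_q(X) + grad_K.sum(axis=0)) / n    # attraction + repulsion
    return X + step_size * phi

# Toy run: particles initialised far from a standard Gaussian target drift onto it.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(200, 2))
for _ in range(500):
    X = svgd_step(X, score_q=lambda x: -x, step_size=0.05)
print(X.mean(axis=0), X.std(axis=0))                   # should be roughly [0, 0] and [1, 1]
```
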
TBD.

## References

Alsup, Terrence, Luca Venturi, and Benjamin Peherstorfer. 2022. In Proceedings of the 2nd Mathematical and Scientific Machine Learning Conference, 93–117. PMLR.
Ambrogioni, Luca, Umut Güçlü, Yagmur Güçlütürk, Max Hinne, Eric Maris, and Marcel A. J. van Gerven. 2018. In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, 2478–87. NIPS’18. USA: Curran Associates Inc.
Anastasiou, Andreas, Alessandro Barp, François-Xavier Briol, Bruno Ebner, Robert E. Gaunt, Fatemeh Ghaderinezhad, Jackson Gorham, et al. 2022. arXiv.
Chen, Peng, Keyi Wu, Joshua Chen, Thomas O’Leary-Roseberry, and Omar Ghattas. 2020. arXiv.
Chu, Casey, Kentaro Minami, and Kenji Fukumizu. 2022. In, 5.
Chwialkowski, Kacper, Heiko Strathmann, and Arthur Gretton. 2016. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, 2606–15. ICML’16. New York, NY, USA: JMLR.org.
Detommaso, Gianluca, Tiangang Cui, Alessio Spantini, Youssef Marzouk, and Robert Scheichl. 2018. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 9187–97. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
Detommaso, Gianluca, Hanne Hoitzing, Tiangang Cui, and Ardavan Alamir. 2019. arXiv:1901.07987 [Cs, Stat], May.
Feng, Yihao, Dilin Wang, and Qiang Liu. 2017. In UAI 2017. arXiv.
Gong, Chengyue, Jian Peng, and Qiang Liu. 2019. In Proceedings of the 36th International Conference on Machine Learning, 2347–56. PMLR.
Gorham, Jackson, and Lester Mackey. 2015. In Advances in Neural Information Processing Systems. Vol. 28.
Gorham, Jackson, Anant Raj, and Lester Mackey. 2020. arXiv:2007.02857 [Cs, Math, Stat], October.
Han, Jun, and Qiang Liu. 2018. In Proceedings of the 35th International Conference on Machine Learning, 1900–1908. PMLR.
Huggins, Jonathan H., Trevor Campbell, Mikołaj Kasprzak, and Tamara Broderick. 2018. arXiv:1806.10234 [Cs, Stat], June.
Ley, Christophe, Gesine Reinert, and Yvik Swan. 2017. Probability Surveys 14 (none): 1–52.
Liu, Chang, and Jun Zhu. 2018. Proceedings of the AAAI Conference on Artificial Intelligence 32 (1).
Liu, Qiang. 2016. 6.
———. 2017. arXiv.
Liu, Qiang, Jason D Lee, and Michael Jordan. 2016. In Proceedings of The 33rd International Conference on Machine Learning, 9.
Liu, Qiang, and Dilin Wang. 2018. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 31:8868–77. NIPS’18. Red Hook, NY, USA: Curran Associates Inc.
———. 2019. In Advances In Neural Information Processing Systems.
Liu, Xing, Harrison Zhu, Jean-Francois Ton, George Wynne, and Andrew Duncan. 2022. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 2002–21. PMLR.
Pulido, Manuel, and Peter Jan van Leeuwen. 2019. Journal of Computational Physics 396 (November): 400–415.
Pulido, Manuel, Peter Jan Van Leeuwen, and Derek J. Posselt. 2019. In Computational Science – ICCS 2019. ICCS 2019. Lecture Notes in Computer Science, edited by Joao M. F. Rodrigues, Pedro J. S. Cardoso, Janio Monteiro, Roberto Lam, Valeria V. Krzhizhanovskaya, Michael H. Lees, Jack J. Dongarra, and Peter M. A. Sloot, 141–55. Faro, Portugal: Springer.
Stordal, Andreas S., Rafael J. Moraes, Patrick N. Raanes, and Geir Evensen. 2021. Mathematical Geosciences 53 (3): 375–93.
Tamang, Sagar K., Ardeshir Ebtehaj, Peter J. van Leeuwen, Dongmian Zou, and Gilad Lerman. 2021. Nonlinear Processes in Geophysics 28 (3): 295–309.
Wang, Dilin, and Qiang Liu. 2019. In Proceedings of the 36th International Conference on Machine Learning, 6576–85. PMLR.
Wang, Dilin, Ziang Tang, Chandrajit Bajaj, and Qiang Liu. 2019. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, 7836–46. Red Hook, NY, USA: Curran Associates Inc.
Wang, Dilin, Zhe Zeng, and Qiang Liu. 2018. arXiv.
Wen, Linjie, and Jinglai Li. 2022. Statistics and Computing 32 (6): 97.
Xu, Wenkai, and Takeru Matsuda. 2021. arXiv:2103.00895 [Stat], March.
Zhang, Jianyi, Ruiyi Zhang, Lawrence Carin, and Changyou Chen. 2020. In International Conference on Artificial Intelligence and Statistics, 1877–87. PMLR.
Zhuo, Jingwei, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, and Bo Zhang. 2018. In Proceedings of the 35th International Conference on Machine Learning, 6018–27. PMLR.
