Fun with determinants

Especially Jacobian determinants



Petersen and Pedersen (2012) note the standard identities:

Let \(\mathbf{A}\) be an \(n \times n\) matrix.

\[
\begin{aligned}
\operatorname{det}(\mathbf{A}) &= \prod_{i} \lambda_{i}, \quad \lambda_{i} = \operatorname{eig}(\mathbf{A}) \\
\operatorname{det}(c \mathbf{A}) &= c^{n} \operatorname{det}(\mathbf{A}), \quad \text{if } \mathbf{A} \in \mathbb{R}^{n \times n} \\
\operatorname{det}\left(\mathbf{A}^{T}\right) &= \operatorname{det}(\mathbf{A}) \\
\operatorname{det}(\mathbf{A} \mathbf{B}) &= \operatorname{det}(\mathbf{A}) \operatorname{det}(\mathbf{B}) \\
\operatorname{det}\left(\mathbf{A}^{-1}\right) &= 1 / \operatorname{det}(\mathbf{A}) \\
\operatorname{det}\left(\mathbf{A}^{n}\right) &= \operatorname{det}(\mathbf{A})^{n} \\
\operatorname{det}\left(\mathbf{I} + \mathbf{u} \mathbf{v}^{T}\right) &= 1 + \mathbf{u}^{T} \mathbf{v}
\end{aligned}
\]

For \(n = 2\):
\[ \operatorname{det}(\mathbf{I}+\mathbf{A}) = 1 + \operatorname{det}(\mathbf{A}) + \operatorname{Tr}(\mathbf{A}) \]

For \(n = 3\):
\[ \operatorname{det}(\mathbf{I}+\mathbf{A}) = 1 + \operatorname{det}(\mathbf{A}) + \operatorname{Tr}(\mathbf{A}) + \frac{1}{2} \operatorname{Tr}(\mathbf{A})^{2} - \frac{1}{2} \operatorname{Tr}\left(\mathbf{A}^{2}\right) \]

For \(n = 4\):
\[
\begin{aligned}
\operatorname{det}(\mathbf{I}+\mathbf{A}) = 1 &+ \operatorname{det}(\mathbf{A}) + \operatorname{Tr}(\mathbf{A}) + \frac{1}{2} \operatorname{Tr}(\mathbf{A})^{2} - \frac{1}{2} \operatorname{Tr}\left(\mathbf{A}^{2}\right) \\
&+ \frac{1}{6} \operatorname{Tr}(\mathbf{A})^{3} - \frac{1}{2} \operatorname{Tr}(\mathbf{A}) \operatorname{Tr}\left(\mathbf{A}^{2}\right) + \frac{1}{3} \operatorname{Tr}\left(\mathbf{A}^{3}\right)
\end{aligned}
\]

For small \(\varepsilon\), the following approximation holds to second order (the \(\operatorname{det}(\mathbf{A})\) term carries a factor \(\varepsilon^{n}\) and is therefore higher order for \(n > 2\)):
\[ \operatorname{det}(\mathbf{I} + \varepsilon \mathbf{A}) \cong 1 + \varepsilon \operatorname{Tr}(\mathbf{A}) + \frac{1}{2} \varepsilon^{2} \operatorname{Tr}(\mathbf{A})^{2} - \frac{1}{2} \varepsilon^{2} \operatorname{Tr}\left(\mathbf{A}^{2}\right) \]
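These identities are cheap to sanity-check numerically. Here is a minimal NumPy sketch (my own illustration, not from the Cookbook) that verifies a few of them on a random \(3 \times 3\) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)

# det(A) is the product of the eigenvalues.
assert np.isclose(np.linalg.det(A), np.prod(np.linalg.eigvals(A)).real)

# Matrix determinant lemma: det(I + u v^T) = 1 + u^T v.
assert np.isclose(np.linalg.det(np.eye(n) + np.outer(u, v)), 1 + u @ v)

# n = 3 trace expansion of det(I + A).
tr, tr2 = np.trace(A), np.trace(A @ A)
assert np.isclose(
    np.linalg.det(np.eye(n) + A),
    1 + np.linalg.det(A) + tr + 0.5 * tr**2 - 0.5 * tr2,
)

# Second-order expansion of det(I + eps * A); the error is O(eps^3).
eps = 1e-4
approx = 1 + eps * tr + 0.5 * eps**2 * (tr**2 - tr2)
assert abs(np.linalg.det(np.eye(n) + eps * A) - approx) < 1e-8
```

The last check is the one that matters for continuous-time flows à la Grathwohl et al. (2018), where the log-determinant of a near-identity Jacobian reduces to a trace.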

For a block matrix we have
\[
\operatorname{det}\left(\begin{bmatrix}
\mathbf{A}_{11} & \mathbf{A}_{12} \\
\mathbf{A}_{21} & \mathbf{A}_{22}
\end{bmatrix}\right)
= \operatorname{det}\left(\mathbf{A}_{11}\right) \operatorname{det}\left(\mathbf{A}_{22} - \mathbf{A}_{21} \mathbf{A}_{11}^{-1} \mathbf{A}_{12}\right)
= \operatorname{det}\left(\mathbf{A}_{22}\right) \operatorname{det}\left(\mathbf{A}_{11} - \mathbf{A}_{12} \mathbf{A}_{22}^{-1} \mathbf{A}_{21}\right)
\]
where the first factorization requires \(\mathbf{A}_{11}\) to be invertible and the second requires \(\mathbf{A}_{22}\) to be invertible.
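And a corresponding NumPy check of the Schur-complement factorizations (again my own sketch; it assumes the relevant blocks are invertible, which holds almost surely for random Gaussian blocks):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 3
A11 = rng.standard_normal((p, p))
A12 = rng.standard_normal((p, q))
A21 = rng.standard_normal((q, p))
A22 = rng.standard_normal((q, q))

# Assemble the full (p + q) x (p + q) block matrix.
M = np.block([[A11, A12], [A21, A22]])

# det(M) = det(A11) * det(A22 - A21 A11^{-1} A12), A11 invertible.
S11 = A22 - A21 @ np.linalg.solve(A11, A12)
assert np.isclose(np.linalg.det(M), np.linalg.det(A11) * np.linalg.det(S11))

# det(M) = det(A22) * det(A11 - A12 A22^{-1} A21), A22 invertible.
S22 = A11 - A12 @ np.linalg.solve(A22, A21)
assert np.isclose(np.linalg.det(M), np.linalg.det(A22) * np.linalg.det(S22))
```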

References

Axler, Sheldon. 1995. β€œDown with Determinants!” The American Mathematical Monthly 102 (2): 139–54.
β€”β€”β€”. 2014. Linear Algebra Done Right. New York: Springer.
Berg, Rianne van den, Leonard Hasenclever, Jakub M. Tomczak, and Max Welling. 2018. β€œSylvester Normalizing Flows for Variational Inference.” In UAI 2018.
Figurnov, Mikhail, Shakir Mohamed, and Andriy Mnih. 2018. β€œImplicit Reparameterization Gradients.” In Advances in Neural Information Processing Systems 31, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, 441–52. Curran Associates, Inc.
Grathwohl, Will, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. 2018. β€œFFJORD: Free-Form Continuous Dynamics for Scalable Reversible Generative Models.” arXiv:1810.01367 [Cs, Stat], October.
Huang, Chin-Wei, David Krueger, Alexandre Lacoste, and Aaron Courville. 2018. β€œNeural Autoregressive Flows.” arXiv:1804.00779 [Cs, Stat], April.
Jankowiak, Martin, and Fritz Obermeyer. 2018. β€œPathwise Derivatives Beyond the Reparameterization Trick.” In International Conference on Machine Learning, 2235–44.
Kingma, Diederik P., Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. β€œImproving Variational Inference with Inverse Autoregressive Flow.” In Advances in Neural Information Processing Systems 29. Curran Associates, Inc.
Kingma, Diederik P., Tim Salimans, and Max Welling. 2015. β€œVariational Dropout and the Local Reparameterization Trick.” In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2, 2575–83. NIPS’15. Cambridge, MA, USA: MIT Press.
Kingma, Diederik P., and Max Welling. 2014. β€œAuto-Encoding Variational Bayes.” In ICLR 2014 Conference.
Louizos, Christos, and Max Welling. 2017. β€œMultiplicative Normalizing Flows for Variational Bayesian Neural Networks.” In PMLR, 2218–27.
Massaroli, Stefano, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, and Hajime Asama. 2020. β€œStable Neural Flows.” arXiv:2003.08063 [Cs, Math, Stat], March.
Minka, Thomas P. 2000. β€œOld and New Matrix Algebra Useful for Statistics.”
Papamakarios, George, Iain Murray, and Theo Pavlakou. 2017. β€œMasked Autoregressive Flow for Density Estimation.” In Advances in Neural Information Processing Systems 30, edited by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, 2338–47. Curran Associates, Inc.
Papamakarios, George, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. 2021. β€œNormalizing Flows for Probabilistic Modeling and Inference.” Journal of Machine Learning Research 22 (57): 1–64.
Petersen, Kaare Brandt, and Michael Syskind Pedersen. 2012. β€œThe Matrix Cookbook.”
Pfau, David, and Danilo Rezende. 2020. β€œIntegrable Nonparametric Flows.”
Ruiz, Francisco J. R., Michalis K. Titsias, and David M. Blei. 2016. β€œThe Generalized Reparameterization Gradient.” In Advances In Neural Information Processing Systems.
Seber, George A. F. 2007. A Matrix Handbook for Statisticians. Wiley.
Spantini, Alessio, Daniele Bigoni, and Youssef Marzouk. 2017. β€œInference via Low-Dimensional Couplings.” Journal of Machine Learning Research 19 (66): 2639–709.
