Matrix- and vector-valued generalizations of Gamma processes
October 14, 2019 — March 3, 2022
\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\corr}{\operatorname{Corr}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\rv}[1]{\mathsf{#1}} \renewcommand{\vrv}[1]{\vv{\rv{#1}}} \renewcommand{\disteq}{\stackrel{d}{=}} \renewcommand{\gvn}{\mid} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}}\]
Processes that generalise Gamma processes to take vector or matrix values.
We start by considering trivial processes, indexed by a single point, i.e. multivariate Gamma distributions. So here is the simplest multivariate case:
1 Vector Gamma process
How can we turn a multivariate Gamma distribution into a vector-valued Gamma process?
An associated Lévy process is easy to construct (see the sketch below). Are there any Ornstein–Uhlenbeck-type processes?
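For concreteness, here is a minimal sketch of one easy construction (the function name and parameter values are mine, not taken from any of the cited papers): couple a shared Gamma subordinator with independent idiosyncratic ones, so that each coordinate is marginally a univariate Gamma process and dependence enters through the shared component.

```python
import numpy as np

rng = np.random.default_rng(0)

def vector_gamma_process(T, n_steps, a_common, a_idio, lam, rng):
    """Simulate a d-dimensional Gamma Lévy process on [0, T] by summing a
    shared Gamma subordinator and independent idiosyncratic ones.

    Component i is marginally a Gamma process with shape rate
    a_common + a_idio[i] and rate lam; the shared part induces
    positive dependence between components."""
    dt = T / n_steps
    d = len(a_idio)
    # independent Gamma-distributed increments over each small interval
    common = rng.gamma(shape=a_common * dt, scale=1.0 / lam, size=n_steps)
    idio = rng.gamma(shape=np.asarray(a_idio) * dt, scale=1.0 / lam,
                     size=(n_steps, d))
    increments = idio + common[:, None]   # couple components via the shared part
    return np.concatenate([np.zeros((1, d)), np.cumsum(increments, axis=0)])

paths = vector_gamma_process(T=1.0, n_steps=1000,
                             a_common=2.0, a_idio=[1.0, 3.0], lam=5.0, rng=rng)
```

This is only one possible coupling (a one-factor construction); richer dependence structures need the more general machinery discussed below.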
2 Ornstein-Uhlenbeck Dirichlet process
TBD. Is that what Griffin (2011) achieves.
3 Wishart processes
Wishart distributions are commonly claimed to generalise Gamma distributions, although AFAICT they are not so similar. “Wishart processes” are indeed a thing (Pfaffel 2012; Wilson and Ghahramani 2011), although the Wishart distribution does not seem to be a special case of these (?). The Wishart process generalises the squared Bessel process, which is marginally \(\chi^2\)-distributed.
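To make the squared-Bessel connection concrete, here is a minimal simulation sketch, assuming the classic construction \(\boldsymbol{S}_t = \boldsymbol{B}_t^{\top}\boldsymbol{B}_t\) for an \(n \times d\) matrix of independent Brownian motions, which for \(d = 1\) reduces to a squared Bessel process. The function name and parameter values are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def wishart_process(T, n_steps, n, d, rng):
    """Simulate S_t = B_t^T B_t, where B_t is an (n x d) matrix of independent
    standard Brownian motions.

    For d = 1 this is the squared Bessel process of dimension n, whose marginal
    at time t is t * chi^2_n; for d > 1 the marginal at time t is
    Wishart_d(n, t * I_d)."""
    dt = T / n_steps
    increments = rng.normal(scale=np.sqrt(dt), size=(n_steps, n, d))
    B = np.concatenate([np.zeros((1, n, d)), np.cumsum(increments, axis=0)])
    # S_t = B_t^T B_t, shape (n_steps + 1, d, d)
    return np.einsum('tij,tik->tjk', B, B)

S = wishart_process(T=1.0, n_steps=500, n=5, d=3, rng=rng)
```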
4 Inverse Wishart
Does the Inverse Wishart process relate (Shah, Wilson, and Ghahramani 2014; Tracey and Wolpert 2018)? TODO
5 HDP Matrix Gamma Process
Matrix-valued Lévy-Gamma process analogue. See (Meier, Kirch, and Meyer 2020, sec. 2), which uses the multivariate construction of Pérez-Abreu and Stelzer (2014) to construct a family of matrix-variate Gamma processes. That construction is extremely general and somewhat abstract, and is usually tractable only through its Lévy measure.
5.1 AΓ Process
Meier, Kirch, and Meyer (2020) mention a construction less general than the HDP Matrix Gamma which is nonetheless broad and quite useful. We could think of it as the tractable HDP:
A special case of the \(\operatorname{Gamma}_{d \times d}(\alpha, \lambda)\) distribution is the so-called \(A \Gamma\) distribution, which was considered in Pérez-Abreu and Stelzer (2014) and generalized to the Hpd setting in (Meier 2018, sec. 2.4). To elaborate, the \(A \Gamma(\eta, \omega, \Sigma)\) distribution is defined with the parameters \(\eta>d-1, \omega>0\) and \(\Sigma \in \mathcal{S}_{d}^{+}\) as the \(\operatorname{Gamma}_{d \times d}\left(\alpha_{\eta, \Sigma}, \lambda_{\Sigma}\right)\) distribution, with
\[ \alpha_{\eta, \boldsymbol{\Sigma}}(\dd \boldsymbol{U})=|\boldsymbol{\Sigma}|^{-\eta} \operatorname{tr}\left(\boldsymbol{\Sigma}^{-1} \boldsymbol{U}\right)^{-d \eta} \Gamma(d \eta) \tilde{\Gamma}_{d}(\eta)^{-1}|\boldsymbol{U}|^{\eta-d} \dd \boldsymbol{U}, \]
where \(\Gamma\) denotes the Gamma function and \(\tilde{\Gamma}_{d}\) the complex multivariate Gamma function (see Mathai and Provost 2005), and \(\lambda_{\boldsymbol{\Sigma}}(\boldsymbol{U})=\operatorname{tr}\left(\boldsymbol{\Sigma}^{-1} \boldsymbol{U}\right)\). It has the advantage that for \(\boldsymbol{X} \sim A \Gamma(\eta, \omega, \Sigma)\), the formulas for the mean and covariance structure are explicitly known:
\[ \mathrm{E} \boldsymbol{X}=\frac{\omega}{d} \boldsymbol{\Sigma}, \quad \operatorname{Cov} \boldsymbol{X}=\frac{\omega}{d(\eta d+1)}\left(\eta \boldsymbol{I}_{d^{2}}+\boldsymbol{H}\right)(\boldsymbol{\Sigma} \otimes \boldsymbol{\Sigma}), \]
where \(\boldsymbol{H}=\sum_{i, j=1}^{d} \boldsymbol{H}_{i, j} \otimes \boldsymbol{H}_{j, i}\) and \(\boldsymbol{H}_{i, j}\) is the matrix having a one at \((i, j)\) and zeros elsewhere; see (Meier 2018, Lemma 2.8). Thus the \(A\Gamma\) distribution is particularly well suited for Bayesian prior modeling if the prior knowledge is given in terms of mean and covariance structure.
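Since the moments are explicit, a short numerical helper suffices to turn \((\eta, \omega, \Sigma)\) into the implied mean and covariance. This is only a sketch of the formulas quoted above; the function name and example values are mine.

```python
import numpy as np

def agamma_moments(eta, omega, Sigma):
    """Mean and covariance of the A-Gamma(eta, omega, Sigma) distribution,
    per the formulas quoted above (Meier 2018, Lemma 2.8):
        E X   = (omega / d) * Sigma
        Cov X = omega / (d * (eta * d + 1)) * (eta * I_{d^2} + H) (Sigma kron Sigma)
    where H = sum_{i,j} H_{ij} kron H_{ji} (the commutation matrix)."""
    d = Sigma.shape[0]
    mean = omega / d * Sigma
    # build H = sum_{i,j} H_{ij} kron H_{ji}
    H = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            Hij = np.zeros((d, d)); Hij[i, j] = 1.0
            Hji = np.zeros((d, d)); Hji[j, i] = 1.0
            H += np.kron(Hij, Hji)
    cov = omega / (d * (eta * d + 1)) * (eta * np.eye(d * d) + H) @ np.kron(Sigma, Sigma)
    return mean, cov

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
mean, cov = agamma_moments(eta=3.0, omega=4.0, Sigma=Sigma)
```

The covariance here is the \(d^2 \times d^2\) matrix acting on \(\operatorname{vec}(\boldsymbol{X})\), which is the natural object if one wants to match a prior mean and covariance elicited elementwise.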