\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\corr}{\operatorname{Corr}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\rv}[1]{\mathsf{#1}} \renewcommand{\vrv}[1]{\vv{\rv{#1}}} \renewcommand{\disteq}{\stackrel{d}{=}} \renewcommand{\gvn}{\mid} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}}\]
Processes with Gamma marginals. Usually when we discuss Gamma processes we mean Gamma-Lévy processes. Such processes have independent Gamma increments, much like a Wiener process has independent Gaussian increments and a Poisson process has independent Poisson increments. Gamma processes provide the classic subordinator models, i.e. non-decreasing Lévy processes.
There are other processes with Gamma marginals. Much as the Gaussian process family includes many processes with Gaussian marginals, so does the Gamma family. It has a different set of natural algebraic relations to the Gaussian, though. For example, the class of Gaussian processes is closed under addition and scaling; the class of Gamma processes is closed under addition and thinning, and some other weirder operations, all of which require a little more background knowledge to understand. It turns out there are complications with multivariate Gamma processes, so those are handled separately.
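For concreteness, the two closure operations look like this (the second is the beta-gamma algebra, which does the heavy lifting in the thinned construction below): independent \(\operatorname{Gamma}(\alpha_{1}, \lambda)\) and \(\operatorname{Gamma}(\alpha_{2}, \lambda)\) variables sum to a \(\operatorname{Gamma}(\alpha_{1}+\alpha_{2}, \lambda)\) variable, and conversely, if \(\rv{g} \sim \operatorname{Gamma}(\alpha_{1}+\alpha_{2}, \lambda)\) and \(\rv{b} \sim \operatorname{Beta}(\alpha_{1}, \alpha_{2})\) are independent, then \[ \rv{b}\,\rv{g} \sim \operatorname{Gamma}(\alpha_{1}, \lambda) \quad\text{and}\quad (1-\rv{b})\,\rv{g} \sim \operatorname{Gamma}(\alpha_{2}, \lambda), \] with the two thinned variables again independent of each other.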
Gamma processes are a natural model for spiky things
Gamma distributions and processes and such crop up all over the place. See also Pólya-Gamma distribution.
OK, but if a process’s marginals are “Gamma-distributed”, what does that even mean? First, go and read about Gamma distributions. From that we can construct the Lévy Gamma process, which is usually what we mean when we talk about Gamma processes. However, there are many more processes that we can construct with Gamma marginals; those others are here.
Then go and read about Beta and Dirichlet distributions, and the Gamma-Beta notebook.
Now we are ready to look at stationary dependent Gamma processes.
There are Ornstein–Uhlenbeck-type constructions for Gamma processes (Gaver and Lewis 1980). See R. L. Wolpert (2021) for a modern summary and overview of several popular alternatives.
For fixed \(\alpha, \lambda>0\) these notes present six different stationary time series, each with Gamma \(\rv{g}(t) \sim \operatorname{Gamma}(\alpha, \lambda)\) univariate marginal distributions and autocorrelation function \(\rho^{|s-t|}\) for \(\rv{g}(s), \rv{g}(t)\). Each will be defined on some time index set \(\mathcal{T}\), either \(\mathcal{T}=\mathbb{Z}\) or \(\mathcal{T}=\mathbb{R}\).
Five of the six constructions can be applied to other infinitely divisible (ID) distributions as well, both continuous ones (normal, \(\alpha\)-stable, etc.) and discrete (Poisson, negative binomial, etc.). For the Poisson and Gaussian distributions specifically, all but one of them (the Markov change-point construction) coincide: essentially, there is just one “AR(1)-like” Gaussian process (namely, the \(AR(1)\) process in discrete time, or the Ornstein–Uhlenbeck process in continuous time), and there is just one \(AR(1)\)-like Poisson process. For other ID distributions, however, and in particular for the Gamma, each of these constructions yields a process with the same univariate marginal distribution and the same autocorrelation but with different joint distributions at three or more times.
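For reference, that unique “AR(1)-like” Gaussian process is the familiar stationary recursion \[ \rv{x}(t) = \rho\, \rv{x}(t-1) + \sqrt{1-\rho^{2}}\, \sigma\, \rv{\epsilon}(t), \quad \rv{\epsilon}(t) \stackrel{\text{iid}}{\sim} \mathcal{N}(0,1), \] which has stationary marginal \(\mathcal{N}(0, \sigma^{2})\) and autocorrelation \(\rho^{|s-t|}\); its continuous-time analogue is the Ornstein–Uhlenbeck process.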
Thinned Autoregressive Gamma
To my mind, this is the most natural one.
We let \[ \rv{g}(0) \sim \operatorname{Gamma}(\alpha, \lambda) \] and, for \(t \in \mathbb{N}\), set \[ \rv{g}(t):=\xi(t)+\zeta(t) \] where \[ \begin{aligned} &\xi(t):=\rv{b}(t) \cdot \rv{g}(t-1), \quad \rv{b}(t) \sim \operatorname{Beta}(\alpha \rho, \alpha \bar{\rho}) \\ &\zeta(t) \sim \operatorname{Gamma}(\alpha \bar{\rho}, \lambda) \end{aligned} \] with \(\bar{\rho}:=(1-\rho)\) and all the \(\left\{\rv{b}(t)\right\}\) and \(\left\{\zeta(t)\right\}\) independent. Then \(\xi(t) \sim \operatorname{Gamma}(\alpha \rho, \lambda)\) and \(\zeta(t) \sim \operatorname{Gamma}(\alpha \bar{\rho}, \lambda)\) are independent, with sum \(\rv{g}(t) \sim \operatorname{Gamma}(\alpha, \lambda)\). Thus \(\left\{\rv{g}(t)\right\}\) is a Markov process with Gamma univariate marginal distribution \(\rv{g}(t) \sim \operatorname{Gamma}(\alpha, \lambda)\) and joint characteristic function \[ \begin{aligned} \chi(s, t) &=\Ex \exp\left(i s \rv{g}(0)+i t \rv{g}(1)\right) \\ &=\Ex \exp\left\{i s\left(\rv{g}(0)-\xi(1)\right)+i(s+t) \xi(1)+i t \zeta(1)\right\} \\ &=(1-i s / \lambda)^{-\alpha \bar{\rho}}(1-i(s+t) / \lambda)^{-\alpha \rho}(1-i t / \lambda)^{-\alpha \bar{\rho}}. \end{aligned} \] Here \(\rv{g}(0)-\xi(1)=(1-\rv{b}(1))\rv{g}(0) \sim \operatorname{Gamma}(\alpha \bar{\rho}, \lambda)\), \(\xi(1) \sim \operatorname{Gamma}(\alpha \rho, \lambda)\) and \(\zeta(1)\) are mutually independent, by the beta-gamma algebra. Note that, unlike the additive autoregressive construction below, the characteristic function of this one is symmetric in the time arguments, and therefore the process is time-reversible. In some senses this is a “more natural” autoregressive process than the Zeta-innovation AR(1) process. For one, it is easy to imagine how to generalize it to vector autoregressive processes. For another, there is a natural generalization to continuous time (R. L. Wolpert 2021, 2.6) using the Beta process in the sense of Hjort (1990) and Thibaux and Jordan (2007).
What does this look like in practice?
set.seed(105)

# Simulate a stationary thinned autoregressive Gamma series:
#   g(t) = b(t) * g(t-1) + zeta(t)
# with b(t) ~ Beta(alpha*rho, alpha*(1-rho))
# and zeta(t) ~ Gamma(alpha*(1-rho), lambda).
gamp = function(T, alpha, lambda, rho) {
  g = rgamma(1, alpha, rate = lambda)  # stationary start: Gamma(alpha, lambda)
  b = rbeta(T, alpha * rho, alpha * (1 - rho))
  zeta = rgamma(T, alpha * (1 - rho), rate = lambda)
  gs = numeric(T)
  for (i in 1:T) {
    g = b[i] * g + zeta[i]
    gs[i] = g
  }
  gs
}

T = 10000
ts = (1:T) / 100
# three parameter settings sharing the same mean alpha/lambda = 10
plot(ts, gamp(T, 1.0, 0.1, 0.999),
  type = "l", col = 2,
  ylim = c(0, 25), ylab = "", xlab = "time")
lines(ts, gamp(T, 10, 1.0, 0.999), col = 3)
lines(ts, gamp(T, 100, 10.0, 0.999), col = 4)
legend("topright",
  c("alpha=1, lambda=0.1", "alpha=10, lambda=1", "alpha=100, lambda=10"),
  lty = 1, col = 2:4)
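As a quick sanity check (my addition, a sketch using the gamp function above), the empirical moments and autocorrelations should match the theory: mean \(\alpha / \lambda\), variance \(\alpha / \lambda^{2}\), and lag-\(k\) autocorrelation \(\rho^{k}\).

# Sanity check: empirical moments and autocorrelation versus theory
alpha = 10; lambda = 1; rho = 0.9
x = gamp(1e6, alpha, lambda, rho)
c(mean(x), alpha / lambda)   # should both be ~10
c(var(x), alpha / lambda^2)  # should both be ~10
# lag-1..5 autocorrelations versus rho^k
rbind(empirical = acf(x, lag.max = 5, plot = FALSE)$acf[2:6],
      theory = rho^(1:5))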
Additive Zeta innovations
Fix \(0 \leq \rho<1\). Let \(\rv{g}(0) \sim \operatorname{Gamma}(\alpha, \lambda)\) and for \(t \in \mathbb{N}\) define \(\rv{g}(t)\) recursively by \[ \rv{g}(t):=\rho \rv{g}(t-1)+\zeta(t) \] for iid \(\left\{\zeta(t)\right\}\) (see the Gamma-Zeta distribution below). The process \(\left\{\rv{g}(t)\right\}\) has Gamma univariate marginal distribution \(\rv{g}(t) \sim \operatorname{Gamma}(\alpha, \lambda)\) for every \(t\) and, at consecutive times \(0, 1\), joint characteristic function \[ \begin{aligned} \chi(s, t) &=\Ex \exp \left(i s \rv{g}(0)+i t \rv{g}(1)\right) \\ &=\Ex \exp \left(i(s+\rho t) \rv{g}(0)+i t \zeta(1)\right) \\ &=\left[\frac{(1-i(s+\rho t) / \lambda)(1-i t / \lambda)}{1-i t \rho / \lambda}\right]^{-\alpha}. \end{aligned} \] Unlike Gaussian additive autoregressive processes, where the marginal and innovation processes are both Gaussian, in Gamma additive autoregressive processes the marginal is Gamma but the innovation is not (Lawrance 1982; Walker 2000). If we want a process with Gamma innovations, we should use the thinned construction above instead. We simulate this additive process below, once we know how to sample its innovations.
Exercise: Generalise this to continuous time.
Gamma-Zeta distribution
I don’t know a name for the distribution of the \(\zeta(t)\) RVs from earlier. Let us go with Gamma-Zeta, because plain Zeta is taken.
It is easiest to describe that RV in terms of the characteristic function \(E e^{i \omega \zeta(t)}=(1-i \omega / \lambda)^{-\alpha}(1-i \rho \omega / \lambda)^{\alpha}=\left[\frac{\lambda-i \omega}{\lambda-i \rho \omega}\right]^{-\alpha}.\)
Simulating such RVs is easy via the algorithm of Walker (2000):
\[\rv{u}(t) \sim \operatorname{Gamma}(\alpha, 1), \quad \rv{N}(t) \gvn \rv{u}(t) \sim \operatorname{Po}\left(\frac{1-\rho}{\rho} \rv{u}(t)\right), \quad \zeta(t) \gvn \rv{N}(t) \sim \operatorname{Gamma}\left(\rv{N}(t), \frac{\lambda}{\rho}\right),\] with the convention that \(\operatorname{Gamma}(0, \cdot)\) denotes the point mass at \(0\) (so \(\zeta(t)=0\) whenever \(\rv{N}(t)=0\)).
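Here is a minimal sketch of that sampler in R (the names rgammazeta and gamp_additive are my inventions for this notebook, not Walker’s), together with the additive recursion from the previous section driven by it, plus checks of the characteristic function and the marginal moments.

# Sample n draws from the Gamma-Zeta innovation distribution
# via Walker's (2000) Gamma -> Poisson -> Gamma mixture
rgammazeta = function(n, alpha, lambda, rho) {
  u = rgamma(n, alpha, rate = 1)
  N = rpois(n, (1 - rho) / rho * u)
  # rgamma with shape 0 returns 0, i.e. the point mass at 0
  rgamma(n, N, rate = lambda / rho)
}

# Additive Gamma AR(1): g(t) = rho * g(t-1) + zeta(t)
gamp_additive = function(T, alpha, lambda, rho) {
  g = rgamma(1, alpha, rate = lambda)  # stationary start
  zeta = rgammazeta(T, alpha, lambda, rho)
  gs = numeric(T)
  for (i in 1:T) {
    g = rho * g + zeta[i]
    gs[i] = g
  }
  gs
}

# empirical chf at omega = 0.7 versus ((1 - i w/lambda)/(1 - i rho w/lambda))^-alpha
z = rgammazeta(1e6, 10, 1, 0.9)
w = 0.7
c(mean(exp(1i * w * z)), ((1 - 1i * w) / (1 - 1i * 0.9 * w))^(-10))
# the AR(1) marginal should again be Gamma(alpha, lambda): mean 10, variance 10
x = gamp_additive(1e6, 10, 1, 0.9)
c(mean(x), var(x))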
However, this distribution does not seem to have an obvious closed-form density, except via Fourier inversion of the characteristic function. Let us set it aside for now, eh?
Change-point Gamma
Also from R. L. Wolpert (2021). What marginals other than Gamma can I construct with this?
Let \(\left\{\zeta_{n}: n \in \mathbb{Z}\right\} \stackrel{\text{iid}}{\sim} \operatorname{Gamma}(\alpha, \beta)\) be iid Gamma random variables and let \(N_{t}\) be a standard Poisson process indexed by \(t \in \mathbb{R}\) (so \(N_{0}=0\) and \(N_{t}-N_{s} \sim \operatorname{Po}(t-s)\) for all \(-\infty<s<t<\infty\), with independent increments), and set \[ X_{t}:=\zeta_{n}, \quad n=N_{\lambda t}, \] where \(\lambda:=\log (1 / \rho)\), so that the probability of no change point between times \(s\) and \(t\) is \(e^{-\lambda|s-t|}=\rho^{|s-t|}\). Then each \(X_{t} \sim \operatorname{Gamma}(\alpha, \beta)\) and, for \(s, t \in \mathbb{R}\), \(X_{s}\) and \(X_{t}\) are either identical (with probability \(\rho^{|s-t|}\)) or independent, reminiscent of a Metropolis MCMC chain. The chf is \[ \begin{aligned} \chi(s, t) &=\Ex \exp \left(i s X_{0}+i t X_{1}\right) \\ &=\rho(1-i(s+t) / \beta)^{-\alpha}+\bar{\rho}(1-i s / \beta)^{-\alpha}(1-i t / \beta)^{-\alpha} \end{aligned} \] and once again the marginal distribution is \(X_{t} \sim \operatorname{Gamma}(\alpha, \beta)\) and the autocorrelation function is \(\operatorname{Corr}\left(X_{s}, X_{t}\right)=\rho^{|s-t|}\).
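A minimal sketch of this in R (the function name changepoint_gamma is mine): simulate the Poisson change-point counts between consecutive grid times, then index into a supply of iid Gamma values, one per regime.

# Change-point Gamma process sampled on a time grid
changepoint_gamma = function(times, alpha, beta, rho) {
  lam = log(1 / rho)  # Poisson rate so that P(no change in (s,t]) = rho^|t-s|
  # cumulative change-point counts at the grid times
  n = c(0, cumsum(rpois(length(times) - 1, lam * diff(times))))
  zeta = rgamma(max(n) + 1, alpha, rate = beta)  # one iid Gamma value per regime
  zeta[n + 1]
}

set.seed(1)
times = seq(0, 100, by = 0.01)
plot(times, changepoint_gamma(times, alpha = 10, beta = 1, rho = 0.9),
  type = "s", col = 2, xlab = "time", ylab = "")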