Gamma distributions
October 14, 2019 — July 28, 2024
The density of the Gamma distribution with shape $\alpha>0$ and rate $\lambda>0$ is
$$f(x;\alpha,\lambda)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x},\quad x>0.$$
If $X\sim\operatorname{Gamma}(\alpha,\lambda)$ then $\mathbb{E}X=\alpha/\lambda$ and $\operatorname{Var}X=\alpha/\lambda^{2}$.
The Gamma distribution has lots of neat properties, such as divisibility. Some more are outlined in the Gamma-Dirichlet algebra section.
1 All moments
The moment generating function of the Gamma distribution is
$$M(t)=\mathbb{E}e^{tX}=\left(1-\frac{t}{\lambda}\right)^{-\alpha},\quad t<\lambda.$$
Here all moments follow by differentiation at $t=0$; in particular
$$\mathbb{E}X^{n}=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)\,\lambda^{n}}.$$
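As a sanity check, the moment formula $\mathbb{E}X^{n}=\Gamma(\alpha+n)/(\Gamma(\alpha)\lambda^{n})$ can be verified by Monte Carlo; a minimal sketch (parameter values are arbitrary):

```python
import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(0)
alpha, lam = 3.0, 2.0
# numpy's gamma sampler is parameterised by shape and *scale* = 1/rate
x = rng.gamma(alpha, 1.0 / lam, size=1_000_000)

for n in (1, 2, 3):
    exact = gamma_fn(alpha + n) / (gamma_fn(alpha) * lam**n)
    mc = (x**n).mean()
    assert abs(mc - exact) / exact < 0.05  # Monte Carlo agrees with the formula
```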
2 As exponential family
The Gamma distribution is a two-parameter exponential family with natural parameters $\eta=(\eta_{1},\eta_{2})=(\alpha-1,-\lambda)$ and sufficient statistics $T(x)=(\log x,\,x)$.
The log-partition function is
$$A(\eta)=\log\Gamma(\eta_{1}+1)-(\eta_{1}+1)\log(-\eta_{2}).$$
In terms of the natural parameters, the mean and variance are
$$\mathbb{E}X=\frac{\eta_{1}+1}{-\eta_{2}},\qquad\operatorname{Var}X=\frac{\eta_{1}+1}{\eta_{2}^{2}}.$$
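To make the exponential-family form concrete, here is a sketch of the Gamma log-density written in natural parameters, using the convention $\eta=(\alpha-1,-\lambda)$ with sufficient statistics $(\log x, x)$, checked against scipy (the function name is mine):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist

def gamma_logpdf_expfam(x, eta1, eta2):
    """Gamma log-density via its exponential-family form.

    Natural parameters: eta1 = alpha - 1, eta2 = -lambda.
    Sufficient statistics: (log x, x).
    Log-partition: A(eta) = gammaln(eta1 + 1) - (eta1 + 1) * log(-eta2).
    """
    A = gammaln(eta1 + 1) - (eta1 + 1) * np.log(-eta2)
    return eta1 * np.log(x) + eta2 * x - A

alpha, lam = 2.5, 1.7
x = np.linspace(0.1, 5, 50)
ours = gamma_logpdf_expfam(x, alpha - 1, -lam)
ref = gamma_dist.logpdf(x, a=alpha, scale=1 / lam)
assert np.allclose(ours, ref)  # matches the standard parameterisation
```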
2.1 Tempered
A Gamma density multiplied by an exponential tilt $e^{-sx}$ is, after renormalisation, another Gamma density:
$$f(x;\alpha,\lambda)\,e^{-sx}\propto x^{\alpha-1}e^{-(\lambda+s)x}\propto f(x;\alpha,\lambda+s).$$
So if we weight the density down by $e^{-sx}$, we stay in the Gamma family; only the rate changes, from $\lambda$ to $\lambda+s$.
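This tilting identity is easy to sanity-check pointwise with scipy; a minimal sketch, with arbitrary shape, rate, and tilt values (the normalising constant of the tilted density is $(\lambda/(\lambda+s))^{\alpha}$):

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

alpha, lam, s = 2.0, 1.0, 0.5
x = np.linspace(0.1, 10, 200)
# exponentially tilt the Gamma(alpha, lam) density by exp(-s x)
tilted = gamma_dist.pdf(x, a=alpha, scale=1 / lam) * np.exp(-s * x)
# renormalise by the tilt's normalising constant (lam / (lam + s))**alpha
renorm = tilted / (lam / (lam + s)) ** alpha
# the result is exactly the Gamma(alpha, lam + s) density
assert np.allclose(renorm, gamma_dist.pdf(x, a=alpha, scale=1 / (lam + s)))
```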
3 Linear combinations of Gammas
Is the Gamma family closed under addition? For fixed scale/rate parameters, yes. See Gamma-Dirichlet algebra.
If we are summing gamma variates which differ in the rate parameter, the sum is in general no longer Gamma-distributed; it lands in the larger class of generalized Gamma convolutions discussed below.
Note that multiplying a Gamma RV by a positive scalar rescales the rate, so Gamma variates are not closed under affine combination as Gaussian ones are. The moral is that we cannot assume the convenient additivity of arbitrary linear combinations that the Gaussian process community enjoys. What kind of algebra do we get instead?
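The rescaling fact is worth pinning down: if $X\sim\operatorname{Gamma}(\alpha,\lambda)$ and $c>0$ then $cX\sim\operatorname{Gamma}(\alpha,\lambda/c)$. A quick Monte Carlo illustration (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, c = 3.0, 2.0, 5.0
# c * Gamma(alpha, rate=lam) ...
x = c * rng.gamma(alpha, 1 / lam, size=500_000)
# ... has the same law as Gamma(alpha, rate=lam/c), i.e. scale c/lam
y = rng.gamma(alpha, c / lam, size=500_000)
assert abs(x.mean() - y.mean()) < 0.05
assert abs(x.var() - y.var()) / x.var() < 0.02
```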
4 Gamma-Dirichlet algebra
There are various operations which give us similar conveniences, however. For those, we also need to be aware of the Dirichlet and Beta distributions. Here are some useful properties, drawn from, or extrapolated from, Dufresne (1998), Lin (2016), and Pérez-Abreu and Stelzer (2014), which exploit these relationships.
First, we fix some notation. From here on, all variables denoted $G_{\alpha}$ (resp. $G_{\alpha,\lambda}$) are Gamma-distributed with shape $\alpha$ and rate $1$ (resp. rate $\lambda$), all variables denoted $B_{\alpha,\beta}$ are Beta-distributed, and variables appearing in the same expression are independent unless noted otherwise.
4.1 Superposition
Gamma variates with a common rate add:
$$G_{\alpha_{1},\lambda}+G_{\alpha_{2},\lambda}\sim\operatorname{Gamma}(\alpha_{1}+\alpha_{2},\lambda).$$
4.2 Multiplication
If $X\sim\operatorname{Gamma}(\alpha,\lambda)$ and $c>0$, then $cX\sim\operatorname{Gamma}(\alpha,\lambda/c)$.
4.3 Beta thinning
If $G_{\alpha+\beta,\lambda}$ and $B_{\alpha,\beta}$ are independent, then
$$B_{\alpha,\beta}\,G_{\alpha+\beta,\lambda}\sim\operatorname{Gamma}(\alpha,\lambda),$$
and moreover $B_{\alpha,\beta}G_{\alpha+\beta,\lambda}$ and $(1-B_{\alpha,\beta})G_{\alpha+\beta,\lambda}$ are independent.
The Gamma-bridge construction arises from this thinning procedure.
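Beta thinning is easy to check by simulation: the thinned part should have the Gamma marginal of the smaller shape, and it should be uncorrelated with the remainder. A minimal sketch (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, lam, n = 2.0, 3.0, 1.5, 500_000
g = rng.gamma(a + b, 1 / lam, size=n)  # G_{a+b, lam}
u = rng.beta(a, b, size=n)             # B_{a,b}, independent of g
thinned = u * g
# marginal should be Gamma(a, lam): check first two moments
assert abs(thinned.mean() - a / lam) < 0.01
assert abs(thinned.var() - a / lam**2) < 0.02
# the thinned part and the remainder are independent
rest = (1 - u) * g
assert abs(np.corrcoef(thinned, rest)[0, 1]) < 0.01
```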
4.4 Dirichlet thinning
Grab a set of independent Gamma rvs $G_{\alpha_{i},\lambda}$, $i=1,\dots,n$, and set $S=\sum_{i}G_{\alpha_{i},\lambda}$. Then $S\sim\operatorname{Gamma}\left(\sum_{i}\alpha_{i},\lambda\right)$, the normalised vector
$$\left(G_{\alpha_{1},\lambda}/S,\dots,G_{\alpha_{n},\lambda}/S\right)\sim\operatorname{Dirichlet}(\alpha_{1},\dots,\alpha_{n}),$$
and the vector is independent of $S$.
Conversely, take some arbitrary independent $S\sim\operatorname{Gamma}\left(\sum_{i}\alpha_{i},\lambda\right)$ and $D\sim\operatorname{Dirichlet}(\alpha_{1},\dots,\alpha_{n})$; then the coordinates of $SD$ are independent, with $SD_{i}\sim\operatorname{Gamma}(\alpha_{i},\lambda)$.
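The converse direction, building independent Gammas by splitting a single Gamma total with Dirichlet weights, can be checked by Monte Carlo; a minimal sketch (shapes and rate arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
alphas = np.array([1.0, 2.0, 3.5])
lam, n = 2.0, 400_000
s = rng.gamma(alphas.sum(), 1 / lam, size=n)  # total mass Gamma(sum(alphas), lam)
d = rng.dirichlet(alphas, size=n)             # independent Dirichlet weights
parts = d * s[:, None]                         # thinned components
# each component should be Gamma(alpha_i, lam): check first two moments
for i, a in enumerate(alphas):
    assert abs(parts[:, i].mean() - a / lam) < 0.01
    assert abs(parts[:, i].var() - a / lam**2) < 0.02
```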
4.5 Beta thickening
Grab a set of independent Gamma rvs, $G_{\alpha_{i},\lambda}$, $i=1,\dots,n$.
TODO: Check this. Also, is it actually useful? I thought it was for coupling Gamma processes, but it turned out not to be necessary in my construction.
4.6 Other
There are many other nice properties and relations.
The properties I include in this section fail to define a formal algebraic structure, but they do define a bunch of operations that preserve membership of a certain distributional family, or near enough. We can define the sets and operations precisely if we really need an algebra.
Thematically, the operations that arise most often in this Gamma-“algebra” are not quite the same as in the Gaussian process “algebra”. There we are usually concerned with linear algebra, in that many linear operations on objects which are Gaussian in a very broad sense still end up Gaussian and in possession of a closed-form solution. Here we are mostly concerned with different operations: addition, yes, but also thinning (Steutel and van Harn 2003) rather than multiplication.
Yor (2007) talks about the Gamma-Beta algebra of Dufresne (1998) which relates certain Markov chains of Gamma distribution and Beta distributions. Dufresne (1998)’s construction is a formal algebra, although one that I only pull a couple of trivial cases from. Read that paper for more than the following taster:
For any $\alpha,\beta>0$, with $G_{\alpha+\beta}$ and $B_{\alpha,\beta}$ independent,
$$B_{\alpha,\beta}\,G_{\alpha+\beta}\overset{d}{=}G_{\alpha}.$$
For more, see the Gamma-Beta notebook.
5 Conjugate prior for shape and rate
Fink (1997) summarises Miller (1980), which extends Damsleth (1975):
Suppose that data $x_{1},\dots,x_{n}$
are independent and identically distributed from a gamma process where both the shape, $\alpha$, and the reciprocal scale, $\beta$, parameters are unknown. The likelihood function is, proportionally in the parameters,
$$L(\alpha,\beta)\propto\frac{\left(\prod_{i}x_{i}\right)^{\alpha-1}e^{-\beta\sum_{i}x_{i}}\beta^{n\alpha}}{\Gamma(\alpha)^{n}}.$$
The sufficient statistics are
$n$, the number of data points, $P=\prod_{i}x_{i}$, the product of the data, and $S=\sum_{i}x_{i}$, the sum of the data. The factors of [the equation] proportional to the parameters $\alpha$ and $\beta$ make up the kernel of the conjugate prior,
$$\pi(\alpha,\beta)\propto\frac{p^{\alpha-1}e^{-q\beta}\beta^{\alpha r}}{\Gamma(\alpha)^{s}}.$$
We specify the conjugate prior with hyperparameters $(p,q,r,s)$. The posterior joint distribution of
$\alpha$ and $\beta$ is specified by the hyperparameters
$$p'=pP,\qquad q'=q+S,\qquad r'=r+n,\qquad s'=s+n.$$
From this we find the predictive by integrating the likelihood of a new observation against the posterior; the normalising constants involved have no closed form and must be computed numerically.
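Under the $(p,q,r,s)$ hyperparameterisation, with prior kernel $p^{\alpha-1}e^{-q\beta}\beta^{\alpha r}/\Gamma(\alpha)^{s}$, the posterior update is one line; here is a sketch (the helper name is mine, and the parameterisation is as reconstructed above):

```python
import numpy as np

def update_gamma_conjugate(p, q, r, s, x):
    """Posterior hyperparameters for the (p, q, r, s) conjugate prior over
    the Gamma's shape and rate. Uses the sufficient statistics
    prod(x), sum(x), and len(x)."""
    x = np.asarray(x)
    return p * x.prod(), q + x.sum(), r + len(x), s + len(x)

# toy data: three observations, starting from (1, 1, 1, 1)
p1, q1, r1, s1 = update_gamma_conjugate(1.0, 1.0, 1.0, 1.0, [0.5, 1.5, 2.0])
assert (r1, s1) == (4.0, 4.0)      # both counts incremented by n = 3
assert q1 == 5.0                   # 1 + (0.5 + 1.5 + 2.0)
assert abs(p1 - 1.5) < 1e-12       # 1 * (0.5 * 1.5 * 2.0)
```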
If you really want a clean conjugate-prior relationship for a non-negative variate, consider something a little (but only a little) less messy, such as the [inverse Gaussian](./inverse_gaussian_distribution.qmd) or lognormal distributions.
6 Generalized Gamma Convolution
As noted under divisible distributions, the class of Generalized Gamma Convolutions (GGC) is a construction that represents some startling (to me) processes as a certain type of generalization of Gamma distributions. This family includes the Pareto (Thorin 1977a) and lognormal (Thorin 1977b) distributions. Those Thorin papers introduced the idea originally; it may be easier to start from one of the textbooks or overviews (Bondesson 2012; James, Roynette, and Yor 2008; Steutel and van Harn 2003; Barndorff-Nielsen, Maejima, and Sato 2006).
AFAICT this allows us to prove lots of nice things about such distributions. It is less easy to get implementable computational methods this way.
The GGC convolves a Gamma distribution with some measure and makes a new divisible distribution. James, Roynette, and Yor (2008):
we say that a positive r.v. $X$ is a generalized gamma convolution if there exists a positive Radon measure $\mu$ on $(0,\infty)$ such that:
$$\mathbb{E}\left[e^{-\lambda X}\right]=\exp\left(-\int_{0}^{\infty}\log\left(1+\frac{\lambda}{\theta}\right)\mu(d\theta)\right),\qquad\lambda\geq0,$$
with $\int_{(0,1]}|\log\theta|\,\mu(d\theta)<\infty$ and $\int_{(1,\infty)}\theta^{-1}\,\mu(d\theta)<\infty$.
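When the Thorin measure $\mu$ is a finite sum of point masses $m_{j}\delta_{\theta_{j}}$, the GGC is simply an independent sum of Gammas $\sum_{j}G_{m_{j},\theta_{j}}$ and the Laplace transform is a finite product. A quick Monte Carlo check of that special case (masses and rates arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
# a discrete Thorin measure: mass m_j at rate theta_j
m = np.array([0.5, 1.0, 2.0])
theta = np.array([1.0, 3.0, 10.0])
n = 400_000
# the corresponding GGC is an independent sum of Gamma(m_j, theta_j)
x = sum(rng.gamma(mj, 1 / tj, size=n) for mj, tj in zip(m, theta))
for lam in (0.5, 1.0, 2.0):
    # exp(-sum_j m_j log(1 + lam/theta_j)), the GGC Laplace transform
    exact = np.exp(np.sum(m * np.log(theta / (theta + lam))))
    mc = np.exp(-lam * x).mean()
    assert abs(mc - exact) < 0.01
```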
Barndorff-Nielsen, Maejima, and Sato (2006) and Pérez-Abreu and Stelzer (2014) generalize the GGC to vector- and matrix-valued distributions.
7 Parameter estimation
The method of moments is obvious. The maximum likelihood estimate is surprisingly fiddly and has no closed form, but a low-bias closed-form approximation is given by Ye and Chen (2017). Wikipedia’s summary:
The estimate for the shape $\alpha$
is
$$\hat{\alpha}=\frac{n\sum_{i}x_{i}}{n\sum_{i}x_{i}\ln x_{i}-\sum_{i}\ln x_{i}\sum_{i}x_{i}}$$
and the estimate for the scale $\theta$
is
$$\hat{\theta}=\frac{1}{n^{2}}\left(n\sum_{i}x_{i}\ln x_{i}-\sum_{i}\ln x_{i}\sum_{i}x_{i}\right).$$
Using the sample mean of $x$, the sample mean of $\ln x$, and the sample mean of the product $x\ln x$ simplifies the expressions to:
$$\hat{\alpha}=\frac{\bar{x}}{\overline{x\ln x}-\bar{x}\,\overline{\ln x}},\qquad\hat{\theta}=\overline{x\ln x}-\bar{x}\,\overline{\ln x}.$$
If the rate parameterization is used, the estimate of the rate is $\hat{\lambda}=1/\hat{\theta}$. These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They have, however, similar efficiency to the maximum likelihood estimators.
Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale $\theta$
is
$$\tilde{\theta}=\frac{n}{n-1}\,\hat{\theta}.$$
A bias correction for the shape parameter $\alpha$
is given as (Louzada, Ramos, and Ramos 2019)
$$\tilde{\alpha}=\hat{\alpha}-\frac{1}{n}\left(3\hat{\alpha}-\frac{2}{3}\,\frac{\hat{\alpha}}{1+\hat{\alpha}}-\frac{4}{5}\,\frac{\hat{\alpha}}{(1+\hat{\alpha})^{2}}\right).$$
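These closed-form estimators are a few lines of numpy; here is a sketch implementation (function name mine), with both bias corrections applied:

```python
import numpy as np

def gamma_mixed_log_moment(x):
    """Mixed type log-moment estimators (Ye and Chen 2017 style) for
    the Gamma shape alpha and scale theta, with bias corrections."""
    x = np.asarray(x)
    n = len(x)
    num = n * np.sum(x * np.log(x)) - np.sum(np.log(x)) * np.sum(x)
    alpha_hat = n * np.sum(x) / num
    theta_hat = num / n**2
    # bias corrections for scale and shape
    theta_hat *= n / (n - 1)
    alpha_hat -= (3 * alpha_hat
                  - (2 / 3) * alpha_hat / (1 + alpha_hat)
                  - (4 / 5) * alpha_hat / (1 + alpha_hat) ** 2) / n
    return alpha_hat, theta_hat

rng = np.random.default_rng(5)
x = rng.gamma(2.5, 2.0, size=100_000)  # true shape 2.5, true scale 2.0
a_hat, t_hat = gamma_mixed_log_moment(x)
assert abs(a_hat - 2.5) < 0.1
assert abs(t_hat - 2.0) < 0.1
```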
A Bayesian update of similar form is given in Louzada and Ramos (2018), but in a slightly different parameterisation, so I will need to come back and translate it when I have time.
8 Simulating Gamma variates
8.1 Univariate
A Gamma variate can be generated by many methods (Ahrens and Dieter 1974), e.g. from a transformed normal and a uniform random variable (Ahrens and Dieter 1982), or from two uniforms, depending on the parameter range. Most methods involve a rejection step. Here is Devroye’s (2006) summary of the Johnk and Berman beta and gamma generators, the gamma versions being valid for shape $a\in(0,1)$:
Johnk’s beta generator

REPEAT Generate iid uniform [0,1] random variates $U,V$
  Set $X\leftarrow U^{1/a}$, $Y\leftarrow V^{1/b}$
UNTIL $X+Y\leq1$
RETURN $X/(X+Y)$ [which is $\operatorname{Beta}(a,b)$]

Berman’s beta generator

REPEAT Generate iid uniform [0,1] random variates $U,V$
  Set $X\leftarrow U^{1/a}$, $Y\leftarrow V^{1/b}$
UNTIL $X+Y\leq1$
RETURN $X$ [which is $\operatorname{Beta}(a,b+1)$]

Johnk’s gamma generator

REPEAT Generate iid uniform [0,1] random variates $U,V$
  Set $X\leftarrow U^{1/a}$, $Y\leftarrow V^{1/(1-a)}$
UNTIL $X+Y\leq1$
Generate an exponential random variate $E$
RETURN $EX/(X+Y)$ [which is $\operatorname{Gamma}(a,1)$]

Berman’s gamma generator

REPEAT Generate iid uniform [0,1] random variates $U,V$
  Set $X\leftarrow U^{1/a}$, $Y\leftarrow V^{1/(1-a)}$
UNTIL $X+Y\leq1$
Generate a gamma(2) random variate $Z$
RETURN $XZ$ [which is $\operatorname{Gamma}(a,1)$]
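Johnk’s gamma generator is short enough to sketch directly; this is an illustrative implementation for shape $a\in(0,1)$, not tuned for speed (the rejection loop accepts with probability $\Gamma(1+a)\Gamma(2-a)$):

```python
import numpy as np

def johnk_gamma(a, rng):
    """Johnk's rejection sampler for Gamma(a, 1), valid for 0 < a < 1."""
    assert 0 < a < 1
    while True:
        u, v = rng.random(), rng.random()
        x, y = u ** (1 / a), v ** (1 / (1 - a))
        if x + y <= 1:
            # given acceptance, x / (x + y) is Beta(a, 1 - a);
            # beta-thinning an Exp(1) = Gamma(1, 1) variate yields Gamma(a, 1)
            e = rng.exponential()
            return e * x / (x + y)

rng = np.random.default_rng(6)
a = 0.4
samples = np.array([johnk_gamma(a, rng) for _ in range(20_000)])
assert abs(samples.mean() - a) < 0.03  # Gamma(a, 1) has mean a
```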