Gamma distributions

October 14, 2019 — July 28, 2024

Lévy processes
probability
stochastic processes
time series
Figure 1: The Gamma distribution is a popular model for survival times.

The density $g(x\mid\alpha,\lambda)$ of the univariate Gamma is
$$g(x\mid\alpha,\lambda)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x},\qquad x\geq 0.$$
People refer to this as the shape-rate parameterisation, with rate $\lambda$ and shape $\alpha$. I find it easier to remember in log form:
$$\log g(x\mid\alpha,\lambda)=\alpha\log\lambda+(\alpha-1)\log x-\lambda x-\log\Gamma(\alpha).$$

If $x\sim\operatorname{Gamma}(\alpha,\lambda)$ then $\mathbb{E}(x)=\alpha/\lambda$ and $\operatorname{Var}(x)=\alpha/\lambda^{2}$.

The Gamma distribution has lots of neat properties, such as divisibility. Some more are outlined in the Gamma-Dirichlet algebra section.

1 All moments

The moment generating function of the Gamma distribution is
$$\mathbb{E}[\exp(xs)]=\left(1-\frac{s}{\lambda}\right)^{-\alpha}\quad\text{for }s<\lambda,$$
which gives us expressions for all moments fairly easily:
$$\begin{aligned}
\mathbb{E}[x]&=\left.\frac{\mathrm{d}}{\mathrm{d}s}\mathbb{E}[\exp(xs)]\right|_{s=0}=\left.\frac{\mathrm{d}}{\mathrm{d}s}\left(1-\frac{s}{\lambda}\right)^{-\alpha}\right|_{s=0}=\left.\frac{\alpha}{\lambda}\left(1-\frac{s}{\lambda}\right)^{-\alpha-1}\right|_{s=0}=\alpha/\lambda\\
\mathbb{E}[x^{2}]&=\left.\frac{\mathrm{d}}{\mathrm{d}s}\frac{\alpha}{\lambda}\left(1-\frac{s}{\lambda}\right)^{-\alpha-1}\right|_{s=0}=\left.\frac{\alpha^{2}+\alpha}{\lambda^{2}}\left(1-\frac{s}{\lambda}\right)^{-\alpha-2}\right|_{s=0}=\frac{\alpha(\alpha+1)}{\lambda^{2}}\\
\operatorname{Var}[x]&=\frac{\alpha}{\lambda^{2}}\quad\text{(handy later)}\\
\operatorname{Var}[x^{2}]&=\frac{\Gamma(\alpha)\Gamma(\alpha+4)-\Gamma^{2}(\alpha+2)}{\Gamma^{2}(\alpha)\lambda^{4}}\quad\text{(handy later)}\\
\mathbb{E}[x^{3}]&=\left.\frac{\mathrm{d}}{\mathrm{d}s}\frac{\alpha^{2}+\alpha}{\lambda^{2}}\left(1-\frac{s}{\lambda}\right)^{-\alpha-2}\right|_{s=0}=\frac{\alpha(\alpha+1)(\alpha+2)}{\lambda^{3}}\\
\mathbb{E}[x^{4}]&=\frac{\alpha(\alpha+1)(\alpha+2)(\alpha+3)}{\lambda^{4}}\\
\mathbb{E}[x^{n}]&=\frac{\alpha^{(n)}}{\lambda^{n}}.
\end{aligned}$$

Here $\alpha^{(n)}:=\frac{\Gamma(\alpha+n)}{\Gamma(\alpha)}$ is the rising factorial.
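
Sanity-checking this in code is cheap. Here is a minimal Monte Carlo check (parameter values are illustrative, not from anywhere in particular) that empirical moments match $\alpha^{(n)}/\lambda^{n}$:

```python
# Monte Carlo check of E[x^n] = α^(n) / λ^n; α, λ here are arbitrary choices.
import numpy as np
from scipy.special import poch  # poch(a, n) = Γ(a+n)/Γ(a), the rising factorial

rng = np.random.default_rng(0)
alpha, lam = 2.5, 1.7
x = rng.gamma(shape=alpha, scale=1 / lam, size=1_000_000)

for n in range(1, 5):
    empirical = (x ** n).mean()
    exact = poch(alpha, n) / lam ** n
    print(f"n={n}: empirical {empirical:.4f}, exact {exact:.4f}")
```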

2 As exponential family

The gamma distribution is a two-parameter exponential family with natural parameters $[\theta_{1},\theta_{2}]=[\alpha-1,-\lambda]$ and natural statistics $[\ln(x),x]$. In particular, it is a natural exponential family with quadratic variance function (Morris 1982).

The log-partition function $A(\theta)$ is given by
$$A(\theta)=-(\theta_{1}+1)\log(-\theta_{2})+\log\Gamma(\theta_{1}+1),$$
so in canonical form the density is
$$f(x;\theta_{1},\theta_{2})=\exp\left(\theta_{1}\log(x)+\theta_{2}x-A(\theta)\right).$$

In terms of the natural parameters, the mean and variance are
$$\mathbb{E}[x]=\frac{\theta_{1}+1}{-\theta_{2}},\qquad\operatorname{Var}[x]=\frac{\theta_{1}+1}{\theta_{2}^{2}}.$$
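
As a sketch of the bookkeeping, we can map $(\alpha,\lambda)$ to the natural parameters and confirm the canonical-form density against a reference implementation; the helper name and parameter values are mine:

```python
# Map (α, λ) to natural parameters (θ1, θ2) = (α−1, −λ) and check that
# exp(θ1·log x + θ2·x − A(θ)) reproduces scipy's Gamma pdf.
import numpy as np
from scipy.special import gammaln
from scipy.stats import gamma as sp_gamma

alpha, lam = 3.0, 2.0
theta1, theta2 = alpha - 1, -lam

def log_partition(theta1, theta2):
    # A(θ) = −(θ1+1)·log(−θ2) + log Γ(θ1+1)
    return -(theta1 + 1) * np.log(-theta2) + gammaln(theta1 + 1)

x = np.linspace(0.1, 5, 50)
canonical = np.exp(theta1 * np.log(x) + theta2 * x - log_partition(theta1, theta2))
reference = sp_gamma.pdf(x, a=alpha, scale=1 / lam)
assert np.allclose(canonical, reference)
```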

2.1 Tempered

A $\tau$-tempered version of the density behaves how we would hope for an exponential family. Lazily writing $f^{\tau}$ for the tempered density (which the pedantic might argue skips the normalising constant), we have
$$f(x;\theta_{1},\theta_{2})^{\tau}=\exp\left(\theta_{1}\log(x)+\theta_{2}x-A(\theta)\right)^{\tau}\propto\exp\left(\tau\theta_{1}\log(x)+\tau\theta_{2}x\right)\propto f(x;\tau\theta_{1},\tau\theta_{2}).$$
No surprises there. How this acts on the moments is different from, e.g., the Gaussian case:
$$\begin{aligned}
\mathbb{E}_{x\sim f^{\tau}}[x]&=\frac{\tau\theta_{1}+1}{-\tau\theta_{2}}=\frac{\theta_{1}}{-\theta_{2}}+\frac{1}{-\tau\theta_{2}}=\mathbb{E}_{x\sim f}[x]+\left(\frac{1}{\tau}-1\right)\frac{1}{-\theta_{2}}\\
\operatorname{Var}_{x\sim f^{\tau}}[x]&=\frac{\tau\theta_{1}+1}{\tau^{2}\theta_{2}^{2}}=\frac{1}{-\tau\theta_{2}}\cdot\frac{\tau\theta_{1}+1}{-\tau\theta_{2}}=\frac{1}{-\tau\theta_{2}}\,\mathbb{E}_{x\sim f^{\tau}}[x].
\end{aligned}$$

So if we flatten the density with $\tau<1$, the mean increases, and the variance-to-mean ratio $1/(-\tau\theta_{2})$ increases with it.
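
A small numeric sketch of that claim, assuming the tempered parameters $\tau\theta_{1},\tau\theta_{2}$ derived above (parameter values illustrative):

```python
# Tempering Gamma(α, λ) by τ gives Gamma(τ(α−1)+1, τλ); watch the moments move.
import numpy as np

alpha, lam, tau = 3.0, 2.0, 0.5
alpha_t = tau * (alpha - 1) + 1   # tempered shape, from θ1 → τθ1
lam_t = tau * lam                 # tempered rate, from θ2 → τθ2

mean, mean_t = alpha / lam, alpha_t / lam_t
fano, fano_t = 1 / lam, 1 / lam_t  # variance-to-mean ratio (α/λ²) / (α/λ) = 1/λ

print(f"mean: {mean:.3f} -> {mean_t:.3f} (shift (1/τ−1)/λ = {(1/tau - 1)/lam:.3f})")
print(f"variance/mean: {fano:.3f} -> {fano_t:.3f}")
```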

3 Linear combinations of Gammas

Is the Gamma family closed under addition? For fixed scale/rate parameters, yes. See Gamma-Dirichlet algebra.

If we are summing Gamma variates which differ in the rate parameter $\lambda$, the result is not in general Gamma distributed, although it can be compactly expressed in terms of Gamma densities, for what that is worth. If we need the distribution of a sum of arbitrary Gamma random variables with different rate parameters, we can try to approximate it by moment matching (Mathai 1982; Moschopoulos 1985), if that feels worthwhile.
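
For concreteness, here is a minimal moment-matching sketch in the spirit of those references: match the mean and variance of the sum with a single Gamma. The parameter values are made up:

```python
# Approximate Σ_i Gamma(α_i, λ_i) by one Gamma with matched mean and variance.
import numpy as np

alphas = np.array([2.0, 1.5, 3.0])
lams = np.array([1.0, 4.0, 0.5])

mean = np.sum(alphas / lams)
var = np.sum(alphas / lams ** 2)
alpha_hat, lam_hat = mean ** 2 / var, mean / var  # matched shape and rate

rng = np.random.default_rng(1)
sim = sum(rng.gamma(a, 1 / l, 100_000) for a, l in zip(alphas, lams))
print(f"matched Gamma({alpha_hat:.3f}, {lam_hat:.3f}); "
      f"sim mean {sim.mean():.3f} vs {mean:.3f}, sim var {sim.var():.3f} vs {var:.3f}")
```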

Note that multiplying a Gamma RV by a scalar changes the rate, so Gamma variates are not closed under affine combination as Gaussian ones are. The moral is that we cannot assume the convenient algebra of linear combinations that the Gaussian process community enjoys. What kind of algebra do we get?

4 Gamma-Dirichlet algebra

There are various operations which give us similar conveniences, however. For those, we need to also be aware of the Dirichlet and Beta distributions. Here are some useful properties, drawn or extrapolated from Dufresne (1998), Lin (2016), and Pérez-Abreu and Stelzer (2014).

First, we fix some notation. From here on, all variables denoted $x_{a}$ (for some $a>0$, with or without superscript) have a $\operatorname{Gamma}(a,1)$ distribution; all variables denoted $b_{a,b}$ (for some $a,b>0$, with or without superscript) have a $\operatorname{Beta}(a,b)$ distribution. In all expressions, the variables $x_{a_{1}},x_{a_{2}},\dots,b_{a_{1},b_{1}},b_{a_{2},b_{2}},\dots$ are independent unless I say otherwise.

4.1 Superposition

$$x_{\alpha_{1}}+x_{\alpha_{2}}\sim\operatorname{Gamma}(\alpha_{1}+\alpha_{2},1).$$

4.2 Multiplication

If $x\sim\operatorname{Gamma}(\alpha,\lambda)$ then $cx\sim\operatorname{Gamma}(\alpha,\lambda/c)$. This looks useful, but in practice few constructions I handle vary $\lambda$.

4.3 Beta Thinning

$$\frac{x_{\alpha_{1}}}{x_{\alpha_{1}}+x_{\alpha_{2}}}\sim\operatorname{Beta}(\alpha_{1},\alpha_{2}),\quad\text{independent of }x_{\alpha_{1}}+x_{\alpha_{2}}.$$
Equivalently, $x_{\alpha_{1}+\alpha_{2}}b_{\alpha_{1},\alpha_{2}}\sim\operatorname{Gamma}(\alpha_{1},1)$, independent of $x_{\alpha_{1}+\alpha_{2}}(1-b_{\alpha_{1},\alpha_{2}})\sim\operatorname{Gamma}(\alpha_{2},1)$.

The Gamma-bridge construction arises from this thinning procedure.
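
A quick simulation sketch of the thinning claim (illustrative parameters); independence is only spot-checked via a correlation, which is of course weaker than actual independence:

```python
# Check that x1/(x1+x2) ~ Beta(α1, α2) and is (un)correlated with the sum.
import numpy as np

rng = np.random.default_rng(2)
a1, a2, n = 2.0, 3.0, 200_000
x1, x2 = rng.gamma(a1, 1, n), rng.gamma(a2, 1, n)
s, b = x1 + x2, x1 / (x1 + x2)

print(f"E[b] = {b.mean():.4f} vs α1/(α1+α2) = {a1/(a1+a2):.4f}")
print(f"corr(b, s) = {np.corrcoef(b, s)[0, 1]:+.4f}  (≈ 0 by independence)")
```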

4.4 Dirichlet thinning

Grab a set of independent Gamma rvs, $\{x_{a_{i}}\}_{i=1,\dots,k}$, and define $s=\sum_{i}x_{a_{i}}$. We know that $s\sim\operatorname{Gamma}(\sum_{i}a_{i},1)$. But wait! There is more. Define $d_{i}=x_{a_{i}}/s$. Then $d\sim\operatorname{Dirichlet}(a)$, independently of $s$.

Conversely, take some $x_{B}$ and an independent $d\sim\operatorname{Dirichlet}(a)$ with $\sum_{i}a_{i}=B$. Then $x_{B}d_{i}\sim\operatorname{Gamma}(a_{i},1)$, and moreover the random variates $x_{B}d_{i},\ i=1,\dots,k$ are jointly independent.
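
Again, a cheap simulation sketch (illustrative parameters), spot-checking the Dirichlet marginal means and the independence claim via correlations:

```python
# Normalising independent Gammas gives a Dirichlet vector independent of the sum.
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1.0, 2.0, 3.5])
x = rng.gamma(a, 1, size=(200_000, a.size))
s = x.sum(axis=1)
d = x / s[:, None]           # rows are Dirichlet(a) draws

print("E[d] =", d.mean(axis=0), "vs a/Σa =", a / a.sum())
for i in range(a.size):      # each coordinate is uncorrelated with the sum
    print(f"corr(d_{i}, s) = {np.corrcoef(d[:, i], s)[0, 1]:+.4f}")
```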

4.5 Beta thickening

Grab a set of independent Gamma rvs, $\{x_{a_{i}}\}_{i=1,\dots,k}$. If we take a set of Beta rvs with arbitrary mutual dependence, $\{b_{\kappa_{i},a_{i}-\kappa_{i}}\}_{i=1,\dots,k}$, each independent of the $x_{a_{i}}$, then the product rvs $x_{a_{i}}b_{\kappa_{i},a_{i}-\kappa_{i}}\sim\operatorname{Gamma}(\kappa_{i},1)$ are jointly independent Gamma variates. Thus $\sum_{i}x_{a_{i}}b_{\kappa_{i},a_{i}-\kappa_{i}}\sim\operatorname{Gamma}(\sum_{i}\kappa_{i},1)$. As a special case, if $\kappa_{i}\equiv\kappa$, then $\sum_{i}x_{a_{i}}b_{\kappa,a_{i}-\kappa}\sim\operatorname{Gamma}(k\kappa,1)$.

TODO: Check this. Also, is it actually useful? I thought it was for coupling Gamma processes, but it turned out not to be necessary in my construction.

4.6 Other

There are many other nice properties and relations.

The properties I include in this section fail to define a formal algebraic structure, but they do define a bunch of operations that preserve membership of a certain distributional family, or near enough. We could define the sets and operations properly if we really needed an algebra.

Thematically, the operations that arise most often in this Gamma-“algebra” are not quite the same as those in the Gaussian process “algebra”. There, we are usually concerned with linearity: many linear operations on objects which are Gaussian in a very broad sense still end up Gaussian and possessed of a closed-form solution. Here we are mostly concerned with different operations: addition, yes, but also thinning rather than scalar multiplication.

Yor (2007) talks about the Gamma-Beta algebra of Dufresne (1998), which relates certain Markov chains of Gamma and Beta distributions. Dufresne (1998)’s construction is a formal algebra, although one that I only pull a couple of trivial cases from. Read that paper for more than the following taster:

For any $w,x,y,z>0$,
$$b_{w,x}x_{y}+x_{z}\overset{d}{=}x_{y+z}\left(1-b_{x,w}b_{y,z}\right).$$
In particular, for any $w,x,y>0$,
$$b_{w,x+y}x_{x}+x_{y}\overset{d}{=}x_{x+y}b_{w+y,x}\overset{d}{=}x_{w+y}b_{x+y,w}.$$
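
These identities are easy to spot-check by simulation; here quantiles of both sides of the first identity are compared (parameter values arbitrary):

```python
# Monte Carlo check: b_{w,x}·x_y + x_z  =d  x_{y+z}·(1 − b_{x,w}·b_{y,z}).
import numpy as np

rng = np.random.default_rng(4)
w, x, y, z, n = 1.5, 2.0, 2.5, 0.7, 500_000

lhs = rng.beta(w, x, n) * rng.gamma(y, 1, n) + rng.gamma(z, 1, n)
rhs = rng.gamma(y + z, 1, n) * (1 - rng.beta(x, w, n) * rng.beta(y, z, n))

for q in (0.25, 0.5, 0.75, 0.95):
    print(f"q={q}: lhs {np.quantile(lhs, q):.4f}, rhs {np.quantile(rhs, q):.4f}")
```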

For more, see the Gamma-Beta notebook.

5 Conjugate prior for the Gamma distribution

Fink (1997) summarises Miller (1980), which extends Damsleth (1975):

Suppose that data $x_{1},\dots,x_{n}$ are independent and identically distributed from a gamma process where both the shape, $\alpha$, and the reciprocal scale, $\beta$, parameters are unknown. The likelihood function, $L(\alpha,\beta\mid x_{1},\dots,x_{n})$, in terms proportional to the parameters, is
$$L(\alpha,\beta\mid x_{1},\dots,x_{n})\propto\begin{cases}\dfrac{P^{\alpha-1}\exp(-\beta S)}{\left(\Gamma(\alpha)\beta^{-\alpha}\right)^{n}}&\text{where }x_{i}>0,\ i=1,\dots,n\\[1ex]0&\text{otherwise}\end{cases}$$
where
$$S=\sum_{i=1}^{n}x_{i},\qquad P=\prod_{i=1}^{n}x_{i}.$$

The sufficient statistics are $n$, the number of data points, $P$, the product of the data, and $S$, the sum of the data. The factors of [the equation] proportional to the parameters $\alpha$ and $\beta$ make up the kernel of the conjugate prior, $\pi(\cdot,\cdot)$. We specify the conjugate prior with hyperparameters $p,q,r,s>0$:
$$\pi(\alpha,\beta\mid p,q,r,s)=\begin{cases}\dfrac{1}{K}\,\dfrac{p^{\alpha-1}\exp(-\beta q)}{\Gamma(\alpha)^{r}\beta^{-\alpha s}}&\text{where }\alpha,\beta>0\\[1ex]0&\text{otherwise.}\end{cases}$$

[…] The posterior joint distribution of $\alpha$ and $\beta$ is specified by the hyperparameters
$$p'=pP,\qquad q'=q+S,\qquad r'=r+n,\qquad s'=s+n.$$
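
That update is trivial to implement; a minimal sketch (the function name is mine, and I track $\log p$ to avoid overflow in the product $P$):

```python
# Hyperparameter update p' = pP, q' = q+S, r' = r+n, s' = s+n for the
# Fink/Miller conjugate prior, with log p in place of p for numerical safety.
import numpy as np

def gamma_conjugate_update(log_p, q, r, s, data):
    """Update (log p, q, r, s) with iid Gamma observations `data`."""
    data = np.asarray(data)
    return (log_p + np.log(data).sum(),  # log p' = log p + log P
            q + data.sum(),              # q' = q + S
            r + data.size,               # r' = r + n
            s + data.size)               # s' = s + n

rng = np.random.default_rng(6)
posterior = gamma_conjugate_update(0.0, 1.0, 1.0, 1.0, rng.gamma(2.0, 1.0, 100))
print(posterior)
```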

From this we find the predictive,
$$\begin{aligned}\int\!\!\int L(x\mid\alpha,\beta)\,\pi(\alpha,\beta\mid p,q,r,s)\,\mathrm{d}\alpha\,\mathrm{d}\beta&\propto\int\!\!\int\frac{x^{\alpha-1}\exp(-\beta x)}{\Gamma(\alpha)\beta^{-\alpha}}\cdot\frac{p^{\alpha-1}\exp(-\beta q)}{\Gamma(\alpha)^{r}\beta^{-\alpha s}}\,\mathrm{d}\alpha\,\mathrm{d}\beta\\&=\int\!\!\int\frac{(xp)^{\alpha-1}\exp(-\beta(x+q))}{\Gamma(\alpha)^{r+1}\beta^{-\alpha(s+1)}}\,\mathrm{d}\alpha\,\mathrm{d}\beta.\end{aligned}$$
The predictive is fairly intractable, e.g. because there is nothing obviously nice to do with the $1/\Gamma(\alpha)$ term; even estimating the mean requires approximations.

If you really want a conjugate prior for a non-negative variate, maybe consider something a little (but only a little) less messy, such as the [inverse Gaussian](./inverse_gaussian_distribution.qmd) or lognormal distributions.

6 Generalized Gamma Convolution

As noted under divisible distributions, the class of Generalized Gamma Convolutions (GGC) is a construction that represents some startling (to me) distributions as a certain generalization of Gamma distributions. This family includes the Pareto (Thorin 1977a) and lognormal (Thorin 1977b) distributions. Thorin’s papers introduced the idea originally; it is probably easier to start from one of the textbooks or overviews (Bondesson 1979, 2012; James, Roynette, and Yor 2008; Steutel and van Harn 2003).

AFAICT this allows us to prove lots of nice things about such distributions. It is less easy to get implementable computational methods this way.

The GGC convolves a Gamma distribution with some measure and makes a new divisible distribution. James, Roynette, and Yor (2008):

we say that a positive r.v. $\Gamma$ is a generalized gamma convolution (GGC) … if there exists a positive Radon measure $\mu$ on $]0,\infty[$ such that:
$$\mathbb{E}\left[e^{-\lambda\Gamma}\right]=\exp\left\{-\int_{0}^{\infty}\left(1-e^{-\lambda x}\right)\frac{\mathrm{d}x}{x}\int_{0}^{\infty}e^{-xz}\,\mu(\mathrm{d}z)\right\}=\exp\left\{-\int_{0}^{\infty}\log\left(1+\frac{\lambda}{z}\right)\mu(\mathrm{d}z)\right\}$$
with:
$$\int_{]0,1]}|\log x|\,\mu(\mathrm{d}x)<\infty\quad\text{and}\quad\int_{[1,\infty[}\frac{\mu(\mathrm{d}x)}{x}<\infty.$$
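
For an atomic Thorin measure $\mu=\sum_{i}\alpha_{i}\delta_{z_{i}}$, the right-hand side reduces to the Laplace transform of a finite sum of independent $\operatorname{Gamma}(\alpha_{i},z_{i})$ variates, which we can verify numerically; a sketch with made-up atoms:

```python
# With μ = Σ_i α_i δ_{z_i}, exp(−Σ_i α_i log(1 + λ/z_i)) is the Laplace
# transform of Σ_i Gamma(α_i, z_i); compare formula against Monte Carlo.
import numpy as np

alphas = np.array([1.0, 2.0])   # atom weights of μ (Gamma shapes)
zs = np.array([0.5, 3.0])       # atom locations (Gamma rates)

rng = np.random.default_rng(5)
g = sum(rng.gamma(a, 1 / z, 500_000) for a, z in zip(alphas, zs))

for lam in (0.1, 1.0, 5.0):
    lt_formula = np.exp(-np.sum(alphas * np.log1p(lam / zs)))
    lt_mc = np.mean(np.exp(-lam * g))
    print(f"λ={lam}: formula {lt_formula:.4f}, Monte Carlo {lt_mc:.4f}")
```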

Barndorff-Nielsen, Maejima, and Sato (2006) and Pérez-Abreu and Stelzer (2014) generalize the GGC to vector- and matrix-valued distributions.

7 Parameter estimation

The method of moments is obvious. The maximum-likelihood estimate is surprisingly fiddly and has no closed form, but a low-bias closed-form approximation is given by Ye and Chen (2017). Wikipedia’s summary:

The estimate for the shape k is

$$\hat{k}=\frac{N\sum_{i=1}^{N}x_{i}}{N\sum_{i=1}^{N}x_{i}\ln x_{i}-\sum_{i=1}^{N}x_{i}\sum_{i=1}^{N}\ln x_{i}}$$

and the estimate for the scale θ is

$$\hat{\theta}=\frac{1}{N^{2}}\left(N\sum_{i=1}^{N}x_{i}\ln x_{i}-\sum_{i=1}^{N}x_{i}\sum_{i=1}^{N}\ln x_{i}\right)$$

Using the sample mean of $x$, the sample mean of $\ln x$, and the sample mean of the product $x\ln x$ simplifies the expressions to:

$$\hat{k}=\bar{x}/\hat{\theta},\qquad\hat{\theta}=\overline{x\ln x}-\bar{x}\,\overline{\ln x}.$$

If the rate parameterization is used, the estimate is $\hat{\beta}=1/\hat{\theta}$.

These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. They do, however, have similar efficiency to the maximum likelihood estimators.

Although these estimators are consistent, they have a small bias. A bias-corrected variant of the estimator for the scale θ is

$$\tilde{\theta}=\frac{N}{N-1}\hat{\theta}$$

A bias correction for the shape parameter $k$ is given as (Louzada, Ramos, and Ramos 2019)

$$\tilde{k}=\hat{k}-\frac{1}{N}\left(3\hat{k}-\frac{2}{3}\left(\frac{\hat{k}}{1+\hat{k}}\right)-\frac{4}{5}\frac{\hat{k}}{(1+\hat{k})^{2}}\right)$$
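
Putting the quoted formulas together as one function (a sketch; the name is mine, and the formulas are exactly as quoted above):

```python
# Closed-form (mixed type log-moment) Gamma fit with the bias corrections.
import numpy as np

def gamma_fit_closed_form(x):
    """Return bias-corrected (shape, scale) estimates for iid Gamma data."""
    x = np.asarray(x)
    n = x.size
    theta_hat = np.mean(x * np.log(x)) - np.mean(x) * np.mean(np.log(x))
    k_hat = np.mean(x) / theta_hat
    theta_tilde = n / (n - 1) * theta_hat            # bias-corrected scale
    k_tilde = k_hat - (3 * k_hat                     # bias-corrected shape
                       - (2 / 3) * (k_hat / (1 + k_hat))
                       - (4 / 5) * (k_hat / (1 + k_hat) ** 2)) / n
    return k_tilde, theta_tilde

rng = np.random.default_rng(7)
print(gamma_fit_closed_form(rng.gamma(shape=2.5, scale=0.8, size=10_000)))
```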

A Bayesian update of similar form is given in Louzada and Ramos (2018), but they use a slightly different parameterisation, so I will need to come back and translate it when I have time.

8 Simulating Gamma variates

8.1 Univariate

A Gamma variate can be generated by many methods, e.g. from a transformed normal and a uniform random variable, or from two uniforms, depending on the parameter range (Ahrens and Dieter 1974, 1982; Devroye 1986). Most methods involve a rejection step. Here is Devroye’s (1986) summary of beta generators for $0<a,b\leq 1$ and Gamma generators for $a<1$:

Johnk’s beta generator

REPEAT Generate iid uniform $[0,1]$ random variates $u,v$; set $x\leftarrow u^{1/a},\ y\leftarrow v^{1/b}$
UNTIL $x+y\leq 1$

RETURN $x/(x+y)$

($x/(x+y)$ is Beta$(a,b)$ distributed)

Berman’s beta generator

REPEAT Generate iid uniform $[0,1]$ random variates $u,v$; set $x\leftarrow u^{1/a},\ y\leftarrow v^{1/b}$
UNTIL $x+y\leq 1$

RETURN $x$

($x$ is Beta$(a,b+1)$ distributed)

Johnk’s gamma generator

REPEAT Generate iid uniform $[0,1]$ random variates $u,v$; set $x\leftarrow u^{1/a},\ y\leftarrow v^{1/(1-a)}$
UNTIL $x+y\leq 1$

Generate an exponential random variate $E$

RETURN $Ex/(x+y)$

($Ex/(x+y)$ is Gamma$(a)$ distributed)

Berman’s gamma generator

REPEAT Generate iid uniform $[0,1]$ random variates $u,v$; set $x\leftarrow u^{1/a},\ y\leftarrow v^{1/(1-a)}$
UNTIL $x+y\leq 1$

Generate a Gamma$(2)$ random variate $z$ (either as the sum of two iid exponential random variates, or as $-\log(uv)$ where $u,v$ are iid uniform $[0,1]$ random variates)

RETURN $zx$

($zx$ is Gamma$(a)$ distributed)
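
Johnk’s gamma generator is short enough to transcribe directly; a plain, unoptimised sketch:

```python
# Johnk's rejection method for Gamma(a, 1) with shape 0 < a < 1, as transcribed above.
import numpy as np

def johnk_gamma(a, rng):
    """Draw one Gamma(a, 1) variate, 0 < a < 1, via Johnk's method."""
    assert 0 < a < 1
    while True:
        u, v = rng.random(), rng.random()
        x, y = u ** (1 / a), v ** (1 / (1 - a))
        if x + y <= 1:                      # accept: x/(x+y) ~ Beta(a, 1−a)
            e = rng.exponential()           # E ~ Exp(1)
            return e * x / (x + y)          # E·x/(x+y) ~ Gamma(a, 1)

rng = np.random.default_rng(8)
draws = np.array([johnk_gamma(0.4, rng) for _ in range(50_000)])
print(draws.mean(), draws.var())            # both should be ≈ a = 0.4
```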

9 References

Ahrens, and Dieter. 1974. Computer Methods for Sampling from Gamma, Beta, Poisson and Binomial Distributions.” Computing.
———. 1982. Generating Gamma Variates by a Modified Rejection Technique.” Communications of the ACM.
Barndorff-Nielsen, Maejima, and Sato. 2006. Some Classes of Multivariate Infinitely Divisible Distributions Admitting Stochastic Integral Representations.” Bernoulli.
Bondesson. 1979. A General Result on Infinite Divisibility.” The Annals of Probability.
———. 2012. Generalized Gamma Convolutions and Related Classes of Distributions and Densities. Lecture Notes in Statistics 76.
Damsleth. 1975. Conjugate Classes for Gamma Distributions.” Scandinavian Journal of Statistics.
Devroye. 1986. Non-uniform random variate generation.
———. 2006. Nonuniform Random Variate Generation.” In Simulation. Handbooks in Operations Research and Management Science.
Dufresne. 1998. Algebraic Properties of Beta and Gamma Distributions, and Applications.” Advances in Applied Mathematics.
Fink. 1997. A Compendium of Conjugate Priors.”
James, Roynette, and Yor. 2008. Generalized Gamma Convolutions, Dirichlet Means, Thorin Measures, with Explicit Examples.” Probability Surveys.
Lin. 2016. “On The Dirichlet Distribution.”
Louzada, and Ramos. 2018. Efficient Closed-Form Maximum a Posteriori Estimators for the Gamma Distribution.” Journal of Statistical Computation and Simulation.
Louzada, Ramos, and Ramos. 2019. A Note on Bias of Closed-Form Estimators for the Gamma Distribution Derived From Likelihood Equations.” The American Statistician.
Mathai. 1982. Storage Capacity of a Dam with Gamma Type Inputs.” Annals of the Institute of Statistical Mathematics.
Miller. 1980. Bayesian Analysis of the Two-Parameter Gamma Distribution.” Technometrics.
Morris. 1982. Natural Exponential Families with Quadratic Variance Functions.” The Annals of Statistics.
———. 1983. Natural Exponential Families with Quadratic Variance Functions: Statistical Theory.” The Annals of Statistics.
Morris, and Lock. 2009. Unifying the Named Natural Exponential Families and Their Relatives.” The American Statistician.
Moschopoulos. 1985. The Distribution of the Sum of Independent Gamma Random Variables.” Annals of the Institute of Statistical Mathematics.
Pérez-Abreu, and Stelzer. 2014. Infinitely Divisible Multivariate and Matrix Gamma Distributions.” Journal of Multivariate Analysis.
Rezende, Mohamed, and Wierstra. 2015. Stochastic Backpropagation and Approximate Inference in Deep Generative Models.” In Proceedings of ICML.
Roychowdhury, and Kulis. 2015. Gamma Processes, Stick-Breaking, and Variational Inference.” In Artificial Intelligence and Statistics.
Steutel, and van Harn. 2003. Infinite Divisibility of Probability Distributions on the Real Line.
Thorin. 1977a. On the Infinite Divisbility of the Pareto Distribution.” Scandinavian Actuarial Journal.
———. 1977b. On the Infinite Divisibility of the Lognormal Distribution.” Scandinavian Actuarial Journal.
Ye, and Chen. 2017. Closed-Form Estimators for the Gamma Distribution Derived From Likelihood Equations.” The American Statistician.
Yor. 2007. Some Remarkable Properties of Gamma Processes.” In Advances in Mathematical Finance. Applied and Numerical Harmonic Analysis.