\(\newcommand{\rv}[1]{\mathsf{#1}}\)

Models for, loosely, the total population size arising, over all generations, from the offspring of some progenitor.

Let us suppose that each individual \(i\) who catches a certain strain of influenza will go on to infect a further \(\rv{n}_i\sim F\) others. Assume the population is infinite, that no one catches influenza twice, and that the number of transmissions of the disease is identically distributed for everyone who catches it. How many people will ultimately catch the influenza, starting from one infected person?

The Galton-Watson version of this model considers this by generation: we write \(\rv{x}(k)=\sum_{i \in (k-1)\text{th generation}} \rv{n}_i\) for the number of people infected in the \(k\)th generation. Writing \(F^{*k}\) for the \(k\)-fold convolution of \(F\), we have \[\rv{x}(k) \sim F^{\ast \rv{x}(k-1)}.\] The sum over all generations, \[\sum_k \rv{x}(k),\] is the cascade size.
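As a sanity check on the generation-by-generation picture, here is a minimal simulation sketch. The offspring law \(F\) is taken to be \(\operatorname{Poisson}(0.8)\) purely for illustration, and `cascade_size` and the `cap` guard are my own hypothetical names, not from any library:

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_size(mean_offspring: float, rng, cap: int = 1_000_000) -> int:
    """Total progeny of a Galton-Watson process with Poisson offspring,
    starting from a single progenitor. `cap` guards against the
    supercritical case, where the cascade may never die out."""
    total = current = 1
    while current > 0 and total < cap:
        # Each of the `current` individuals reproduces independently.
        current = int(rng.poisson(mean_offspring, size=current).sum())
        total += current
    return total

sizes = [cascade_size(0.8, rng) for _ in range(20_000)]
print(np.mean(sizes))  # theory: expected cascade size 1 / (1 - 0.8) = 5
```

The empirical mean should sit near \(1/(1-0.8)=5\), the expected cascade size of a subcritical process with offspring mean \(0.8\).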

A type of count model for a Markov stochastic pure-birth branching process.

I say it is a count model, but it turns out there are continuous-state generalisations. See, e.g. (Burridge 2013a, 2013b).

The distributions of subcritical processes are sometimes tedious to calculate, although we can get a nice form for the generating function of a cascade process with geometric offspring distribution.

Set \(\frac{1}{\lambda+1}=p\) and \(q=1-p\). We write \(G^{n}\equiv G\circ G\circ \dots \circ G\circ G\) for the \(n\)-fold composition of \(G\). Then the (non-critical) geometric offspring distribution branching process obeys the identity

\[ 1-G^n(s;\lambda) = \frac{\lambda^n(\lambda-1)(1-s)}{\lambda(\lambda^n-1)(1-s)+\lambda-1} \]
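This identity is easy to check numerically. The sketch below assumes the offspring pgf is \(G(s;\lambda)=p/(1-qs)\), the pgf of a geometric distribution on \(\{0,1,2,\dots\}\) with mean \(\lambda=q/p\); the function names are mine:

```python
def G(s: float, lam: float) -> float:
    """Geometric offspring pgf p / (1 - q s), with p = 1/(lam + 1), mean lam."""
    p = 1.0 / (lam + 1.0)
    return p / (1.0 - (1.0 - p) * s)

def G_iter(s: float, lam: float, n: int) -> float:
    """n-fold composition G(G(...G(s)...))."""
    for _ in range(n):
        s = G(s, lam)
    return s

def closed_form(s: float, lam: float, n: int) -> float:
    """G^n(s; lam) computed via the identity for 1 - G^n(s; lam)."""
    num = lam**n * (lam - 1.0) * (1.0 - s)
    den = lam * (lam**n - 1.0) * (1.0 - s) + lam - 1.0
    return 1.0 - num / den

print(G_iter(0.3, 2.0, 3), closed_form(0.3, 2.0, 3))  # both ≈ 0.48148
```

The two agree to machine precision for both supercritical (\(\lambda>1\)) and subcritical (\(\lambda<1\)) cases.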

This can get us a formula for the first two factorial moments, and hence the raw moments, and thus the mean, variance, etc.

More generally the machinery of Lagrangian distributions is all we need to analyse these.

Maybe I should use (Dwass 1969) to get the moments? Dominic Yeo has a great explanation as always.


## Lagrangian distributions

A clade of count distributions, which I would call "cascade size distributions". For now, let's get to the interesting new ones contained in this definition. They are unified, to my mind, by modelling the cascade size of cluster processes. Specifically, if I have a given initial population and a given offspring distribution for some population of… things… a Lagrangian distribution gives me a model for the size of the total population. There are other interpretations of course (queueing is very popular), but this one is extremely useful for me. See (P. C. Consul and Shoukri 1988; P. C. Consul and Famoye 2006, Ch. 6.2) for a deep dive on this. They introduce various exponential families via the pgf, which is powerful and general, although it does obscure a lot of simplicity and basic workaday mathematics, since the forms of the mass functions do in fact turn out to be easy.

Terminology:
the total cascade size of a
subcritical branching process has a "delta Lagrangian" or "general Lagrangian"
distribution, depending on whether the cluster has, respectively, a deterministic or random starting population.
We will define the *offspring* distribution of such a branching process as
\(G\sim G_Y(\eta, \alpha)\).
Usually we also assume \(EG:=\eta< 1\), because otherwise the cascade size can be infinite.

### Borel-Tanner distribution

A delta Lagrangian distribution: the Borel distribution is the distribution of a cascade size starting from a population of size \(k=1\). We can generalize it to \(k>1\), in which case it is the Borel-Tanner distribution.

- Spelled
- \(\operatorname{Borel-Tanner}(k,\eta)\)
- Pmf
- \(\mathbb{P}(X=x;k,\eta)=\frac{k}{x}\,\frac{e^{-\eta x}(\eta x)^{x-k}}{(x-k)!},\quad x=k,k+1,\dots\)
- Mean
- \(\frac{k}{1-\eta}\)
- Variance
- \(\frac{k\eta}{(1-\eta)^3}\)
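The Borel-Tanner pmf \(\mathbb{P}(X=x)=\frac{k}{x}\,\frac{e^{-\eta x}(\eta x)^{x-k}}{(x-k)!}\) is easy to evaluate stably in log space. A minimal sketch (the function name is mine), with a numerical check of the mean against \(k/(1-\eta)\):

```python
import math

def borel_tanner_logpmf(x: int, k: int, eta: float) -> float:
    """log P(X = x), where P(X = x) = (k / x) e^{-eta x} (eta x)^{x-k} / (x-k)!
    for x = k, k + 1, ..."""
    if x < k:
        return -math.inf
    return (math.log(k) - math.log(x) - eta * x
            + (x - k) * math.log(eta * x) - math.lgamma(x - k + 1))

k, eta = 2, 0.3
probs = [math.exp(borel_tanner_logpmf(x, k, eta)) for x in range(k, 2000)]
print(sum(probs))                                         # ≈ 1 (proper for eta <= 1)
print(sum(x * p for x, p in zip(range(k, 2000), probs)))  # theory: k / (1 - eta)
```

For \(k=2,\eta=0.3\) the summed mean should land on \(2/0.7\approx 2.857\).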

Note to self: Wikipedia mentions an intriguing-sounding correspondence with random walks, which I should follow up in Dwass (1969).

The only R implementation I could find for this is in VGAM, although it is not so complicated to implement by hand.

### Poisson-Poisson Lagrangian

See (P. C. Consul and Famoye 2006, Ch. 9.3). Also known as the generalised Poisson distribution, although there are many things called that.

- Spelled
- \(\operatorname{GPD}(\mu,\eta)\)
- Pmf
- \(\mathbb{P}(X=x;\mu,\eta)=\frac{\mu(\mu+ \eta x)^{x-1}}{x!e^{\mu+x\eta}}\)
- Mean
- \(\frac{\mu}{1-\eta}\)
- Variance
- \(\frac{\mu}{(1-\eta)^3}\)

Returning to the cascade interpretation: Suppose we have

- an *initial population* distributed \(\operatorname{Poisson}(\mu)\), and
- everyone in the population has a number of *offspring* distributed \(\operatorname{Poisson}(\eta)\).

Then the total population is distributed as \(\operatorname{GPD}(\mu, \eta)\).

Notice that this can produce long tails, in the sense of a large variance with a finite mean, but not heavy tails, in the sense of the variance becoming infinite while the mean stays finite; here variance and expectation go to infinity together.

Here, I implemented the GPD for you in Python. There are versions for R, presumably; a quick search turned up RMKDiscrete and LaplacesDemon.
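A minimal log-space sketch of the GPD pmf (the function name is mine), with a check of the mean against \(\mu/(1-\eta)\):

```python
import math

def gpd_logpmf(x: int, mu: float, eta: float) -> float:
    """log P(X = x) for GPD(mu, eta):
    P(X = x) = mu (mu + eta x)^{x-1} e^{-(mu + eta x)} / x!."""
    return (math.log(mu) + (x - 1) * math.log(mu + eta * x)
            - (mu + eta * x) - math.lgamma(x + 1))

mu, eta = 1.5, 0.4
probs = [math.exp(gpd_logpmf(x, mu, eta)) for x in range(1000)]
print(sum(probs))                               # ≈ 1 for eta < 1
print(sum(x * p for x, p in enumerate(probs)))  # theory: mu / (1 - eta) = 2.5
```

Working in log space avoids overflow in \((\mu+\eta x)^{x-1}\) and \(x!\) for large \(x\).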

### General Lagrangian distribution

A larger family of Lagrangian distributions (the largest?) is summarised in (P. Consul and Shenton 1972), in a way I find unintuitive.

One parameter: a differentiable (infinitely differentiable?) function, not necessarily a pgf, \(g: [0,1]\rightarrow \mathbb{R}\) such that \(g(0)\neq 0\text{ and } g(1)=1\). Now we define a pgf \(\psi(s)\) implicitly as the smallest root of the Lagrange transformation \(z=sg(z)\). The paradigmatic example of such a function is \(g:z\mapsto 1-p+pz\); let's check how this fella works out.


- Spelled
- ?
- Pmf
- ?
- Mean
- ?
- Variance
- ?
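For the paradigmatic \(g:z\mapsto 1-p+pz\), the transformation is linear in \(z\) and can be solved by hand (a quick sketch of mine, not taken from the cited sources): \(z = s(1-p)+spz\) rearranges to \[\psi(s) = \frac{s(1-p)}{1-ps},\] which is the pgf of a geometric distribution on \(\{1,2,\dots\}\) with success probability \(1-p\). In the cascade reading: \(\operatorname{Bernoulli}(p)\) offspring give a cascade that is a single chain, whose length is geometric.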

## References

*IEEE Transactions on Information Theory* 62 (4): 2184–2202.

*Communications in Partial Differential Equations* 14 (4): 867–93.

*arXiv:1304.3741 [Math]*, April.

*Physical Review E* 88 (3): 032124.

*Wiley StatsRef: Statistics Reference Online*. John Wiley & Sons, Ltd.

*Communications in Statistics - Theory and Methods* 21 (1): 89–109.

*Lagrangian Probability Distributions*. Boston: Birkhäuser.

*Statistics* 20 (3): 407–15.

*Communications in Statistics* 2 (3): 263–72.

*Communications in Statistics - Theory and Methods* 13 (12): 1533–47.

*American Journal of Mathematical and Management Sciences* 8 (1-2): 181–202.

*SIAM Journal on Applied Mathematics* 23 (2): 239–48.

*Journal of Applied Probability* 6 (3): 682–86.

*Biometrika* 47 (1-2): 143–50.

*The Annals of Probability* 30 (3): 1223–37.

*Communications in Statistics - Theory and Methods* 45 (3): 712–21.

*Proceedings of the Physical Society. Section A* 63 (10): 1101.

*Proceedings of the Royal Irish Academy. Section A: Mathematical and Physical Sciences* 54: 245–62.

*Journal of Probability and Statistical Science* 8 (1): 113–23.

*Proceedings of the Physical Society. Section A* 65 (7): 465.

*Proceedings of the Physical Society. Section A* 65 (10): 854.

*Proceedings of the 25th ACM International Conference on Information and Knowledge Management*, 1069–78. CIKM '16. New York, NY, USA: ACM.

*Aequationes Mathematicae* 49 (1): 57–85.

*Sankhyā: The Indian Journal of Statistics, Series A (1961-2002)* 27 (2/4): 249–58.

*Annals of Mathematics* 49 (3): 583–99.

*The Annals of Mathematical Statistics* 20 (2): 206–24.

*Journal of Applied Probability* 54 (3): 905–20.

*Journal of Physics D: Applied Physics* 20 (2): 151.

*Microsurveys in Discrete Probability*, edited by David Aldous and James Propp. Vol. 41. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. Providence, Rhode Island: American Mathematical Society.

*Proceedings of the Indian Academy of Sciences - Section A*, 44: 263–73. Springer.

*World Wide Web 2017, International Conference on*, 1–9. WWW '17. Perth, Australia: International World Wide Web Conferences Steering Committee.

*Biostatistics*, edited by Ian B. MacNeill, Gary J. Umphrey, Allan Donner, and V. Krishna Jandhyala, 259–68. Dordrecht: Springer Netherlands.

*Journal of Applied Probability* 31: 185–97.

*Biometrika* 48 (1-2): 222–24.
