# Generalized Galton-Watson processes

This needs a better intro, but the Galton-Watson process is the archetype here.

There are many standard expositions. Two good ones:

• Gesine Reinert’s Introduction to Branching Processes: Parts 1 and 2.

• Steven Lalley’s intro.

Working through some generalisations of the Galton-Watson process as an INAR process. That is, something like the Galton-Watson process, but with a more general dependence on its history.

Consider

• van Harn & Steutel’s work on “F-stable branching processes.” Also bounded influence kernel?

• Lee, Hopcraft, Jakeman and Williams on discrete stable processes. Discrete state, continuous time - How do these differ from the usual Hawkes processes, if at all?

## Long Memory Galton-Watson

For my own edification and amusement I would like to walk through the construction of a particular analogue of the continuous time Hawkes point process on a discrete index set.

Specifically, a non-Markovian generalisation of the Galton-Watson process which still operates in quantised time, but has interesting, possibly-unbounded influence kernels, like the Hawkes process.

I denote a realisation of the process $$\{N_t\}_{t\in\mathbb{N}}$$, the associated non-negative increment process $$\{X_t\}\equiv\{N_t-N_{t-1}\}$$, and a conditional non-negative pseudo-intensity process $$\lambda_t\equiv g(\{N_s\}_{s < t})$$, adapted to the whole history $$\{N_s\}_{s < t}$$. By “pseudo-intensity” I mean that the innovation law is parameterised (solely, for now) by this scalar-valued process: $$X_t|\{N_s\}_{s < t}\sim \mathcal{L}(\lambda_t)$$. For the moment I will take $$\mathcal{L}$$ to be Poisson. To complete the analogy with the Hawkes process I choose the dependence on the past values of the process to be linear, with influence kernel $$\phi$$. This is also close to cluster processes, and indeed there are lots of papers noticing the connection.

$\lambda_t\equiv \phi * X$

Then a linear conditional intensity process $$\lambda_t$$ would be

$\lambda_t := \mu + \eta\sum_{0 \leq s <t} \phi(t-s-1)X_s$

The $$-1$$ in $$\phi(t-s-1)$$ is to make sure our influence kernel is defined on $$\mathbb{N}_0$$, which is convenient for typical count distribution functions.
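Here is a minimal simulation sketch of this recursion, assuming Poisson innovations, a truncated geometric influence kernel evaluated at lag $$t-s-1$$, and my own names throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T, mu, eta, phi, p):
    """Simulate X_t ~ Poisson(lambda_t) with
    lambda_t = mu + eta * sum_{s<t} phi(t - s - 1) X_s,
    truncating the kernel at lag p for tractability."""
    X = np.zeros(T, dtype=int)
    lam = np.zeros(T)
    kernel = np.array([phi(j) for j in range(p)])  # phi on {0, ..., p-1}
    for t in range(T):
        lags = X[max(0, t - p):t][::-1]            # X_{t-1}, X_{t-2}, ...
        lam[t] = mu + eta * kernel[:len(lags)] @ lags
        X[t] = rng.poisson(lam[t])
    return X, lam

# Geometric influence kernel, normalised to (approximately) sum to 1.
rho = 0.5
geom = lambda j: (1 - rho) * rho**j

X, lam = simulate(T=10_000, mu=1.0, eta=0.5, phi=geom, p=50)
# Subcritical: with sum(phi) = 1 the stationary mean is mu / (1 - eta) = 2.
print(X.mean())
```

The `eta` here plays the role of the branching ratio, since the kernel is normalised to unit mass.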

If the kernel has bounded support such that

$s\geq p\Rightarrow\phi(s)=0$

then we have an autoregressive count process of order p. More on that in a moment.

What influence kernel shape will we use?

Geometric decay is natural, although the kernel doesn’t have to be strictly monotonic, or even unimodal; Poisson or negative binomial shapes would also work. In general we could take any probability mass function as the influence kernel, or use a nonparametric form. For example, a sum of exponentials:

$\phi_\text{Exp}(i) = \sum_{0 \leq k <K} b_ke^{a_ki}$

for some $$\{a_k, b_k\}$$.

If we expect to use sparsifying lasso penalties on such a kernel, we probably want to decompose it in a way that minimises correlation between mixture components, to improve our odds of correctly identifying dependency at different scales. If we constrain our components to be positive, the only way for them to be completely orthogonal is to have disjoint support.

As an intermediate option, we could choose a Poisson mixture

$\phi_\text{Pois}(i) = \sum_{0 \leq k <K} b_k\frac{a_k^i}{i!} e^{-a_k}$

There is a subtlety here with regard to the filtration - do we set up the kernel strictly to regard triggering events at previous timesteps? If so, no problem. If we want to allow same-day triggering, we might allow the exogenous events to also contribute to the kernel, in which case we might have to estimate an extra influence parameter, or find some principled way to include it in the kernel weights.

🏗 Derive the unconditional distribution using, e.g., generating functions.

## Autoregressive characterisation

Steutel and van Harn characterised this process in 1979. (Wait - is this strictly true, that we can make this go with a thinning operator? Many related definitions here, muddying the waters.)

We need their binomial thinning operator $$\odot$$, which is defined for a count RV $$X$$ by

$\alpha\odot X = \sum_{i=1}^X B_i$

for $$B_i$$ independent $$\operatorname{Bernoulli}(\alpha)$$ RVs, independent of $$X$$.

In terms of generating functions,

$$G_{\alpha\odot X}(s)=G_{X}(1-\alpha+\alpha s)$$
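A sketch of the operator in numpy; the pgf identity implies that thinning a Poisson variable yields another Poisson variable with mean scaled by $$\alpha$$, which we can check empirically (names are my own):

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(alpha, x):
    """Binomial thinning: alpha ⊙ x is a sum of x iid Bernoulli(alpha)
    draws, i.e. simply a Binomial(x, alpha) variate."""
    return rng.binomial(x, alpha)

# Check against the pgf identity G_{a⊙X}(s) = G_X(1 - a + a s):
# for X ~ Poisson(lam), G_X(s) = exp(lam (s - 1)), so
# G_{a⊙X}(s) = exp(a lam (s - 1)), i.e. a ⊙ X ~ Poisson(a * lam).
lam, alpha = 4.0, 0.25
X = rng.poisson(lam, size=200_000)
Y = thin(alpha, X)
print(Y.mean(), Y.var())  # both should be near alpha * lam = 1.0
```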

There are many generalisations of this operator - see for an overview.

Anyway, you can use this thinning operator to construct an autoregressive time series model driven by thinned versions of its history.
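The simplest such construction is the INAR(1) recursion $$X_t = \alpha\odot X_{t-1} + \varepsilon_t$$ with iid count innovations $$\varepsilon_t$$. A sketch with Poisson innovations (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def inar1(T, alpha, mu):
    """INAR(1): X_t = alpha ⊙ X_{t-1} + eps_t, eps_t ~ Poisson(mu).
    Each individual alive at t-1 survives to t with probability alpha,
    plus Poisson immigration - an autoregression driven by thinning."""
    X = np.zeros(T, dtype=int)
    for t in range(1, T):
        X[t] = rng.binomial(X[t - 1], alpha) + rng.poisson(mu)
    return X

X = inar1(T=50_000, alpha=0.6, mu=1.0)
# Stationary mean is mu / (1 - alpha) = 2.5; lag-1 autocorrelation is alpha.
print(X.mean())
```

The long-memory process above generalises this by letting survivors of many past generations contribute, weighted by the influence kernel, rather than only the previous step.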

(Maybe it would be simpler to use Fokianos’ GLM characterisation? I think they are equivalent or nearly equivalent in this case - certainly with stable distributions they are.)

## Estimation of parameters

Parameter estimation is well studied for finite-order GINAR(p) processes.

## Influence kernels

Hardiman et al propose multiple-scale exponential kernels to simultaneously estimate decay rates and branching ratios. Bacry et al (2012) have a related nonparametric method based on estimating the kernel in the spectral domain; its convergence properties are unclear.

We are also free to use a sum-of-exponentials kernel, possibly calculating the branching ratio from it alone, along with some measure of tail-heaviness.

Possibly smooth-lasso (penalises change between adjacent components).

## Endo-exo models

Note that we can still recover the endo-exo model with this by simply calculating the projected ratio between exogenous and endogenous events. It would be interesting to derive the properties of this as a single parameter of interest.
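For intuition, a back-of-envelope version of that projected ratio, assuming the linear Poisson model above is stationary with branching ratio $$n = \eta\sum_{i\geq 0}\phi(i) < 1$$: taking expectations of the intensity gives

$\bar\lambda = \mu + n\bar\lambda \quad\Rightarrow\quad \bar\lambda = \frac{\mu}{1-n}$

so the exogenous fraction of events is $$\mu/\bar\lambda = 1-n$$ and the endogenous fraction is $$n$$, exactly as in the continuous-time Hawkes case.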
