# Markov bridge processes

Especially Lévy bridges, Doob h-transforms

October 15, 2019 — April 20, 2022

\[ \renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}{\mathbb{I}} \renewcommand{\disteq}{\stackrel{d}{=}} \renewcommand{\rv}[1]{\mathsf{#1}} \renewcommand{\vrv}[1]{\vv{\rv{#1}}} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}} \]

A bridge process for a time-indexed Markov process \(\{\rv{x}(t)\}_{t\in[0,T]}\) is obtained by conditioning that process, on an interval \([0,T]\), to start from \(\rv{x}(0)=X\) and attain a fixed value \(\rv{x}(T)=Y\) at the final time. We write that as \(\{\rv{x}(t)\mid \rv{x}(0)=X,\rv{x}(T)=Y\}_{t\in[0,T]}.\) Put another way, given the starting and finishing values of a Markov process, I would like to *rewind time* to find the values its path takes at a midpoint which are “compatible” with the endpoints.^{1} Or, if we jump *back* then *forward* again, we can construct generalised autoregressive processes.

## 1 Lévy bridges

I am mostly interested in bridges for Lévy increment processes in particular.

It is easy to define a bridge for a Lévy process. For simplicity we take it to be \(\{\rv{x}(t)\}_{t\in [0,1]}\) and assume \(\rv{x}(0)=0\). Suppose we wish to find the marginal distribution of one point on the bridge, \(\rv{x}(s)\mid \rv{x}(1)=Y\) for some \(s\in(0,1)\). By stationarity and independence of increments, \(\rv{x}(1)\disteq \rv{x}'(s)+\rv{x}''(1-s)\), where \(\rv{x}'\) and \(\rv{x}''\) are independent copies of \(\rv{x}\). It follows that \(\Ex\rv{x}'(s) = s\Ex\rv{x}(1)\), \(\Ex\rv{x}''(1-s) = (1-s)\Ex\rv{x}(1)\), and \(\left(\rv{x}(s)\mid \rv{x}(1)=Y\right)+\rv{x}'''(1-s)\disteq \rv{x}(1).\) The bridge is then a random function mapping \(Y\) into the required conditional. TBC
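To make the marginal concrete: when \(\rv{x}(t)\) admits a density \(f_t\), Bayes’ rule gives the bridge marginal density \(x\mapsto f_s(x)\,f_{1-s}(Y-x)/f_1(Y)\). Here is a sketch of a numerical check for the gamma process (shape rate \(a\), so \(\rv{x}(t)\sim\operatorname{Gamma}(at,1)\); parameter values are arbitrary and `scipy` is assumed), where the ratio collapses to a scaled Beta law:

```python
import numpy as np
from scipy.stats import gamma, beta

# Bridge marginal via the density ratio  f_s(x) f_{1-s}(Y-x) / f_1(Y),
# specialised to a gamma process: x(t) ~ Gamma(shape=a*t, scale=1).
a, s, Y = 3.0, 0.4, 2.5  # arbitrary shape rate, interior time, endpoint
x = np.linspace(0.01, Y - 0.01, 200)

ratio = gamma.pdf(x, a * s) * gamma.pdf(Y - x, a * (1 - s)) / gamma.pdf(Y, a)
# Known closed form: x(s) | x(1)=Y  is distributed as  Y * Beta(a*s, a*(1-s)).
closed = beta.pdf(x / Y, a * s, a * (1 - s)) / Y

print(np.allclose(ratio, closed))  # True
```

The exponential terms cancel in the ratio, which is why the gamma bridge loses all dependence on the scale parameter and reduces to a Beta subdivision of the endpoint.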

It is trivial (more or less) to derive the properties of the bridge for a Wiener process, so *that* can be found in every stochastic processes textbook. There is an introduction for Lévy processes in (Bertoin 1996, VIII.3). TODO: check that it admits processes with jumps.
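For reference, the textbook Wiener-process answer: \(\rv{x}(s)\mid\rv{x}(0)=X,\rv{x}(T)=Y\) is Gaussian with mean \(X+\tfrac{s}{T}(Y-X)\) and variance \(\tfrac{s(T-s)}{T}\). A minimal sampler (endpoint and time values here are arbitrary):

```python
import numpy as np

def brownian_bridge_sample(X, Y, s, T, rng):
    """Sample x(s) | x(0)=X, x(T)=Y for a standard Wiener process.

    The conditional law is Gaussian, with mean linearly interpolating
    the endpoints and variance s * (T - s) / T.
    """
    mean = X + (s / T) * (Y - X)
    var = s * (T - s) / T
    return mean + np.sqrt(var) * rng.standard_normal()

rng = np.random.default_rng(42)
samples = np.array([brownian_bridge_sample(0.0, 1.0, 0.25, 1.0, rng)
                    for _ in range(100_000)])
print(samples.mean())  # ≈ 0.25        (= X + s(Y - X))
print(samples.var())   # ≈ 0.1875      (= s(1 - s))
```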

Fitzsimmons, Pitman, and Yor (1993) describe a general assortment of methods for handling the properties of bridges, and Perman, Pitman, and Yor (1992) specialise them to pure-jump processes (Poisson, gamma).

🏗

### 1.1 When is a Lévy bridge distribution computationally tractable?

I know it is simple for Gamma, Brownian, and Poisson Lévy processes. Are there others? Yor (2007) asserts that among the Lévy processes, only Brownian and Gamma processes have closed-form expressions for the bridge. That is not right; a Poisson process also has a simple bridge (binomial subdivision of the increment). Moreover, both the Gamma and Brownian cases still work with a deterministic drift term added.
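The binomial subdivision claim is cheap to check by simulation: for a Poisson process of rate \(\lambda\), the bridge marginal is \(\rv{x}(s)\mid\rv{x}(1)=n\sim\operatorname{Binomial}(n,s)\), independent of \(\lambda\). A sketch (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, s, n = 4.0, 0.3, 6  # rate, interior time, conditioned endpoint count

# Simulate (N(s), N(1)) via independent Poisson increments,
# then keep the runs where the endpoint hits the conditioning value.
first = rng.poisson(lam * s, size=1_000_000)
total = first + rng.poisson(lam * (1 - s), size=1_000_000)
conditioned = first[total == n]  # empirical law of N(s) | N(1)=n

# Binomial(n, s) predicts mean n*s and variance n*s*(1-s).
print(conditioned.mean())  # ≈ 1.8
print(conditioned.var())   # ≈ 1.26
```

Note that \(\lambda\) drops out of the conditional law entirely, which is the Poisson analogue of the gamma bridge shedding its scale parameter.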

Anyway, there is a *lot* of work on conditioning Brownian motion and, more generally, Gaussian processes; see Gaussian processes, esp. Gaussian processes with pathwise conditioning.

However, I cannot think of other easy examples. Compound Poisson processes, for example, will have a nasty integral term over an unbounded number of dimensions if we do it the obvious way.

Surely there are proofs about this? I have half a mind to see if I cannot find a classification of processes admitting tractable bridges with my rudimentary variational calculus skills.

## 2 Continuous time, discrete-state Markov processes

A classic. TBC.
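One concrete recipe for finite state spaces at least: with generator matrix \(Q\), the bridge marginal is \(\Pr(\rv{x}(s)=k\mid\rv{x}(0)=i,\rv{x}(T)=j)=[e^{sQ}]_{ik}[e^{(T-s)Q}]_{kj}/[e^{TQ}]_{ij}\). A sketch, assuming `scipy` and a made-up three-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Bridge marginal for a continuous-time Markov chain with generator Q:
# P(x(s)=k | x(0)=i, x(T)=j) = [e^{sQ}]_{ik} [e^{(T-s)Q}]_{kj} / [e^{TQ}]_{ij}
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])  # toy generator; rows sum to zero
i, j, s, T = 0, 2, 0.7, 2.0

marginal = expm(s * Q)[i, :] * expm((T - s) * Q)[:, j] / expm(T * Q)[i, j]
print(marginal.sum())  # ≈ 1, by Chapman–Kolmogorov
```

The normalisation is automatic because the numerator summed over \(k\) is exactly the Chapman–Kolmogorov decomposition of \([e^{TQ}]_{ij}\).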

## 3 General Markov processes

i.e. not just processes with stationary, independent increments, but any process satisfying the Markov property.

Alexandre Hoang Thiery describes Doob’s h-transform method for general Markov processes and notes that while it might be good for deducing certain properties of the paths of such a conditioned process, it is not necessarily a good way of sampling it. Although if you happen to derive an analytic form that way, it is good for both.
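For a finite-state discrete-time chain, at least, the h-transform is easy to write down and does sample well: conditioning on \(\rv{x}(T)=j\) tilts the kernel to \(\tilde{P}_t(i,k)=P(i,k)\,h_{t+1}(k)/h_t(i)\) with \(h_t(i)=P^{T-t}(i,j)\), which is automatically a stochastic matrix. A sketch with a made-up two-state chain:

```python
import numpy as np

def sample_bridge(P, i0, jT, T, rng):
    """Sample a path of a discrete-time Markov chain conditioned on
    x(T) = jT, via Doob's h-transform with h_t(i) = P^{T-t}(i, jT)."""
    n = P.shape[0]
    # Compute h_t backwards in time; h_T is the indicator of the target.
    h = [None] * (T + 1)
    h[T] = np.eye(n)[:, jT]
    for t in range(T - 1, -1, -1):
        h[t] = P @ h[t + 1]
    path = [i0]
    for t in range(T):
        # Tilted kernel row: sums to 1 since sum_k P(i,k) h_{t+1}(k) = h_t(i).
        probs = P[path[-1], :] * h[t + 1] / h[t][path[-1]]
        path.append(rng.choice(n, p=probs))
    return path

# Toy two-state chain (assumption), conditioned to end in state 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
rng = np.random.default_rng(1)
path = sample_bridge(P, i0=0, jT=1, T=5, rng=rng)
print(path[-1])  # always 1, by construction
```

The last step lands on the target with probability one because \(h_T\) is an indicator, so the tilting concentrates all remaining mass there.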

For a fairly old and simple concept, work in the area skews surprisingly recent (i.e. post-1990), given that applications seem to cite the early, gruelling (Doob 1957) as the foundation; perhaps for about 30 years no-one wanted to learn enough potential theory to make it go? Or perhaps I am missing something because of, e.g., terminology drift.

See also

- Toywiki: Doob’s \(h\)-transform is a worked example based on Bloemendal (2010)
- Dominic Yeo’s h-transform posts

## 4 References

Bertoin, Jean. 1996. *Lévy Processes*. Cambridge Tracts in Mathematics 121. Cambridge University Press.

*Markov Processes, Brownian Motion, and Time Symmetry*. Grundlehren Der Mathematischen Wissenschaften 249.

Doob, J. L. 1957. “Conditional Brownian Motion and the Boundary Limits of Harmonic Functions.” *Bulletin de la Société Mathématique de France* 85.

*Advances in Applied Mathematics*.

*Publications of the Research Institute for Mathematical Sciences*.

Fitzsimmons, Pat, Jim Pitman, and Marc Yor. 1993. “Markovian Bridges: Construction, Palm Interpretation, and Splicing.” In *Seminar on Stochastic Processes, 1992*. Progress in Probability. Birkhäuser.

*Stochastic Analysis and Applications*.

*The Annals of Probability*.

*Eprint arXiv:1112.0220*.

*Markov Chains and Mixing Times*.

*Transactions of the American Mathematical Society*.

*Journal of Physics A: Mathematical and General*.

Perman, Mihael, Jim Pitman, and Marc Yor. 1992. “Size-Biased Sampling of Poisson Point Processes and Excursions.” *Probability Theory and Related Fields* 92.

Privault, Nicolas, and Jean-Claude Zambrini. 2004. “Markovian Bridges and Reversible Diffusion Processes with Jumps.” *Annales de l’Institut Henri Poincare (B) Probability and Statistics* 40.

*The Annals of Probability*.

*Infinite Divisibility of Probability Distributions on the Real Line*.

*Statistics and Computing*.

*arXiv:2106.00087 [Math]*.

*Advances in Mathematical Finance*. Applied and Numerical Harmonic Analysis.

## Footnotes

There is a more general version given in (Privault and Zambrini 2004) where the initial and final states of the process are permitted to have distributions rather than fixed values.↩︎