\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}{\mathbb{I}}\]

A bridge process for some time-indexed Markov process \(\{\Lambda(t)\}_{t\in[S,T]}\)
is obtained from that process by conditioning it to attain a fixed value
\(\Lambda(T)=Y\)
starting from \(\Lambda(S)=X\) on some interval \([S,T]\).
We write that as \(\{\Lambda(t)\mid \Lambda(S)=X,\Lambda(T)=Y\}_{t\in[S,T]}.\)
Put another way, given the starting and finishing values of a Markov process, I would like to *rewind time* to find values of its path at intermediate times which are “compatible” with the endpoints.^{1}

I am mostly interested in this for Lévy processes in particular, rather than arbitrary Markov processes. There is an introduction to these in (Bertoin 1996, VIII.3), but I am not sure that it permits processes with jumps.

For a fairly old and simple concept, work in the area skews surprisingly recent (i.e. post-1990), given that applications tend to cite the early, gruelling (Doob 1957) as the foundation; perhaps for about thirty years no one wanted to learn enough potential theory to make it go.

It is more or less trivial to derive the properties of the bridge of a Wiener process, so *that* can be found in every stochastic processes textbook.
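To make the Wiener case concrete: the bridge law is Gaussian, so the path can be sampled exactly one time point at a time, using the standard conditional law \(\Lambda(t)\mid \Lambda(s)=a,\Lambda(T)=y \sim \mathcal{N}\bigl(a + \tfrac{t-s}{T-s}(y-a),\ \tfrac{(t-s)(T-t)}{T-s}\bigr)\). A minimal sketch (the function name `brownian_bridge` is mine, not from any of the cited references):

```python
import numpy as np

def brownian_bridge(x, y, times, rng):
    """Sample a standard Wiener process pinned to x at times[0] and
    to y at times[-1], via its exact Gaussian conditional law."""
    path = np.empty(len(times))
    path[0], path[-1] = x, y
    T = times[-1]
    for i in range(1, len(times) - 1):
        s, t = times[i - 1], times[i]
        a = path[i - 1]
        # Conditional mean interpolates linearly towards the pinned endpoint;
        # conditional variance vanishes at both ends of the interval.
        mean = a + (t - s) / (T - s) * (y - a)
        var = (t - s) * (T - t) / (T - s)
        path[i] = mean + np.sqrt(var) * rng.standard_normal()
    return path
```

For the bridge from \(0\) to \(0\) on \([0,1]\) this recovers the textbook marginal variance \(t(1-t)\), e.g. \(1/4\) at the midpoint.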

Alexandre Hoang Thiery describes Doob’s h-transform method and notes that, while it may be good for deducing certain properties of the paths of such a conditioned process, it is not necessarily a good way of sampling it.
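The sampling caveat is easy to see even in the Brownian case, where the h-transform turns the bridge into the SDE \(\dd X_t = \frac{y - X_t}{T - t}\,\dd t + \dd W_t\), whose drift blows up as \(t\to T\). A naive Euler–Maruyama discretisation (a sketch of the standard construction, not Thiery’s; the function name is mine) has to step straight through that singularity:

```python
import numpy as np

def h_transform_bridge(x, y, T, n, rng):
    """Euler-Maruyama discretisation of dX_t = (y - X_t)/(T - t) dt + dW_t,
    the h-transform drift that pins Brownian motion to y at time T."""
    dt = T / n
    path = np.empty(n + 1)
    path[0] = x
    for i in range(n):
        t = i * dt
        # The drift diverges as t -> T; the final step has T - t = dt,
        # so drift * dt = y - path[i] and the endpoint is forced near y.
        drift = (y - path[i]) / (T - t)
        path[i + 1] = path[i] + drift * dt + np.sqrt(dt) * rng.standard_normal()
    return path
```

The last increment drags the path to within \(O(\sqrt{\dd t})\) of \(y\) rather than hitting it exactly, and the singular drift makes the discretisation error hard to control near \(T\) — one illustration of why the h-transform is awkward as a simulation recipe, exact conditioning (as in the Gaussian case) being preferable when available.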

(Fitzsimmons, Pitman, and Yor 1993) give us a fairly general bunch of methods for handling the properties of bridges, and (Perman, Pitman, and Yor 1992) give similar tools for pure-jump processes (Poisson, gamma).

🏗

## References

Bertoin, Jean. 1996. *Lévy Processes*. Cambridge Tracts in Mathematics 121. Cambridge; New York: Cambridge University Press.

*Markov Processes, Brownian Motion, and Time Symmetry*, 320–35. Grundlehren Der Mathematischen Wissenschaften. New York, NY: Springer. https://doi.org/10.1007/0-387-28696-9_11.

*Markov Processes, Brownian Motion, and Time Symmetry*. 2nd ed. Grundlehren Der Mathematischen Wissenschaften 249. Berlin ; New York: Springer.

*Journal of Physics A: Mathematical and General* 36 (12): 3049–66. https://doi.org/10.1088/0305-4470/36/12/312.

Doob, J. L. 1957. “Conditional Brownian Motion and the Boundary Limits of Harmonic Functions.” *Bulletin de la Société Mathématique de France* 85: 431–58. https://eudml.org/doc/urn:eudml:doc:86928.

*Publications of the Research Institute for Mathematical Sciences* 40 (3): 669–88. https://doi.org/10.2977/prims/1145475488.

Fitzsimmons, Pat, Jim Pitman, and Marc Yor. 1993. “Markovian Bridges: Construction, Palm Interpretation, and Splicing.” In *Seminar on Stochastic Processes, 1992*, edited by E. Çinlar, K. L. Chung, M. J. Sharpe, R. F. Bass, and K. Burdzy, 101–34. Progress in Probability. Boston, MA: Birkhäuser Boston. https://doi.org/10.1007/978-1-4612-0339-1_5.

*The Annals of Probability* 16 (2): 620–41. https://doi.org/10.1214/aop/1176991776.

*Markov Chains and Mixing Times*. Providence, R.I: American Mathematical Society.

*Transactions of the American Mathematical Society* 355 (9): 3669–97. https://doi.org/10.1090/S0002-9947-03-03226-4.

Perman, Mihael, Jim Pitman, and Marc Yor. 1992. “Size-Biased Sampling of Poisson Point Processes and Excursions.” *Probability Theory and Related Fields* 92 (1): 21–39. https://doi.org/10.1007/BF01205234.

Privault, Nicolas, and Jean-Claude Zambrini. 2004. *Annales de l’Institut Henri Poincaré (B) Probability and Statistics* 40 (5): 599–633. https://doi.org/10.1016/j.anihpb.2003.08.001.

There is a more general version given in (Privault and Zambrini 2004) where the initial and final states of the process are permitted to have distributions rather than fixed values.↩︎