lévy_processes on Dan MacKinlay
https://danmackinlay.name/tags/l%C3%A9vy_processes.html
Recent content in lévy_processes on Dan MacKinlay
Thu, 24 Jun 2021 08:39:14 +1000

Stochastic partial differential equations
https://danmackinlay.name/notebook/spdes.html
Thu, 24 Jun 2021 08:39:14 +1000

Placeholder, for the multidimensional PDE version of SDEs.
This picture of ice floes on the Bering shelf looks like it might be some kinda stochastic PDE thing, right?
So, how do I handle these?
As SDEs taking values in a Banach space

Keywords: Q-Wiener process, cylindrical Wiener process, Banach-space-valued process.
A textbook recommended to me by Thomas Scheckter is Liu and Röckner (2015), which seems to have the same blurb as the older Prévôt and Röckner (2007).

Gaussian processes
https://danmackinlay.name/notebook/gaussian_processes.html
Wed, 23 Jun 2021 15:19:40 +1000

“Gaussian processes” are stochastic processes/fields with jointly Gaussian distributions of observations. The most familiar of these to many of us is the Gauss-Markov process, a.k.a. the Wiener process, but there are many others. These processes are convenient due to certain useful properties of the multivariate Gaussian distribution, e.g. being uniquely specified by first and second moments, nice behaviour under various linear operations, kernel tricks… Especially neat applications include Gaussian process regression and spatial statistics.

Stochastic differential equations
https://danmackinlay.name/notebook/stochastic_differential_equations.html
Tue, 22 Jun 2021 12:38:44 +1000

By analogy with differential equations, which use vanilla calculus to define deterministic dynamics, we can define stochastic differential equations, which use stochastic calculus to define random dynamics.
SDEs are time-indexed, causal stochastic processes which notionally integrate an ordinary differential equation over some driving noise. SPDEs are to SDEs as PDEs are to ODEs.
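As a concrete sketch of how such an equation can be integrated numerically (my own illustration, not from the notebook), here is a minimal Euler–Maruyama discretisation of a scalar SDE \(dX_t = a(X_t)\,dt + b(X_t)\,dB_t\); the Ornstein–Uhlenbeck drift and diffusion coefficients are invented for the example:

```python
import numpy as np

def euler_maruyama(a, b, x0, t_max, n_steps, rng):
    """Integrate dX_t = a(X_t) dt + b(X_t) dB_t with the Euler-Maruyama scheme."""
    dt = t_max / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        db = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over [t_k, t_{k+1}]
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * db
    return x

# Ornstein-Uhlenbeck process: drift pulls the state back toward zero.
rng = np.random.default_rng(42)
path = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.5, 1.0, 5.0, 2000, rng)
```

Euler–Maruyama has strong order 1/2 in general, so halving the step size shrinks pathwise error only by a factor of about \(\sqrt{2}\).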
Useful in state filters, optimal control, financial mathematics, etc.

Stochastic calculus
https://danmackinlay.name/notebook/stochastic_calculus.html
Mon, 21 Jun 2021 10:18:30 +1000

Calculus that works, in a certain sense, for random objects of certain types. Very popular for stochastic differential equations. This is a popular and well-explored tool; the notes here are not intended as a tutorial. I simply want to maintain a list of useful definitions, because the literature gets messy and ambiguous sometimes.
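For flavour, a quick simulation sketch (mine, not the notebook's) of the standard Itô identity \(\int_0^T B_s \, dB_s = (B_T^2 - T)/2\), approximating the integral by left-endpoint sums, which is the Itô convention:

```python
import numpy as np

# Numerical check of the Ito identity  int_0^T B_s dB_s = (B_T^2 - T) / 2.
# The left-endpoint Riemann sum converges to the Ito integral;
# a midpoint (Stratonovich) sum would instead converge to B_T^2 / 2.
rng = np.random.default_rng(0)
T, n = 1.0, 100_000
db = rng.normal(0.0, np.sqrt(T / n), size=n)  # Brownian increments
B = np.concatenate([[0.0], np.cumsum(db)])    # Brownian path on a grid
ito_sum = np.sum(B[:-1] * db)                 # left endpoints: Ito convention
print(ito_sum, (B[-1] ** 2 - T) / 2)
```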
Itô Integral

TBD.

Transforms of RVs
https://danmackinlay.name/notebook/transforms_of_rvs.html
Fri, 14 May 2021 11:47:34 +1000

I have a nonlinear transformation of a random process. What is its distribution?
Related: What is the gradient of the transform? That is the topic of the reparameterization trick.
Taylor expansion

Not complicated, but subtle (Gustafsson and Hendeby 2012).
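Before the formal setup, a minimal numerical sketch of the first-order version of this idea, the classic delta method: linearise \(g\) at the mean, so \(\mathbb{E}[z]\approx g(\mu)\) and \(\operatorname{Cov}[z]\approx J\Sigma J^{\top}\). The polar-to-Cartesian example and its hand-coded Jacobian are invented for illustration:

```python
import numpy as np

def linearised_moments(g, jac, mu, cov):
    """First-order (delta-method) approximation to the moments of z = g(x)
    for x ~ N(mu, cov):  E[z] ~ g(mu),  Cov[z] ~ J cov J^T."""
    J = jac(mu)
    return g(mu), J @ cov @ J.T

# Toy nonlinearity: polar -> Cartesian coordinates, with hand-coded Jacobian.
g = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
jac = lambda x: np.array([
    [np.cos(x[1]), -x[0] * np.sin(x[1])],
    [np.sin(x[1]),  x[0] * np.cos(x[1])],
])
mu = np.array([1.0, 0.1])
cov = np.diag([0.01, 0.01])
mean_z, cov_z = linearised_moments(g, jac, mu, cov)
```

The second-order expansion adds Hessian corrections to the mean; the unscented transform below avoids derivatives altogether.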
Consider a general nonlinear differentiable mapping \(g:\mathbb{R}^{n_{x}}\to\mathbb{R}^{n_{z}}\) applied to a variable \(x\), defining \(z:=g(x)\), and its second-order Taylor expansion.

Stochastic Taylor expansion
https://danmackinlay.name/notebook/stochastic_taylor_expansion.html
Fri, 14 May 2021 11:37:27 +1000

Placeholder, for discussing the Taylor expansion equivalent for an SDE.
Let \(X\) solve the scalar SDE \(\mathrm{d}X_{t}=a(X_{t})\,\mathrm{d}t+b(X_{t})\,\mathrm{d}B_{t}\), and let \(f\) denote a smooth function. Then, using Itô’s lemma, we may construct a local approximation by \[ f\left(X_{t}\right)=f\left(X_{0}\right)+\int_{s=0}^{t} L^{0} f\left(X_{s}\right) d s+\int_{s=0}^{t} L^{1} f\left(X_{s}\right) d B_{s} \] where the operators \(L^{0}\) and \(L^{1}\) are defined by \[ L^{0}=a(x) \frac{\partial}{\partial x}+\frac{1}{2} b(x)^{2} \frac{\partial^{2}}{\partial x^{2}} \quad \text { and } \quad L^{1}=b(x) \frac{\partial}{\partial x}. \] We may notionally repeat this procedure arbitrarily many times.

Chaos expansions
https://danmackinlay.name/notebook/chaos_expansion.html
Mon, 15 Feb 2021 10:53:01 +1100

Placeholder, for a topic which has a slightly confusing name. To explore: connection to/difference from other methods of keeping track of the evolution of uncertainty in dynamical systems. C&C Gaussian process regression as used in Gratiet, Marelli, and Sudret (2016), functional data analysis, etc.
This is not the same thing as chaos in the sense of the deterministic chaos made famous by dynamical systems theory and fractal t-shirts.

Nonparametrically learning dynamical systems
https://danmackinlay.name/notebook/nn_learning_dynamics.html
Tue, 08 Dec 2020 13:05:58 +1100

Learning stochastic differential equations. Related: analysing a neural net itself as a dynamical system, which is not quite the same but crosses over. Variational state filters.
A deterministic version of this problem is what, e.g., the famous Vector Institute Neural ODE paper (Chen et al. 2018) did. Author Duvenaud argues that in some ways the hype ran away with the Neural ODE paper, and credits CasADi with the innovations here.

Gamma processes
https://danmackinlay.name/notebook/gamma_processes.html
Tue, 13 Oct 2020 15:13:34 +1100
Gamma processes provide the classic subordinator models, i.e. non-decreasing Lévy processes. By “gamma process” in fact I mean, specifically, a Lévy process with gamma increments.

Lévy processes
https://danmackinlay.name/notebook/levy_processes.html
Sat, 25 Jul 2020 10:27:21 +1000
Stochastic processes with i.i.d. increments over disjoint intervals of the same length, i.e. which arise from infinitely divisible distributions. Specific examples of interest include Gamma processes, Brownian motions, certain branching processes, non-negative processes…
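As a concrete toy (my own, not from the notebook): a compound Poisson process — a Poisson-distributed number of i.i.d. jumps — is among the simplest Lévy processes, since its increments over disjoint intervals are i.i.d. A minimal sketch, with Gaussian jump sizes chosen purely for illustration:

```python
import numpy as np

def compound_poisson(rate, jump_sampler, t, rng):
    """Value at time t of a compound Poisson process: a Poisson(rate * t)
    number of i.i.d. jumps. Increments over disjoint intervals are i.i.d.,
    making this one of the simplest examples of a Levy process."""
    n_jumps = rng.poisson(rate * t)
    return jump_sampler(rng, n_jumps).sum()

# Standard-normal jump sizes, purely for illustration:
# mean is 0, variance is rate * t * E[jump^2] = 3 * 2 * 1 = 6.
rng = np.random.default_rng(7)
samples = np.array([
    compound_poisson(3.0, lambda r, n: r.normal(0.0, 1.0, size=n), 2.0, rng)
    for _ in range(5000)
])
```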
Let’s start with George Lowther:
Continuous-time stochastic processes with stationary independent increments are known as Lévy processes.

Nonparametrically learning spatiotemporal systems
https://danmackinlay.name/notebook/nn_spatiotemporal.html
Thu, 02 Apr 2020 17:26:01 +1100

On learning stochastic partial differential equations and other processes using neural networks, Gaussian processes and other differentiable techniques. Uses the tools of dynamical NNs and their ilk. Probably handy for machine learning physics.
I know little about this yet, but here are some links.
References

Arridge, Simon, Peter Maass, Ozan Öktem, and Carola-Bibiane Schönlieb. 2019. “Solving Inverse Problems Using Data-Driven Models.” Acta Numerica 28 (May): 1–174.

Potential theory in probability
https://danmackinlay.name/notebook/potential_theory_probability.html
Wed, 12 Feb 2020 09:57:03 +1100

Placeholder. I am unfamiliar with potential theory as a thing in itself. I keep running into it, in Markov stochastic processes and in graphical models, and would like to know that I understand the tools I am using properly. At least some of the results seem to be terminological updates of words I know already; others perhaps not.
References

Doyle, Peter G, and J Laurie Snell.

Branching processes
https://danmackinlay.name/notebook/branching_processes.html
Fri, 07 Feb 2020 17:33:31 +1100

A diverse class of stochastic models that I am mildly obsessed with, where over some index set (usually time, space or both) there are distributed births of some kind, and we count the total population.

Infinitesimal generators
https://danmackinlay.name/notebook/infinitesimal_generators.html
Wed, 05 Feb 2020 09:33:10 +1100

At first I found it hard to visualise infinitesimal generators, but perhaps this simple diagram will help.
This note exists because no one explained satisfactorily to me why I should care about infinitesimal generators. These mysterious creatures pop up in the study of certain continuous-time Markov processes, such as stochastic differential equations driven by Lévy noise.

Poisson processes
https://danmackinlay.name/notebook/poisson_processes.html
Wed, 29 Jan 2020 10:56:30 +1100
Poisson processes are possibly the simplest subordinators, i.e. non-decreasing Lévy processes. They pop up everywhere, especially as a representation of point processes, and as the second continuous-time stochastic process anyone learns after the Brownian motion.
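A minimal simulation sketch, building arrival times by summing i.i.d. exponential inter-occurrence times; the code and parameter choices are mine, not the notebook's:

```python
import numpy as np

def poisson_arrival_times(rate, t_max, rng):
    """Arrival times of a Poisson process on [0, t_max], constructed by
    summing i.i.d. Exp(rate) inter-occurrence times (rate parameterization)."""
    times = []
    t = rng.exponential(1.0 / rate)  # numpy's exponential takes the scale 1/rate
    while t <= t_max:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

# The count of arrivals in [0, 10] with rate 2 should be Poisson(20):
# equal mean and variance, both 20.
rng = np.random.default_rng(0)
counts = np.array([poisson_arrival_times(2.0, 10.0, rng).size for _ in range(2000)])
```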
Basics

A Poisson process \(\{\mathsf{n}\}\sim \operatorname{PoisP}(\lambda)\) is a stochastic process whose inter-occurrence times are independently and identically distributed such that \(\mathsf{t}_i-\mathsf{t}_{i-1}\sim\operatorname{Exp}(\lambda)\) (rate parameterization).

Divisibility, decomposability, stability
https://danmackinlay.name/notebook/divisible_distributions.html
Tue, 28 Jan 2020 12:48:19 +1100

🏗 All of these are about sums; but presumably we can construct this over other algebraic structures of distributions, e.g. max-stable processes.
For now, some handy definition disambiguation.
Infinitely divisible

The Lévy process quality.
A probability distribution is infinitely divisible if it can be expressed as the probability distribution of the sum of any arbitrary natural number of independent and identically distributed random variables.

Markov bridge processes
https://danmackinlay.name/notebook/bridge_processes.html
Mon, 20 Jan 2020 16:33:25 +1100
A bridge process for some time-indexed Markov process \(\{\Lambda(t)\}_{t\in[S,T]}\) is obtained from that process by conditioning it to attain a fixed value \(\Lambda(T)=Y\), starting from \(\Lambda(S)=X\), on some interval \([S,T]\). We write that as \(\{\Lambda(t)\mid \Lambda(S)=X,\Lambda(T)=Y\}_{t\in[S,T]}.\) Put another way: given the starting and finishing values of a stochastic Markov process, I would like to rewind time to find out the values of its path at a midpoint which are “compatible” with the endpoints.
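The Brownian bridge is the canonical example: Brownian motion on \([0,T]\) conditioned on its endpoint. A minimal sampling sketch (my own illustration, not from the notebook) using the standard transformation \(x + (y-x)\,t/T + (W_t - (t/T)\,W_T)\):

```python
import numpy as np

def brownian_bridge(x, y, t_grid, rng):
    """Sample a Brownian bridge from (t_grid[0]=0, x) to (t_grid[-1]=T, y):
    Brownian motion conditioned to hit y at time T, via the classic
    transformation  x + (y - x) t/T + (W_t - (t/T) W_T)."""
    T = t_grid[-1]
    dw = rng.normal(0.0, np.sqrt(np.diff(t_grid)))
    w = np.concatenate([[0.0], np.cumsum(dw)])   # unconditioned Brownian path
    return x + (y - x) * t_grid / T + (w - (t_grid / T) * w[-1])

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 1.0, 501)
path = brownian_bridge(0.0, 2.0, t_grid, rng)   # pinned at 0 and 2
```

The subtraction of \((t/T)\,W_T\) is what enforces the conditioning: the endpoint randomness is removed, so the path hits \(y\) exactly, whatever Brownian path was drawn.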