dynamical_systems on Dan MacKinlay
https://danmackinlay.name/tags/dynamical_systems.html
Recent content in dynamical_systems on Dan MacKinlay

Prediction processes
https://danmackinlay.name/notebook/prediction_processes.html
Fri, 09 Apr 2021 16:15:20 +0800https://danmackinlay.name/notebook/prediction_processes.htmlReferences Placeholder. idk really, but Cosma Shalizi has opinions on unifying some interesting ideas in this area using chains with complete connections. Maybe related (?) predictive processing as a model of the mind.
References Blasques, F., S. J. Koopman, and A. Lucas. 2015. “Information-Theoretic Optimality of Observation-Driven Time Series Models for Continuous Responses.” Biometrika 102 (2): 325–43. https://doi.org/10.1093/biomet/asu076. Cox, D. R., Gudmundur Gudmundsson, Georg Lindgren, Lennart Bondesson, Erik Harsaae, Petter Laake, Katarina Juselius, and Steffen L.

Dynamical systems via Koopman operators
https://danmackinlay.name/notebook/koopmania.html
Fri, 09 Apr 2021 11:46:21 +0800https://danmackinlay.name/notebook/koopmania.htmlReferences NB: Koopman here is B.O. Koopman (Koopman 1931) not S.J. Koopman, who also works in dynamical systems.
I do not know how this works, but maybe this fragment of abstract will do for now (Budišić, Mohr, and Mezić 2012):
A majority of methods from dynamical system analysis, especially those in applied settings, rely on Poincaré’s geometric picture that focuses on “dynamics of states.”

Signatures of rough paths
https://danmackinlay.name/notebook/signature_rough_paths.html
Fri, 02 Apr 2021 08:22:53 +1100https://danmackinlay.name/notebook/signature_rough_paths.htmlReferences I am not sure yet. Some kind of encoding of signals which is somewhere between sampling theory, rough SDEs and integral transforms.
References Bonnier, Patric, Patrick Kidger, Imanol Perez Arribas, Cristopher Salvi, and Terry Lyons. 2019. “Deep Signature Transforms.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc. http://arxiv.org/abs/1905.08494. Chevyrev, Ilya, and Andrey Kormilitzin. 2016. “A Primer on the Signature Method in Machine Learning.”

Neural nets with implicit layers
https://danmackinlay.name/notebook/nn_implicit.html
Mon, 15 Mar 2021 12:16:50 +1100https://danmackinlay.name/notebook/nn_implicit.htmlReferences A unifying framework for various networks, including neural ODEs, where our layers are not simple forward operations but whose evaluation is represented as some optimisation problem.
For some info see the NeurIPS 2020 tutorial, Deep Implicit Layers - Neural ODEs, Deep Equilibrium Models, and Beyond, by Zico Kolter, David Duvenaud, and Matt Johnson.
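To make the idea concrete, here is a minimal sketch of a fixed-point ("deep equilibrium"-flavoured) layer in plain numpy, with toy weights of my own choosing: the output is defined implicitly as the solution of an equation, and found by iteration rather than by a fixed forward pass.

```python
import numpy as np

# An "implicit layer" toy: the layer output z* is defined implicitly as the
# fixed point of z = tanh(W z + U x), rather than by an explicit forward pass.
rng = np.random.default_rng(0)
n = 8
W = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)  # small norm => contraction
U = rng.standard_normal((n, n))
x = rng.standard_normal(n)

def implicit_layer(x, tol=1e-10, max_iter=500):
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + U @ x)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

z_star = implicit_layer(x)
# The defining equation holds at the solution, up to the iteration tolerance
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x))
```

Differentiating through such a layer is where the implicit function theorem earns its keep; the iteration above only handles the forward solve.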
NB: This is different to the implicit representation method. Since implicit layers and implicit representation layers also occur in the same problems (such as ML PDEs), this terminological confusion will haunt us.

Orthonormal and unitary matrices
https://danmackinlay.name/notebook/orthonormal_matrices.html
Thu, 11 Mar 2021 13:59:33 +1100https://danmackinlay.name/notebook/orthonormal_matrices.htmlParametrising Take the QR decomposition Iterative normalising Householder reflections Givens rotation Parametric sub families Structured Higher rank References In which I think about parameterisations and implementations of finite dimensional energy-preserving operators, a.k.a. matrices. A particular nook in the linear feedback process library, closely related to stability in linear dynamical systems, since every orthonormal matrix is the forward operator of an energy-preserving system, which is an edge case for certain natural types of stability.

Neural nets with basis decomposition layers
https://danmackinlay.name/notebook/nn_basis.html
Tue, 09 Mar 2021 12:06:42 +1100https://danmackinlay.name/notebook/nn_basis.htmlNeural networks with continuous basis functions Convolutional neural networks as sparse coding References Neural networks incorporating basis decompositions.
Why might you want to do this? For one, it is a different lens through which to analyze neural nets’ mysterious success. For another, it gives you interpolation for free. There are possibly other reasons - perhaps the right basis gives you better priors for understanding a partial differential equation?

Matrix measure concentration inequalities and bounds
https://danmackinlay.name/notebook/matrix_concentration.html
Mon, 08 Mar 2021 11:08:41 +1100https://danmackinlay.name/notebook/matrix_concentration.htmlMatrix Chernoff Matrix Chebychev Matrix Bernstein Matrix Efron-Stein Gaussian References Concentration inequalities for matrix-valued random variables.
Recommended overviews are J. A. Tropp (2015); van Handel (2017); Vershynin (2018).
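A quick numerical illustration of the flavour of these results (my own toy experiment, not taken from those references): the spectral norm of an average of i.i.d. centred random Hermitian matrices concentrates, shrinking roughly like \(1/\sqrt{n}\), analogously to the scalar case.

```python
import numpy as np

# Average n i.i.d. centred random Hermitian matrices and watch the spectral
# norm of the average shrink roughly like 1/sqrt(n).
rng = np.random.default_rng(1)
d = 20

def avg_norm(n, reps=20):
    norms = []
    for _ in range(reps):
        G = rng.standard_normal((n, d, d))
        S = (G + np.transpose(G, (0, 2, 1))) / 2   # i.i.d. Hermitian summands
        avg = S.mean(axis=0)
        norms.append(np.linalg.norm(avg, ord=2))   # spectral norm
    return float(np.mean(norms))

norm_small, norm_large = avg_norm(10), avg_norm(1000)
# norm_large should be roughly sqrt(100) = 10x smaller than norm_small
```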
Matrix Chernoff J. A. Tropp (2015) summarises:
In recent years, random matrices have come to play a major role in computational mathematics, but most of the classical areas of random matrix theory remain the province of experts.

Measure concentration inequalities
https://danmackinlay.name/notebook/concentration_of_measure.html
Thu, 04 Mar 2021 09:34:53 +1100https://danmackinlay.name/notebook/concentration_of_measure.htmlBackground Markov Chebychev Chernoff Hoeffding Efron-Stein Kolmogorov Gaussian Sub-Gaussian Martingale bounds Khintchine Empirical process theory Matrix concentration References A corral captures the idea of concentration of measure; we have some procedure that guarantees that most of the mass (of buffalos) is where we can handle it. Image: Kevin M Klerks, CC BY 2.0
Welcome to the probability inequality mines!
When something in your process (measurement, estimation) means that you can be pretty sure that a whole bunch of your stuff is particularly likely to be somewhere in particular.

Random fields as stochastic differential equations
https://danmackinlay.name/notebook/random_fields_as_sdes.html
Mon, 01 Mar 2021 17:08:40 +1100https://danmackinlay.name/notebook/random_fields_as_sdes.htmlCreating a stationary Markov SDE with desired covariance Convolution representations Covariance representation Input measures \(\mu\) is a hypercube \(\mu\) is the unit sphere \(\mu\) is an isotropic Gaussian Without stationarity via Green’s functions References \(\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\sinc}{\operatorname{sinc}}\)
The representation of certain random fields, especially Gaussian random fields, as stochastic differential equations. This is the engine that makes filtering Gaussian processes go, and is also a natural framing for probabilistic spectral analysis.

Stochastic partial differential equations
https://danmackinlay.name/notebook/spdes.html
Wed, 27 Jan 2021 12:42:41 +1100https://danmackinlay.name/notebook/spdes.htmlReferences Placeholder, for the multidimensional PDE version of SDEs.
This picture of ice floes on the Bering shelf looks like it might be some kinda stochastic PDE thing, right?
References Bolin, David, and Kristin Kirchner. 2020. “The Rational SPDE Approach for Gaussian Random Fields With General Smoothness.” Journal of Computational and Graphical Statistics 29 (2): 274–85. https://doi.org/10.1080/10618600.2019.1665537. Dalang, Robert C., Davar Khoshnevisan, and Firas Rassoul-Agha, eds.

Neural nets for “implicit representations”
https://danmackinlay.name/notebook/nn_implicit_rep.html
Thu, 21 Jan 2021 12:50:41 +1100https://danmackinlay.name/notebook/nn_implicit_rep.htmlReferences A cute hack for generative neural nets. Unlike other structures, here we allow the output to depend upon image coordinates, rather than some presumed-invariant latent factors. I am not quite sure what the rationale is for implicit being used as a term here. Which representations are implicit or explicit is particularly viewpoint-dependent.
NB this is different to the “implicit layers” trick, which allows an optimisation problem to be implicitly solved in a neural net.

Statistical mechanics of statistics
https://danmackinlay.name/notebook/statistical_mechanics_of_statistics.html
Wed, 06 Jan 2021 12:46:59 +1100https://danmackinlay.name/notebook/statistical_mechanics_of_statistics.htmlPhase transitions in statistical inference Replicator equations and evolutionary processes References Boaz Barak has a miniature dictionary for statisticians:
I’ve always been curious about the statistical physics approach to problems from computer science. The physics-inspired algorithm survey propagation is the current champion for random 3SAT instances, statistical-physics phase transitions have been suggested as explaining computational difficulty, and statistical physics has even been invoked to explain why deep learning algorithms seem to often converge to useful local minima.

Nonparametrically learning dynamical systems
https://danmackinlay.name/notebook/nn_learning_dynamics.html
Tue, 08 Dec 2020 13:05:58 +1100https://danmackinlay.name/notebook/nn_learning_dynamics.htmlQuestions Tools References Learning stochastic differential equations. Related: Analysing a neural net itself as a dynamical system, which is not quite the same but crosses over. Variational state filters.
A deterministic version of this problem is what e.g. the famous Vector Institute Neural ODE paper (Chen et al. 2018) did. Co-author Duvenaud argues that in some ways the hype ran away with the Neural ODE paper, and credits CasADI with the innovations here.

Probabilistic spectral analysis
https://danmackinlay.name/notebook/probabilistic_spectral_analysis.html
Wed, 25 Nov 2020 11:33:34 +1100https://danmackinlay.name/notebook/probabilistic_spectral_analysis.htmlClassic: stochastic processes studied via correlation function Non-stationary spectral kernel Change point detection version Non-Gaussian approaches References Graphical introduction to nonstationary modelling of audio data. The input (bottom) is a sound recording of female speech. We seek to decompose the signal into Gaussian process carrier waveforms (blue block) multiplied by a spectrogram (green block). The spectrogram is learned from the data as a nonnegative matrix of weights times positive modulators (top).

Hidden Markov Model inference for Gaussian Process regression
https://danmackinlay.name/notebook/gp_filtering.html
Wed, 25 Nov 2020 11:28:43 +1100https://danmackinlay.name/notebook/gp_filtering.htmlSpatio-temporal usage Miscellaneous notes towards implementation References Classic flavours together, Gaussian processes and state filters/ stochastic differential equations and random fields as stochastic differential equations.
I am interested here in the trick which makes certain Gaussian process regression problems soluble by making them local, i.e. Markov, with respect to some assumed hidden state, in the same way Kalman filtering does Wiener filtering. This means you get to solve a GP as an SDE.

Observability and sensitivity in learning dynamical systems
https://danmackinlay.name/notebook/sensitivity.html
Mon, 09 Nov 2020 13:38:40 +1100https://danmackinlay.name/notebook/sensitivity.htmlReferences The contact between ergodic theorems and statistical identifiability. How precisely can I learn a given parameter of a dynamical system from observation? In ODE theory a useful concept is sensitivity analysis, which tells us how much gradient information our observations give us about a parameter. This comes in local (at my current estimate) and global (for all parameter ranges) flavours.
In linear systems theory the term observability is used to discuss whether we can in fact identify a parameter or a latent state, which I will conflate for the current purposes.

Itō-Taylor expansion
https://danmackinlay.name/notebook/stochastic_taylor_expansion.html
Thu, 15 Oct 2020 13:38:07 +1100https://danmackinlay.name/notebook/stochastic_taylor_expansion.htmlReferences Placeholder, for discussing the Taylor expansion equivalent for an SDE.
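The lowest-order truncation of this expansion gives the Euler-Maruyama scheme. A minimal sketch for a toy SDE of my own choosing (Ornstein-Uhlenbeck, with drift \(a(x) = -\theta x\) and diffusion \(b(x) = \sigma\)), checked against the known stationary variance:

```python
import numpy as np

# Euler-Maruyama: the lowest-order truncation of the Ito-Taylor expansion.
# Toy SDE: Ornstein-Uhlenbeck, dX = -theta * X dt + sigma dB, whose stationary
# variance sigma^2 / (2 * theta) = 0.125 we can check by simulation.
rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5
T, n_steps, n_paths = 10.0, 2_000, 2_000
dt = T / n_steps

X = np.full(n_paths, 2.0)                             # all paths start at 2.0
for _ in range(n_steps):
    dB = np.sqrt(dt) * rng.standard_normal(n_paths)   # Brownian increments
    X = X + (-theta * X) * dt + sigma * dB            # one Euler-Maruyama step

var_T = X.var()   # should be near sigma^2 / (2 * theta) = 0.125
```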
Let \(f\) denote a smooth function. Then from Itō’s lemma, \[ f\left(X_{t}\right)=f\left(X_{0}\right)+\int_{s=0}^{t} L^{0} f\left(X_{s}\right) d s+\int_{s=0}^{t} L^{1} f\left(X_{s}\right) d B_{s} \] where the operators \(L^{0}\) and \(L^{1}\) are defined by \[ L^{0}=a(x) \frac{\partial}{\partial x}+\frac{1}{2} b(x)^{2} \frac{\partial^{2}}{\partial x^{2}} \quad \text { and } \quad L^{1}=b(x) \frac{\partial}{\partial x} \] We may repeat this procedure arbitrarily many times.

Filter design, linear
https://danmackinlay.name/notebook/filter_design_linear.html
Fri, 18 Sep 2020 10:15:52 +1000https://danmackinlay.name/notebook/filter_design_linear.htmlRelationship of discrete LTI to continuous time filters Quick and dirty digital filter design State-Variable Filters Time-varying IIR filters References Linear Time-Invariant (LTI) filter design is a field of signal processing, and a special case of state filtering that doesn’t necessarily involve a hidden state.
z-Transforms, bilinear transforms, Bode plots, design etc.
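As a worked example of the quick-and-dirty workflow (my own derivation of the standard textbook recipe): take an analog one-pole lowpass prototype \(H(s) = 1/(1 + s/\omega_c)\), pre-warp the cutoff, and apply the bilinear transform \(s = 2 f_s (1 - z^{-1})/(1 + z^{-1})\) to get digital coefficients.

```python
import math

# One-pole digital lowpass via the bilinear transform.
fs = 48_000.0        # sample rate, Hz
fc = 1_000.0         # desired cutoff, Hz
wc = 2 * fs * math.tan(math.pi * fc / fs)   # pre-warped analog cutoff

# Substituting s = 2*fs*(1 - z^-1)/(1 + z^-1) into wc / (wc + s) gives the
# difference equation y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1].
k = wc / (2 * fs)
b0 = k / (1 + k)
b1 = k / (1 + k)
a1 = (k - 1) / (1 + k)

def lowpass(x):
    y, y1, x1 = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 - a1 * y1
        y.append(yn)
        x1, y1 = xn, yn
    return y
```

Sanity checks: DC gain is \((b_0+b_1)/(1+a_1) = 1\), and the zero at \(z=-1\) kills the Nyquist frequency entirely.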
I am going to consider this in discrete time (i.e. for digital implementation) unless otherwise stated, because I’m implementing this in software, not with capacitors or whatever.

Online learning
https://danmackinlay.name/notebook/online_learning.html
Wed, 26 Aug 2020 16:48:40 +1000https://danmackinlay.name/notebook/online_learning.htmlMirror descent Follow-the-regularized leader Parameter-free Covariance References An online learning perspective gives bounds on the regret: the gap in performance between online estimation and the optimal estimator with access to the entire data.
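A minimal sketch of the setup (generic online gradient descent on squared loss, a toy of my own construction, not any particular paper's method), with regret measured against the best fixed parameter in hindsight:

```python
import numpy as np

# Online gradient descent on squared loss. Regret = cumulative loss of the
# online learner minus cumulative loss of the best fixed theta in hindsight.
rng = np.random.default_rng(0)
T, d = 2_000, 3
theta_true = np.array([1.0, -2.0, 0.5])
theta = np.zeros(d)

losses, rows, targets = [], [], []
for t in range(1, T + 1):
    x = rng.standard_normal(d)
    y = x @ theta_true + 0.1 * rng.standard_normal()
    pred = theta @ x
    losses.append((pred - y) ** 2)
    theta = theta - (0.1 / np.sqrt(t)) * 2 * (pred - y) * x  # step ~ 1/sqrt(t)
    rows.append(x)
    targets.append(y)

A, b = np.array(rows), np.array(targets)
best = np.linalg.lstsq(A, b, rcond=None)[0]   # best fixed choice in hindsight
regret = np.sum(losses) - np.sum((A @ best - b) ** 2)
avg_regret = regret / T   # shrinks roughly like 1/sqrt(T) for this scheme
```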
A lot of things are sort-of online learning; stochastic gradient descent, for example, is closely related. However, if you meet someone who claims to study “online learning” they usually mean to emphasise particular things.

Stochastic signal sampling
https://danmackinlay.name/notebook/signal_sampling_stochastic.html
Thu, 11 Jun 2020 06:45:08 +1000https://danmackinlay.name/notebook/signal_sampling_stochastic.htmlReferences Signal sampling is the study of approximating continuous signals with discrete ones and vice versa. What if the signal you are trying to recover is random, but you have a model for that randomness, and can thus assign likelihoods (posterior probabilities even) to some sample paths? Now you are sampling a stochastic process.
This is a particular take on a classic inverse problem that arises in many areas, framed the way electrical engineers frame it.

Functional regression
https://danmackinlay.name/notebook/functional_data.html
Thu, 28 May 2020 11:17:20 +1000https://danmackinlay.name/notebook/functional_data.htmlRegression using curves Functional autoregression References Statistics where the samples are not just data but whole curves and manifolds, or subsamples from them. Function approximation meets statistics.
Regression using curves To quote Jim Ramsay:
Functional data analysis, […] is about the analysis of information on curves or functions. For example, these twenty traces of the writing of “fda” are curves in two ways: first, as static traces on the page that you see after the writing is finished, and second, as two sets of functions of time, one for the horizontal “X” coordinate, and the other for the vertical “Y” coordinate.

Voice fakes
https://danmackinlay.name/notebook/voice_fakes.html
Wed, 27 May 2020 20:42:58 +1000https://danmackinlay.name/notebook/voice_fakes.htmlStyle transfer Text to speech References A placeholder. Generating speech, without a speaker, or possibly style transferring speech.
Style transfer You have a recording of me saying something self-incriminating. You would prefer it to be a recording of Hillary Clinton saying something incriminating. This is achievable.
There has been a tendency for the open source ones to be fairly mediocre while the pay-to-play options leave provocative demos about but do not let you use them.

Malliavin calculus
https://danmackinlay.name/notebook/malliavin_calculus.html
Mon, 25 May 2020 08:15:48 +1000https://danmackinlay.name/notebook/malliavin_calculus.htmlReferences This is actually the Northern Lights in 1883, but let us pretend it is something to do with Malliavin calculus
You can calculate a derivative of densities for stochastic processes in some generalised sense which I do not at present understand, and do the normal calculus things you do with a derivative. Stochastic differential equations arise, presumably ones in some sense involving this generalised derivative, which can then solve some kinds of problems for you.

Lévy stochastic differential equations
https://danmackinlay.name/notebook/levy_sdes.html
Sat, 23 May 2020 18:19:58 +1000https://danmackinlay.name/notebook/levy_sdes.htmlReferences Stochastic differential equations driven by Lévy noise are not so tidy as Itō diffusions (although they are still somewhat tidy), so they are frequently brushed aside in stochastic calculus texts. But I need ’em! There is a developed sampling theory for these creatures called sparse stochastic process theory.
Possibly chaos expansions might also be a useful tool for modelling these, and/or Malliavin calculus, whatever that is.

Stochastic differential equations
https://danmackinlay.name/notebook/stochastic_differential_equations.html
Mon, 18 May 2020 12:23:18 +1000https://danmackinlay.name/notebook/stochastic_differential_equations.htmlReferences Placeholder.
SDEs are time-indexed, causal stochastic processes which notionally integrate an ordinary differential equation over some driving noise. As seen in state filters, optimal control, financial mathematics etc.
Terminology problem: when people talk about these they often mean, conceptually, stochastic integral equations, in the sense that the driving noise process is an integrator. When you differentiate the noise process, it leads, AFAICT, to Malliavin calculus.

Limit Theorems
https://danmackinlay.name/notebook/limit_theorems.html
Wed, 06 May 2020 17:18:13 +1000https://danmackinlay.name/notebook/limit_theorems.htmlReferences Many things are similar in the eventual limit.
We use asymptotic approximations all the time in statistics. Often we do so implicitly, through a hypothesis test or an information penalty. We use the delta method to motivate robust statistics, or infinite neural networks.
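The plainest of these asymptotic approximations, as a toy simulation of my own: standardised means of skewed i.i.d. draws drift towards a standard normal, with the skewness dying off like \(1/\sqrt{n}\).

```python
import numpy as np

# CLT check: standardised means of exponential draws (mean 1, variance 1,
# skewness 2) approach N(0, 1) as n grows.
rng = np.random.default_rng(0)

def standardised_means(n, reps=10_000):
    x = rng.exponential(scale=1.0, size=(reps, n))
    return (x.mean(axis=1) - 1.0) * np.sqrt(n)

z = standardised_means(400)
mean, var, third = z.mean(), z.var(), (z ** 3).mean()
# mean ~ 0 and var ~ 1, while the residual third moment is roughly the
# parent skewness over sqrt(n): 2 / sqrt(400) = 0.1
```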
There is much to be said on the various central limit theorems, but I will not be the one to say it right this minute, because this is a placeholder for a massive field.

Deep learning as a dynamical system
https://danmackinlay.name/notebook/nn_dynamical.html
Thu, 02 Apr 2020 17:26:01 +1100https://danmackinlay.name/notebook/nn_dynamical.htmlConvnets/Resnets as discrete PDE approximations References Image: Donnie Darko
A recurring movement within neural network learning research which tries to render the learning of prediction functions tractable by considering them as dynamical systems, and using the theory of stability in the context of Hamiltonians, optimal control and/or ODE solvers, to make it all work.
I’ve been interested in this since seeing the (Haber and Ruthotto 2018) paper, but it’s got a kick from T.

Nonparametrically learning spatiotemporal systems
https://danmackinlay.name/notebook/nn_spatiotemporal.html
Thu, 02 Apr 2020 17:26:01 +1100https://danmackinlay.name/notebook/nn_spatiotemporal.htmlReferences On learning stochastic partial differential equations and other processes using neural networks, Gaussian processes and other differentiable techniques. Uses the tools of dynamical NNs and their ilk. Probably handy for machine learning physics.
I know little about this yet. But here are some links
References Arridge, Simon, Peter Maass, Ozan Öktem, and Carola-Bibiane Schönlieb. 2019. “Solving Inverse Problems Using Data-Driven Models.” Acta Numerica 28 (May): 1–174.

Effective sample size
https://danmackinlay.name/notebook/effective_sample_size.html
Tue, 03 Mar 2020 12:14:58 +1100https://danmackinlay.name/notebook/effective_sample_size.htmlStatistics Monte Carlo estimation References We have an estimator \(\hat{\theta}\) of some statistic which is, for the sake of argument, presumed to be the mean calculated from observations of some stochastic process \(\mathsf{v}\). Under certain assumptions we can use central limit theorems to find that the variance of our estimator calculated from \(N\) i.i.d. samples is given by \(\operatorname{Var}(\hat{\theta})\propto 1/N.\) Effective Sample Size (ESS) gives us a different \(N\), \(N_{\text{Eff}}\), such that \(\operatorname{Var}(\hat{\theta})\propto 1/N_{\text{Eff}}.\)

Cepstral transforms and harmonic identification
https://danmackinlay.name/notebook/cepstrum.html
Thu, 13 Feb 2020 19:19:46 +1100https://danmackinlay.name/notebook/cepstrum.htmlReferences See also machine listening, system identification.
The cepstrum of a time series represents the power spectrum using a log link function. I haven’t actually read the foundational literature here (e.g. Bogert, Healy, and Tukey 1963), merely used some algorithms; but it seems to be mostly a hack for rapid identification of correlation lags where said lags are long.
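That hack is cheap to demonstrate. The real cepstrum is the inverse FFT of the log magnitude spectrum; an echo at lag \(L\) shows up as a sharp peak at quefrency \(L\) (a toy example of my own construction):

```python
import numpy as np

# Echo detection via the real cepstrum: an echo at lag L puts a sharp peak
# at quefrency L, whatever the broadband content of the signal.
rng = np.random.default_rng(0)
n, lag = 4096, 200
x = rng.standard_normal(n)
x[lag:] += 0.5 * x[:-lag]                    # add an echo at 200 samples

spectrum = np.abs(np.fft.rfft(x))
cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
peak = np.argmax(cepstrum[50:n // 2]) + 50   # skip the low-quefrency ramp
# peak lands at (or next to) the echo lag of 200 samples
```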
For a generalized modern version, see Proietti and Luati (2019).

Potential theory in probability
https://danmackinlay.name/notebook/potential_theory_probability.html
Wed, 12 Feb 2020 09:57:03 +1100https://danmackinlay.name/notebook/potential_theory_probability.htmlReferences Placeholder. I am unfamiliar with potential theory as a thing in itself. I keep running into it, in Markov stochastic processes and in graphical models, and would like to know that I understand the tools I am using properly. At least some of the results seem to be terminological updates of words I know already, others perhaps not.
References Doyle, Peter G, and J Laurie Snell.

Infinitesimal generators
https://danmackinlay.name/notebook/infinitesimal_generators.html
Wed, 05 Feb 2020 09:33:10 +1100https://danmackinlay.name/notebook/infinitesimal_generators.htmlReferences At first I found it hard to visualise infinitesimal generators but perhaps this simple diagram will help
\[\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\bf}[1]{\mathbf{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}{\mathbb{I}} \renewcommand{\Ex}{\mathbb{E}} \renewcommand{\Pr}{\mathbb{P}}\]
This note exists because no one satisfactorily explained to me why I should care about infinitesimal generators. These mysterious creatures pop up in the study of certain continuous time Markov processes, such as stochastic differential equations driven by Lévy noise.

Divisibility, decomposability, stability
https://danmackinlay.name/notebook/divisible_distributions.html
Tue, 28 Jan 2020 12:48:19 +1100https://danmackinlay.name/notebook/divisible_distributions.htmlInfinitely divisible Decomposable Self-decomposable Stable Induced processes References 🏗 all of these are about sums; but presumably we can construct this over other algebraic structures of distributions, e.g. max-stable processes.
For now, some handy definition disambiguation.
Infinitely divisible The Lévy process quality.
A probability distribution is infinitely divisible if it can be expressed as the probability distribution of the sum of any arbitrary natural number of independent and identically distributed random variables.

Non-uniform signal sampling
https://danmackinlay.name/notebook/signal_sampling_nonuniform.html
Tue, 03 Dec 2019 08:18:29 +1100https://danmackinlay.name/notebook/signal_sampling_nonuniform.htmlReferences Signal sampling without a uniform grid and thus a simple Nyquist Theorem. It turns out that this generalisation is not necessarily fatal for the theory.
Reviews in a functional analysis setting are given in (Piroddi and Petrou 2004; Babu and Stoica 2010; Unser 2000; Adcock et al. 2014; Adcock and Hansen 2016).
This problem AFAICT becomes much easier if one can use priors to provide a theoretically tractable model of the nonuniformly sampled signal.

Convergence of random variables
https://danmackinlay.name/notebook/convergence_of_random_variables.html
Tue, 03 Dec 2019 08:06:01 +1100https://danmackinlay.name/notebook/convergence_of_random_variables.htmlPlaceholder. When does one random variable approach another? There are many ways we could define this concept and various interesting relationships between these ways.
Djalil Chafaï, About convergence of random variables.
To discuss: metrics versus types of convergence. The magical world of a.s. convergence, which has no metric. The role of probability spaces.

Cherchez la martingale
https://danmackinlay.name/notebook/martingales.html
Sat, 30 Nov 2019 18:09:40 +0100https://danmackinlay.name/notebook/martingales.htmlReferences Like Markov processes, a weirdly useful class of stochastic processes. Often you can find a martingale within some stochastic process, or construct a martingale from a stochastic process and prove something nifty thereby; this idea connects and solves a bunch of tricky problems at once.
TODO: examples, maybe a CLT and something else wacky like the life table estimators of (Aalen 1978).
I am indebted to Saif Syed for setting my head straight about the utility of martingales, and Kevin Ross who, in part of Amir Dembo’s course materials, was the one whose explanation of the orthogonality interpretation of martingales finally communicated the neatness of this idea to me.

Optimal control
https://danmackinlay.name/notebook/optimal_control.html
Fri, 01 Nov 2019 12:58:54 +1100https://danmackinlay.name/notebook/optimal_control.htmlNuts and bolts Online References Nothing to see here; I don’t do optimal control. But here are some notes for when I thought I might.
Feedback Systems: An Introduction for Scientists and Engineers by Karl J. Åström and Richard M. Murray is an interesting control systems theory course from Caltech.
The online control blog post mentioned below has a summary:
Perhaps the most fundamental setting in control theory is an LDS with quadratic costs \(c_t\) and i.

Gaussian processes on lattices
https://danmackinlay.name/notebook/gp_on_lattices.html
Wed, 30 Oct 2019 13:23:08 +1100https://danmackinlay.name/notebook/gp_on_lattices.htmlReferences Gaussian Processes with a stationary kernel are faster if you are working on a grid of points. The main tricks here seem to be circulant embeddings and circulant approximations, which enable one to leverage fast Fourier transforms. This complements, perhaps, the trick of filtering Gaussian processes.
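A minimal sketch of the circulant trick (my own toy, on a periodic grid so that no embedding step is even needed): the covariance matrix of a stationary kernel on a regular periodic grid is circulant, so its eigenvalues are the FFT of its first row, and each sample costs \(O(n \log n)\).

```python
import numpy as np

# Sampling a stationary Gaussian process on a periodic grid via the FFT:
# the covariance matrix is circulant, so its eigenvalues are the FFT of
# its first row.
rng = np.random.default_rng(0)
n, ell = 256, 10.0
t = np.arange(n)
d = np.minimum(t, n - t)                 # periodic distance on the grid
c = np.exp(-0.5 * (d / ell) ** 2)        # first row of the circulant covariance
lam = np.fft.fft(c).real                 # its eigenvalues (real by symmetry)
lam = np.clip(lam, 0.0, None)            # guard tiny negative round-off

def sample():
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return np.fft.fft(np.sqrt(lam / n) * w).real   # one O(n log n) GP draw

xs = np.stack([sample() for _ in range(4000)])
emp_var = xs.var(axis=0).mean()          # should be close to c[0] = 1
```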
References Chan, G., and A. T. A. Wood. 1999. “Simulation of Stationary Gaussian Vector Fields.” Statistics and Computing 9 (4): 265–68.

Synchronisation and rhythm
https://danmackinlay.name/notebook/sync.html
Wed, 23 Oct 2019 09:31:25 +1100https://danmackinlay.name/notebook/sync.htmlMaking rhythms Neurological/Psychological basis To read Breakbeat cuts Periodicity Analysis References In the deserts of Sudan
And the gardens of Japan
From Milan to Yucatan
Every woman, every man
Hit me with your rhythm stick
Hit me, hit me
Je t'adore, ich liebe dich
Hit me, hit me, hit me
Hit me with your rhythm stick
Hit me slowly, hit me quick
Hit me, hit me, hit me

Delays and reverbs for audio processing
https://danmackinlay.name/notebook/delays.html
Tue, 22 Oct 2019 14:30:13 +1100https://danmackinlay.name/notebook/delays.htmlDesigning stable delays Designing allpass delays Designing delay lengths Delays for signal interpolations Things to try References In which I think about parameterisations and implementations of audio recurrence for use in music.
A particular nook in the linear feedback process library.
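One building block that recurs here: an orthogonal feedback (mixing) matrix preserves signal energy, \(\|Qx\| = \|x\|\), so a multichannel delay loop built around it is lossless, and scaling by \(g < 1\) gives a guaranteed decay. A numpy sketch, drawing \(Q\) via QR of a Gaussian matrix (one convenient, if arbitrary, parametrisation):

```python
import numpy as np

# Lossless mixing matrix for a multichannel feedback delay network: any
# orthogonal Q preserves energy, so the loop cannot blow up; a loop gain
# g < 1 then gives a controlled exponential decay.
rng = np.random.default_rng(0)
n = 8                                    # number of delay lines
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)
energy_in = np.sum(x ** 2)
energy_out = np.sum((Q @ x) ** 2)        # identical, up to round-off

g = 0.9                                  # loop gain < 1 => decaying tail
state = x.copy()
for _ in range(100):
    state = g * (Q @ state)              # energy shrinks by exactly g^2 per pass
```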
Designing stable delays Also, parameterising stable Multi-Input-Multi-Output (MIMO) systems in signal processing can be done by using orthogonal and unitary matrices as the transfer operator, parameterising them as stable linear systems.

State filtering parameters
https://danmackinlay.name/notebook/recursive_estimation.html
Tue, 01 Oct 2019 15:33:56 +1000https://danmackinlay.name/notebook/recursive_estimation.htmlClassic recursive estimation Iterated filtering Questions Basic Construction Awaiting filing Implementations References a.k.a. state space model calibration, recursive identification. Sometimes indistinguishable from online estimation.
State filters are cool for estimating time-varying hidden states given known fixed system parameters. How about learning those parameters of the model generating your states? Classic ways that you can do this in dynamical systems include basic linear system identification, and general system identification.

Correlograms
https://danmackinlay.name/notebook/correlograms.html
Sun, 22 Sep 2019 13:23:31 +1000https://danmackinlay.name/notebook/correlograms.htmlReferences This material is revised and expanded from the appendix of draft versions of a recent conference submission, for my own reference. I used (deterministic) correlograms a lot in that, and it was startlingly hard to find a decent summary of their properties anywhere. Nothing new here, but… see the matrial about doing this in a probabilistic way via Wiener-Khintchine representation and covariance kernels which lead to a natural probabilistic spectral analysis.Nonparametric state filters via Gaussian Processes
https://danmackinlay.name/notebook/gp_state_filters.html
Wed, 18 Sep 2019 10:21:15 +1000https://danmackinlay.name/notebook/gp_state_filters.htmlReferences Two classic flavours together, Gaussian Processes and state filters. There are other nonparametric state filters, e.g. Variational filters and particle filters.
This is a kind of a dual to using a state filter to calculate a Gaussian process regression as a computational shorthand.
Here we use Gaussian processes to define the filter, in particular to learn nonparametric transition, observation or state densities for a generalized Kalman filter.

Signal sampling
https://danmackinlay.name/notebook/signal_sampling.html
Fri, 08 Mar 2019 12:07:33 +1100https://danmackinlay.name/notebook/signal_sampling.htmlReferences DSP is all about when you can approximate discrete systems with continuous ones and vice versa. Sampling theorems. Nyquist rates, Compressive sampling, nonuniform signal sampling, stochastic signal sampling, signatures of rough paths, etc.
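The canonical statement, as a toy numerical check of my own: a signal bandlimited well below \(f_s/2\) and sampled at rate \(f_s\) can be evaluated off-grid by sinc interpolation.

```python
import numpy as np

# Shannon reconstruction sketch: a 1 Hz sine sampled at 8 Hz (Nyquist is
# 4 Hz) is recovered off-grid via x(t) = sum_k x[k] * sinc(fs*t - k).
fs = 8.0
T = 20.0
n = int(fs * T)
k = np.arange(n)
x = np.sin(2 * np.pi * 1.0 * k / fs)         # samples of the 1 Hz sine

def reconstruct(t):
    # np.sinc is the normalised sinc, sin(pi u) / (pi u)
    return float(np.sum(x * np.sinc(fs * t - k)))

t0 = 10.3                                    # an off-grid instant
exact = np.sin(2 * np.pi * 1.0 * t0)
approx = reconstruct(t0)
```

The small residual error here comes purely from truncating the (ideally infinite) sinc sum at the ends of the window.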
There are a few ways to frame this. Traditionally we talk about Shannon sampling theorems, Nyquist rates and so on. To be frank, I haven’t actually read Shannon, because the setup is not useful for the types of problems I face in my work, although I’m sure it boils down to some similar results.

Decaying sinusoid dictionaries
https://danmackinlay.name/notebook/decaying_sinusoids.html
Mon, 07 Jan 2019 11:45:01 +1100https://danmackinlay.name/notebook/decaying_sinusoids.htmlInner products of decaying sinusoidal atoms Normalizing decaying sinusoidal atoms Normalizing decaying sinusoidal molecules To file References \(\renewcommand{\var}{\operatorname{Var}} \renewcommand{\dd}{\mathrm{d}} \renewcommand{\pd}{\partial} \renewcommand{\bb}[1]{\mathbb{#1}} \renewcommand{\vv}[1]{\boldsymbol{#1}} \renewcommand{\mm}[1]{\boldsymbol{#1}} \renewcommand{\mmm}[1]{\mathrm{#1}} \renewcommand{\cc}[1]{\mathcal{#1}} \renewcommand{\ff}[1]{\mathfrak{#1}} \renewcommand{\oo}[1]{\operatorname{#1}} \renewcommand{\gvn}{\mid} \renewcommand{\II}[1]{\mathbb{I}\{#1\}} \renewcommand{\inner}[2]{\langle #1,#2\rangle} \renewcommand{\Inner}[2]{\left\langle #1,#2\right\rangle} \renewcommand{\argmax}{\mathop{\mathrm{argmax}}} \renewcommand{\argmin}{\mathop{\mathrm{argmin}}} \renewcommand{\omp}{\mathop{\mathrm{OMP}}}\)
Notes on some calculations with decaying sinusoid atoms as a sparse dictionary basis.
Consider an \(L_2\) signal \(f: \bb{R}\to\bb{R}.\) We will overload notation and write it with free argument \(\xi\), so that \(f(r\xi-\phi),\) for example, refers to the signal \(\xi\mapsto f(r\xi-\phi).\)

Variational state filtering
https://danmackinlay.name/notebook/state_filters_variational.html
Fri, 07 Dec 2018 12:39:45 +1100https://danmackinlay.name/notebook/state_filters_variational.htmlReferences A placeholder. State filtering and estimation where the unobserved state and/or process noise are variationally-learned distributions. For now the only version that is even peripherally related to my work is the Gaussian process state filter.
References Archer, Evan, Il Memming Park, Lars Buesing, John Cunningham, and Liam Paninski. 2015. “Black Box Variational Inference for State Space Models.” November 23, 2015. http://arxiv.org/abs/1511.07367. Bayer, Justin, and Christian Osendorfer.

Sparse stochastic processes and sampling
https://danmackinlay.name/notebook/sparse_stochastic_processes.html
Mon, 29 Oct 2018 09:01:58 +1100https://danmackinlay.name/notebook/sparse_stochastic_processes.htmlReferences Sampling theory for SDEs driven by Lévy noise, which produces a nice inference theory and gives us machinery for producing priors for Bayesian sensing problems where the signal is known to be non-Gaussian. I have not got much to say about this yet. In particular I should say what “sparse” implies in this context. 🏗
Related maybe, signatures of rough paths.
References Amini, Arash, Michael Unser, and Farokh Marvasti.

Feedback system identification, linear
https://danmackinlay.name/notebook/system_identification_linear.html
Tue, 23 Oct 2018 16:51:07 +1100https://danmackinlay.name/notebook/system_identification_linear.htmlIntros Instrumental variable regression Unevenly sampled Model estimation/system identification Slotting Method of transformed coefficients State filters Online Misc Linear Predictive Coding References In system identification, we infer the parameters of a stochastic dynamical system of a certain type, i.e. usually one with feedback, so that we can e.g. simulate it, or deconvolve it to find the inputs and hidden state, maybe using state filters.

Ergodic theory / mixing
https://danmackinlay.name/notebook/ergodic_mixing.html
Tue, 23 Oct 2018 13:17:15 +1100https://danmackinlay.name/notebook/ergodic_mixing.htmlCoupling from the past Mixing zoo β-mixing ϕ-mixing Sequential Rademacher complexity References 🏗
The World’s Simplest Ergodic Theorem Von Neumann and Birkhoff’s Ergodic Theorems Relevance to actual stochastic processes and dynamical systems, especially linear and non-linear system identification.
Keywords to look up:
probability-free ergodicity Birkhoff ergodic theorem Frobenius-Perron operator Quasicompactness, correlation decay C&C CLT for Markov chains — Nagaev Not much material here, but please see learning theory for dependent data for some interesting categorisations of mixing and transcendence of miscellaneous mixing conditions for statistical estimators.

Signal processing
https://danmackinlay.name/notebook/signal_processing.html
Fri, 05 Jan 2018 22:02:13 +1100https://danmackinlay.name/notebook/signal_processing.htmlSignal processing on graphs Stochastic decomposition Sampling Resources References Signal processing is a discipline dedicated to the engineering end of stochastic process inference and prediction, especially linear time series.
There are various translation difficulties for statisticians; “Testing”=“Detection”, “Linear Filter”=“ARIMA model”, estimation of parameters is system identification, estimation of hidden states is filtering and so on.
This is a general note to mention that the field exists.