Stuff that I am currently actively reading. If you are looking at this, and you aren’t me, you should really be re-evaluating your hobbies.
See also my more aspirational paper reading list.
Currently writing
Not all published yet.
- Anthropic principles in general
- history of the edge of chaos
- You can’t talk about us without us
- Memetics (too big, will never finish)
- X is Yer than Z
- subculture dynamics
- Invasive arguments
- Movement design
- Table stakes versus tokenism
- Ethical consumption
- Opinion dynamics (memetics for beginners)
- Scientific community
- But what can I do?
- Decision rules
- interaction effects
- experimental ethics and surveillance
- Myths
- Haunting
- Something about the fungibility of hipness and cash
- Speech standards
- Black swan farming
- Where to deploy taboo
- Doing complicated things naively
- Conspiracies as simulations
- cradlesnatch calculator
- The limits of legible fairness versus metis; common property regimes
NeurIPS 2021
- Storchastic: A Framework for General Stochastic Automatic Differentiation
- Causal Inference & Machine Learning: Why now?
- Physical Reasoning and Inductive Biases for the Real World
- Real-Time Optimization for Fast and Complex Control Systems
- [2104.13478] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
Foundations
- Course Notes 7: Gaussian Process Engineering | Michael Betancourt on Patreon
- Conditional Probability Theory (For Scientists and Engineers)
- conditional_probability.pdf
- Autodiff for Implicit Functions Paper Live Stream Wed 1/12 at 11 AM EST | Michael Betancourt on Patreon
- New Autodiff Paper | Michael Betancourt on Patreon
- Rumble in the Ensemble
- Stochastic Differential Equations | Michael Betancourt on Patreon
- Identity Crisis
- Invited Talk - Michael Bronstein
- Product Placement
- (Not So) Free Samples
- Sampling Case Study Live Stream Wed 4/14 at 2 PM EDT
- Updated Geometric Optimization Paper
- We Built Sparse City
GP research
- Bayesian inference with INLA
- R-INLA Project
- Linear Models from a Gaussian Process Point of View with Stheno and JAX
- Regression-based covariance functions for nonstationary spatial modeling
- kalman-jax/sde_gp.py at master · AaltoML/kalman-jax
- Probability Theory (For Scientists and Engineers)
- Scaling multi-output Gaussian process models with exact inference
- wesselb/stheno: Gaussian process modelling in Python
Invenia’s GP expansion ideas
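Most of this section is exact GP regression in one form or another; as a reminder of the basic computation, a minimal NumPy sketch (the function names and the RBF kernel choice are mine for illustration, not taken from Stheno or kalman-jax):

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Exact GP regression: posterior mean and covariance at test inputs,
    # via a Cholesky factorisation of the noisy train-train Gram matrix.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, cov

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu, cov = gp_posterior(x, y, np.array([0.5, 1.5]))
```

The Cholesky-and-triangular-solve route is the standard O(n³) baseline that the scaling and state-space papers above are trying to beat.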
Misc
- How to write a great research paper - Microsoft Research
- AaltoML/kalman-jax: Approximate inference for Markov Gaussian processes using iterated Kalman smoothing, in JAX
- Cheng Soon Ong, Marc Peter Deisenroth | There and Back Again: A Tale of Slopes and Expectations
- David Duvenaud, J. Zico Kolter, Matt Johnson | Deep Implicit Layers: Neural ODEs, Equilibrium Models and Beyond
- Overview · ADCME
- Encoder Autonomy | Machine Thoughts
- The Notion of “Double Descent” | Mad (Data) Scientist
- Jaan on translating between variational terminology in physics and ML
- Jaan on VAE
- Papamakarios on normalizing flows
- Eric Jang on normalizing flows
- Sander on typicality
- Sander on waveform audio
- yuge shi’s ELBO gradient post is excellent
- Francis Bach, the many faces of integration by parts.
- Efficiently sampling functions from Gaussian process posteriors
- Terenin on Riemannian GPs
- https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
- Whittle likelihood
- “Sethuraman” rep of Dirichlet proc = stick breaking
- Bubeck on hot results in learning theory that take him far from the world of mirror descent. He also lectures well, IMO.
- Causality for Machine Learning
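The Sethuraman stick-breaking representation noted above is quick to sketch; a truncated sampler, assuming (my choice, for illustration) a standard normal base measure:

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    # Sethuraman representation of DP(alpha, H): break a unit stick,
    # w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha).
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    weights = betas * remaining
    atoms = rng.normal(size=n_atoms)  # atoms drawn i.i.d. from base measure H = N(0, 1)
    return weights, atoms

rng = np.random.default_rng(0)
w, a = stick_breaking(alpha=2.0, n_atoms=500, rng=rng)
# Truncated weights sum to just under 1; the leftover stick is negligible.
```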
General emulation
Spectral bizness
- QuantEcon/lecture-source-jl: Source files for “Lectures in Quantitative Economics” -- Julia version
- mschauer/Kalman.jl: Flexible filtering and smoothing in Julia
- QuantEcon.jl/kalman.jl at master · QuantEcon/QuantEcon.jl
- Whittle.pdf
- time_series - spectral_estimation.pdf
- A First Look at the Kalman Filter – Quantitative Economics with Julia
- How a Kalman filter works, in pictures | Bzarg
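For my own reference, the predict/update cycle those Kalman links all describe, as a bare NumPy sketch (matrix names follow the usual state-space convention; nothing here is lifted from Kalman.jl or QuantEcon):

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, m0, P0):
    # Linear-Gaussian state-space filtering:
    #   x_t = A x_{t-1} + w,  w ~ N(0, Q)
    #   y_t = C x_t     + v,  v ~ N(0, R)
    m, P = m0, P0
    means = []
    for y in ys:
        # Predict step: push the state estimate through the dynamics.
        m = A @ m
        P = A @ P @ A.T + Q
        # Update step: correct with the observation via the Kalman gain.
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        m = m + K @ (y - C @ m)
        P = P - K @ C @ P
        means.append(m.copy())
    return np.array(means)

rng = np.random.default_rng(1)
true_x = np.cumsum(rng.normal(size=50))                      # latent random walk
ys = (true_x + rng.normal(scale=0.5, size=50)).reshape(-1, 1)
A = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.25]])
means = kalman_filter(ys, A, C, Q, R, m0=np.zeros(1), P0=np.eye(1))
```

The Joseph-form covariance update would be the numerically safer choice; the plain `P - K C P` above is the textbook version.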
SDEs in optimisation
To explore: gradient flows.
Nguyen and Malinsky (2020)
Statistical Inference via Convex Optimization.
Conjugate functions illustrated.
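A crude way to illustrate a conjugate function is to compute the Legendre-Fenchel transform on a grid; a sketch (the grid bounds and the test function are arbitrary choices of mine):

```python
import numpy as np

def conjugate(f, xs):
    # Numerical Legendre-Fenchel conjugate: f*(y) = sup_x (x*y - f(x)),
    # with the sup taken over the grid xs.
    def f_star(y):
        return np.max(xs * y - f(xs))
    return f_star

xs = np.linspace(-10, 10, 10001)
f = lambda x: 0.5 * x ** 2
f_star = conjugate(f, xs)
# f(x) = x^2/2 is self-conjugate: f*(y) = y^2/2.
```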
Francis Bach on the use of geometric sums and a different take by Julyan Arbel.
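The geometric-sum trick is easiest to remember in its matrix (Neumann series) form; a sketch, assuming a matrix with spectral radius below 1 (the particular `A` is just an example):

```python
import numpy as np

# Neumann series: if the spectral radius of A is < 1,
# sum_k A^k converges to (I - A)^{-1}.
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
total = np.zeros_like(A)
term = np.eye(2)
for _ in range(100):
    total += term          # accumulate A^k
    term = term @ A        # advance to A^{k+1}
# total now approximates inv(I - A)
```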
Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.
References
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.