Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you should re-evaluate your hobbies.
See also my more aspirational paper reading list.
Not all published yet.
- Billionaires? Elites? Minorities? Classes? Capitalism? Socialism? It is coordination problems all the way down.
- Anthropic principles ✅ Good enough
- You can’t talk about us without us ❌ what did I even mean?
- Subculture dynamics ✅ Good enough
- Myths ✅ a few notes is enough
- Opinion dynamics (memetics for beginners) ✅ Good enough
- Table stakes versus tokenism ✅
- Iterative game theory under bounded rationality ❌ too general
- Something about the fungibility of hipness and cash
- Pluralism ✅
- Memetics ❌ too big, will never finish
- Cradlesnatch calculator ✅ Good enough
- Lived evidence, deductions and/or ad hominem for discussing genetic arguments
- Bias and base rates
- Stein variational gradient descent
- Edge of chaos, history of
- Interaction effects
- Human superorganisms
- Invasive arguments
- Movement design
- Ethical consumption
- X is Yer than Z
- Scientific community
- But what can I do?
- Decision rules
- Experimental ethics and surveillance
- Speech standards
- Black swan farming
- Doing complicated things naively
- Conspiracies as simulations
- Something about the limits of legible fairness versus metis in common property regimes
- Emancipating my tribe and the cruelty of collectivism (and why I love it anyway)
- Institutions for angels
- Lived experience in hypothesis testing
- Beliefs and rituals of tribes, optimisation thereof for our moral wetware
- Iterative game theory of communication styles
- The uncanny ally
- Adversarial categorization
- Messenger shooting
- Startup justice warriors/move fast and cancel things
- Elliptical belief propagation
- Akrasia in stochastic Hilbert space: What time-integrated happiness should we optimise?
- “The problem with Bernoulli regression is that binary outcomes just aren’t very informative,” one of my colleagues said to me in the context of a regression problem. Now I have decided that there is some meat on this bone. TODO: revisit the informativeness of categories about their covariates for the post-ImageNet era, from a classic vector quantisation perspective. Then: deep learning classifiers as a model for legibility.
- Where to deploy taboo
- Strategic ignorance
- Privilege accountancy
- What is special about science? Transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
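A back-of-envelope sketch of the Bernoulli-regression complaint above (my own toy framing, not anyone’s published result): the Shannon entropy of the label distribution bounds how much a single observation can reveal about its covariates, and a binary label caps out at one bit, while a finer vector quantisation of the covariate space carries more.

```python
import math

def categorical_entropy(probs):
    """Shannon entropy (bits) of a categorical outcome: an upper bound
    on how much one observation can tell us about its covariates."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A Bernoulli outcome carries at most 1 bit per observation...
binary = categorical_entropy([0.5, 0.5])      # 1.0 bit
# ...while a uniform K-way label (a finer quantisation of the
# covariate space) can carry up to log2(K) bits.
k = 256
k_way = categorical_entropy([1 / k] * k)      # 8.0 bits

print(binary, k_way)
```

In this framing, the post-ImageNet move from binary to large-vocabulary classifiers is just a coarser-to-finer codebook.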
Neurips 2022 follow-ups
- Arya et al. (2022) — stochastic gradient estimators are more general than deterministic ones because they are also defined on discrete variables
- Rudner et al. (2022)
- Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function-valued inputs
- Gahungu et al. (2022)
- Wu, Maruyama, and Leskovec (2022) LE-PDE is a learnable low-rank approximation method
- Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
- Neural density estimation
- Metrics for inverse design and inverse inference problems — the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
- Noise injection in emulator learning (see refs in Su et al. (2022))
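To unpack the Arya et al. bullet above: a minimal score-function (REINFORCE) sketch, showing that a stochastic gradient of E[f(x)] is well defined even when x is discrete and a deterministic pathwise derivative does not exist. This is a generic illustration of the idea, not their estimator.

```python
import random

def score_function_grad(theta, f, n=100_000, seed=42):
    """Monte Carlo score-function (REINFORCE) estimate of
    d/dtheta E_{x ~ Bernoulli(theta)}[f(x)].

    x is discrete, so there is no pathwise derivative, but
    d/dtheta log p(x; theta) = x/theta - (1 - x)/(1 - theta)
    yields an unbiased stochastic gradient estimate.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < theta else 0
        total += f(x) * (x / theta - (1 - x) / (1 - theta))
    return total / n

# Exact gradient of E[f] is f(1) - f(0); with f(x) = 3*x + 1 that is 3.
estimate = score_function_grad(0.3, lambda x: 3 * x + 1)
print(estimate)
```

The estimator is unbiased but high-variance, which is exactly the trade-off that the fancier estimators in this literature try to improve.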
State space audio
Conferences, publication venues
- NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems
- SBM @ NeurIPS
- Causal dynamics
- The Symbiosis of Deep Learning and Differential Equations (DLDE)
- Machine Learning and the Physical Sciences, NeurIPS 2022
- AI for Science: Progress and Promises
- Machine Learning Street Talk
- Storchastic: A Framework for General Stochastic Automatic Differentiation
- Causal Inference & Machine Learning: Why now?
- Real-Time Optimization for Fast and Complex Control Systems
- [2104.13478] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
- Cheng Soon Ong, Marc Peter Deisenroth | There and Back Again: A Tale of Slopes and Expectations (“Let’s unify automatic differentiation, integration and message passing”)
- David Duvenaud, J. Zico Kolter, Matt Johnson | Deep Implicit Layers: Neural ODEs, Equilibrium Models and Beyond
- Equations of Motion from a Time Series
- Path Integrals and Feynman Diagrams for Classical Stochastic Processes
- Inference for Stochastic Differential Equations
- George Ho, Modern Computational Methods for Bayesian Inference: A Reading List is a good curation of modern Bayesian methods posts; the next links come from there.
- Will Wolf on neural methods in Simulation-Based Inference
- Will Wolf, Deriving Expectation-Maximization
- Will Wolf, Deriving Mean-Field Variational Bayes
- Reality Is Just a Game Now
- Michael Bronstein, Graph Neural Networks as gradient flows, re: [2206.10991] Graph Neural Networks as Gradient Flows: understanding graph convolutions via energy
- M Bronstein’s ICLR 2021 Keynote, Geometric Deep Learning: The Erlangen Programme of ML
- How to write a great research paper
- The Notion of “Double Descent”
- Jaan on translating between variational terminology in physics and ML
- Sander on waveform audio
- Yuge Shi’s ELBO gradient post is excellent
- Francis Bach, the many faces of integration by parts.
- Bubeck on hot results in learning theory, which take him far from the world of mirror descent, where I first met him. He also lectures well, IMO.
- Causality for Machine Learning
- Regression-based covariance functions for nonstationary spatial modeling
- kalman-jax/sde_gp.py at master · AaltoML/kalman-jax
- AaltoML/kalman-jax: Approximate inference for Markov Gaussian processes using iterated Kalman smoothing, in JAX
Invenia’s GP expansion ideas
SDEs, optimisation and gradient flows
Nguyen and Malinsky (2020)
Career tips and metalearning
Making the right moves: A Practical Guide to Scientific Management for Postdocs and New Faculty
There is a Q&A site about this, Academia stackexchange
For early career types, classic blog Thesis Whisperer
Read Academic work-life-balance survey to feel like not bothering with academe.
This is how skill stacking works. It’s easier and more effective to be in the top 10% in several different skills — your “stack” — than it is to be in the top 1% in any one skill.
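The skill-stacking claim above can be put as a toy calculation, under the (heroic) assumption that skill levels are independent:

```python
# Toy arithmetic behind skill stacking, assuming independence of skills
# (a strong assumption; real skills correlate).
top_decile = 0.10                 # chance of being top 10% in one skill
stack_of_three = top_decile ** 3  # top 10% in three skills at once
# Roughly 1 in 1000, i.e. the combination is top ~0.1%, versus
# the much harder-to-reach top 1% (0.01) in any single skill.
print(stack_of_three)
```

The independence assumption is doing a lot of work here, which is presumably why the advice says to pick skills from different domains.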
Foundations of ML
So much Michael Betancourt.
- Probability Theory (For Scientists and Engineers)
- Course Notes 7: Gaussian Process Engineering | Michael Betancourt on Patreon
- Conditional Probability Theory (For Scientists and Engineers)
- Autodiff for Implicit Functions Paper Live Stream Wed 1/12 at 11 AM EST | Michael Betancourt on Patreon
- New Autodiff Paper | Michael Betancourt on Patreon
- Rumble in the Ensemble
- Scholastic Differential Equations | Michael Betancourt on Patreon
- Identity Crisis
- Invited Talk: Michael Bronstein
- Product Placement
- (Not So) Free Samples
- Updated Geometric Optimization Paper
- We Built Sparse City