Stuff that I am currently actively reading or otherwise working on. If you are looking at this, and you aren’t me, you may need to consider re-evaluating your hobbies.
See also my more aspirational paper reading list.
Currently writing
Not all published yet.
Effective collectivism
Goodhart-Moloch-supernormal-alignment-utility
Enclosing the intellectual commons as economic dematerialisation
Academic publications as Veblen goods
Ensemble strategies at the population level. No individual needs to guess right; we need a society in which people, in aggregate, guess in a calibrated way.
Epistemic bottlenecks and bandwidth problems
- Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we take gradient updates? How much compute should we spend on compressing?
Billionaires? Elites? Minorities? Classes? Capitalism? Socialism? It is coordination problems all the way down.
- Anthropic principles ✅ Good enough
- You can’t talk about us without us ❌ what did I even mean
- Subculture dynamics ✅ Good enough
- Myths ✅ a few notes is enough
- Opinion dynamics (memetics for beginners) ✅ Good enough
- Iterative game theory under bounded rationality ❌ too general
- Memetics ❌ too big, will never finish
- Cradlesnatch calculator ✅ Good enough
- Lived evidence deductions and/or ad hominem for discussing genetic arguments
Something about the limits of legible fairness versus metis in common property regimes
Emancipating my tribe and the cruelty of collectivism (and why I love it anyway)
Beliefs and rituals of tribes, optimisation thereof for our moral wetware
The uncanny ally
Startup justice warriors/move fast and cancel things
Akrasia in stochastic Hilbert space: What time-integrated happiness should we optimise?
“The problem with Bernoulli regression is that binary outcomes just aren’t very informative,” one of my colleagues said to me in the context of a regression problem. Now I have decided that there is some meat on this bone. TODO: revisit the informativeness of categories about their covariates for the post-ImageNet era, from a classic vector quantisation perspective. Then: Deep learning classifiers as a model for legibility.
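A one-line way to make that hunch precise (my gloss, not my colleague’s claim): writing X for the covariates and Y for the binary outcome, mutual information is capped by the outcome entropy, so a single Bernoulli observation can tell us at most one bit about X, however rich X is.

```latex
I(X;Y) \le H(Y) \le \log 2 \approx 0.693\ \text{nats} = 1\ \text{bit per observation}
```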
What is special about science? Transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
ICML 2023 workshops
- Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators | ICML 2023 Workshop, Honolulu, Hawaii
- Structured Probabilistic Inference & Generative Modeling @ ICML — ICML 2023
- Duality Principles for Modern ML — ICML 2023
- Synergy of Scientific and Machine Learning Modeling — ICML 2023
NeurIPS 2022 follow-ups
- Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are defined on discrete variables (see the sketch after this list)
- Rudner et al. (2022)
- Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
- Gahungu et al. (2022)
- Wu, Maruyama, and Leskovec (2022) LE-PDE is a learnable low-rank approximation method
- Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivities.
- Neural density estimation
- Metrics for inverse design and inverse inference problems: the former is in fact easier. Or is it? Can we simply attain the forward prediction loss?
- Noise injection in emulator learning (see refs in Su et al. (2022))
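Here is a minimal sketch of the point in the Arya et al. item above: gradients of expectations over discrete random variables are still estimable. This is a plain score-function (REINFORCE) estimator in NumPy, not Arya et al.’s construction; the toy function f and the Bernoulli parameterisation are purely illustrative.

```python
# Sketch: score-function (REINFORCE) gradient estimate through a discrete variable.
# f need not be differentiable in x; only log p(x; theta) needs a derivative.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Arbitrary black-box function of the discrete outcome (illustrative).
    return 3.0 * x + 1.0

def score_function_grad(theta, n_samples=100_000):
    x = rng.binomial(1, theta, size=n_samples)     # discrete Bernoulli samples
    score = x / theta - (1 - x) / (1 - theta)      # d/dtheta log p(x; theta)
    return np.mean(f(x) * score)                   # unbiased estimate of d/dtheta E[f(x)]

print(score_function_grad(0.3))  # close to 3.0, since E[f(x)] = 3*theta + 1
```

The usual catch with this estimator is variance; the appeal of stochastic gradient estimators in general is that they remain defined where deterministic derivatives simply are not.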
State-space audio
Conferences and publication venues
NeurIPS 2022
- NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems
- SBM @ NeurIPS
- Causal dynamics
- The Symbiosis of Deep Learning and Differential Equations (DLDE)
- Machine Learning and the Physical Sciences, NeurIPS 2022
- AI for Science: Progress and Promises
- Machine Learning Street Talk
NeurIPS 2021
- Storchastic: A Framework for General Stochastic Automatic Differentiation
- Causal Inference & Machine Learning: Why now?
- Real-Time Optimization for Fast and Complex Control Systems
- [2104.13478] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
- Cheng Soon Ong, Marc Peter Deisenroth | There and Back Again: A Tale of Slopes and Expectations (“Let’s unify automatic differentiation, integration and message passing”)
- David Duvenaud, J. Zico Kolter, Matt Johnson | Deep Implicit Layers: Neural ODEs, Equilibrium Models and Beyond
Hot topics
- Equations of Motion from a Time Series
- Path Integrals and Feynman Diagrams for Classical Stochastic Processes
- Inference for Stochastic Differential Equations
- George Ho, Modern Computational Methods for Bayesian Inference: A Reading List is a good curation of modern Bayes methods posts. The next links come from there.
- will wolf on neural methods in Simulation-Based Inference
- will wolf, Deriving Expectation-Maximization
- will wolf, Deriving Mean-Field Variational Bayes
- Reality Is Just a Game Now
- Michael Bronstein, Graph Neural Networks as gradient flows, re: [2206.10991] Graph Neural Networks as Gradient Flows: understanding graph convolutions via energy
- M Bronstein’s ICLR 2021 Keynote, Geometric Deep Learning: The Erlangen Programme of ML
- How to write a great research paper
- The Notion of “Double Descent”
- Jaan on translating between variational terminology in physics and ML
- Sander on waveform audio
- yuge shi’s ELBO gradient post is excellent
- Francis Bach, the many faces of integration by parts.
- Bubeck on hot results in learning theory takes him far from the world of mirror descent, where I first met him. He also lectures well, IMO.
- Causality for Machine Learning
Stein stuff
GP research
- https://www.patreon.com/posts/new-linearized-69325387
- Regression-based covariance functions for nonstationary spatial modeling
- kalman-jax/sde_gp.py at master · AaltoML/kalman-jax
- AaltoML/kalman-jax: Approximate inference for Markov Gaussian processes using iterated Kalman smoothing, in JAX
Invenia’s GP expansion ideas
SDEs, optimisation and gradient flows
Nguyen and Malinsky (2020)
Statistical Inference via Convex Optimization.
Conjugate functions illustrated.
Francis Bach on the use of geometric sums and a different take by Julyan Arbel.
Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.
Career tips and metalearning
Making the right moves: A Practical Guide to Scientific Management for Postdocs and New Faculty
There is a Q&A site about this, Academia stackexchange
For early career types, classic blog Thesis Whisperer
Read the Academic work-life-balance survey to feel like not bothering with academe.
AI research: the unreasonably narrow path and how not to be miserable
How to Become the Best in the World at Something
This is how skill stacking works. It’s easier and more effective to be in the top 10% in several different skills — your “stack” — than it is to be in the top 1% in any one skill.
Foundations of ML
So much Michael Betancourt.
- Probability Theory (For Scientists and Engineers)
- Course Notes 7: Gaussian Process Engineering | Michael Betancourt on Patreon
- Conditional Probability Theory (For Scientists and Engineers)
- Autodiff for Implicit Functions Paper Live Stream Wed 1/12 at 11 AM EST | Michael Betancourt on Patreon
- New Autodiff Paper | Michael Betancourt on Patreon
- Rumble in the Ensemble
- Scholastic Differential Equations | Michael Betancourt on Patreon
- Identity Crisis
- Invited Talk: Michael Bronstein
- Product Placement
- (Not So) Free Samples
- Updated Geometric Optimization Paper
- We Built Sparse City