Probably actually reading/writing

March 5, 2020 — October 11, 2023


Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you might want to re-evaluate your hobbies.

1 Triage

2 Notes

I need to reclassify the bio-computing links; that section has become confusing, and there are too many nice ideas in it that are not clearly distinguished.

3 Currently writing

Not all of these are published yet; expect broken links.

  1. structural problems are hard, let’s do training programs

  2. Extraversion

  3. Is residual prediction different to adversarial prediction?

  4. Science communication for ML

  5. Human superorganisms

    1. Movement design
    2. Returns on hierarchy
    3. Effective collectivism
    4. Alignment
    5. Emancipating my tribe, the cruelty of collectivism (and why I love it anyway)
    6. Institutions for angels
    7. Institutional alignment
    8. Beliefs and rituals of tribes
    9. Where to deploy taboo
    10. The Great Society will never feel great, merely be better than the alternatives
    11. Egregores etc
    12. Player versus game
    13. Something about the fungibility of hipness and cash
  6. What even are GFlowNets?

  7. public sphere business models

  8. how to do house stuff

  9. Power and inscrutability

  10. strategic ignorance

  11. What is an energy-based model?? tl;dr: branding for models that handle likelihoods through a potential function which is not normalised to be a density. I do not think there is anything new there per se (see the sketch after this list)

  12. Funny-shape learning

    1. Causal attention
    2. graphical ML
    3. gradient message passing
    4. All inference is already variational inference
  13. Human learner series

    1. Our moral wetware
    2. Something about universal grammar and its learnable local approximations, versus universal ethics and its learnable local approximations. Morality by template, computational difficulty of moral identification. Leading by example of necessity.
    3. Burkean conservatism is about unpacking when moral training data is out-of-distribution
    4. Morality under uncertainty and computational constraint
    5. Superstimuli
    6. Clickbait bandits
    7. correlation construction
    8. Moral explainability
    9. Akrasia in stochastic processes: What time-integrated happiness should we optimise?
    10. ~~Comfort traps~~ ✅ Good enough for now
    11. Myths ✅ a few notes is enough
  14. Classification and society series

    1. Affirming the consequent and evaporative tribalism.
    2. Classifications are not very informative
    3. Adversarial categorization
    4. AUC and collateral damage
    5. bias and base rates
    6. Decision rules
    7. decision rules and bigotry
  15. Shouting at each other on the internet series (Teleological liberalism)

    1. The Activist and decoupling games, and game-changing.
    2. lived evidence deductions and/or ad hominem for discussing genetic arguments.
    3. diffusion of responsibility — is this distinct from messenger shooting?
    4. Iterative game theory of communication styles
    5. Invasive arguments
    6. Coalition games
    7. All We Need Is Hate
    8. Speech standards
    9. Player versus game
    10. Startup justice warriors/move fast and cancel things
    11. Pluralism
  16. Learning in context

    1. Interaction effects are what we want
    2. Interpolation is what we want
    3. Optimal conditioning is what we want
    4. correlation construction
  17. Epistemic community design

    1. Scientific community
    2. Messenger shooting
    3. Experimental ethics and surveillance
    4. Steps to an ecology of mind
    5. Epistemic bottlenecks is probably in this series too.
    6. Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people in aggregate guess in a calibrated way.
  18. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we make gradient updates? How much compute should we spend on compressing?
    2. What is special about science? One thing is transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
  19. DIY and the feast of fools

  20. Tail risks and epistemic uncertainty

    1. Black swan farming
    2. Wicked tail risks
    3. Planning under uncertainty
  21. Enclosing the intellectual commons as economic dematerialisation

  22. Academic publications as Veblen goods

  23. Stein variational gradient descent

  24. Edge of chaos, history of

  25. Ethical consumption

  26. X is Yer than Z

  27. But what can I do?

    1. starfish problems
  28. Haunting and exchangeability. Connection to interpolation, and individuation, and to legibility, and nonparametrics.

  29. Doing complicated things naively

  30. Conspiracies as simulations

  31. Something about the limits of legible fairness versus metis in common property regimes

  32. The uncanny ally

  33. Elliptical belief propagation

  34. Strategic ignorance

  35. privilege accountancy

  36. anthropic principles ✅ Good enough

  37. You can’t talk about us without us ❌ what did I even mean? something about mottes and baileys?

  38. subculture dynamics ✅ Good enough

  39. Opinion dynamics (memetics for beginners) ✅ Good enough

  40. Table stakes versus tokenism

  41. Iterative game theory under bounded rationality ❌ too general

  42. Memetics ❌ (too big, will never finish)

  43. Cradlesnatch calculator ✅ Good enough
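
To pin down item 11 above (energy-based models), a minimal statement of what the branding amounts to: an EBM specifies an unnormalised potential (energy) E_θ, and the density only implicitly, via a normalising constant that we typically cannot compute:

    p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)},
    \qquad
    Z(\theta) = \int \exp(-E_\theta(x)) \, \mathrm{d}x

Everything distinctive about EBMs is in dodging Z(θ) during training and sampling (score matching, contrastive divergence, MCMC); the parameterisation itself is standard.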

4 music stuff

5 Misc

6 Workflow optimisation

7 graphical models

8 “transfer” learning

9 Custom diffusion

10 Commoncog

11 Music skills

12 Internal

13 ICML 2023 workshop

14 Neurips 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they remain defined for discrete variables (see the sketch after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022) LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems; the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
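
Re item 1 above (Arya et al.): a minimal sketch of why gradients of expectations make sense for discrete variables. This is the classic score-function (REINFORCE) estimator, not Arya et al.’s stochastic-derivative construction, but it shows that d/dp E[f(X)] is perfectly well defined even when X is Bernoulli and f is not differentiable in any useful sense:

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # any function of a discrete variable; need not be differentiable
        return x ** 2 + 3.0 * x

    p, n = 0.3, 200_000
    x = rng.binomial(1, p, size=n)      # X ~ Bernoulli(p)

    # score of the Bernoulli pmf: d/dp log p(x) = x/p - (1 - x)/(1 - p)
    score = x / p - (1 - x) / (1 - p)
    grad_est = np.mean(f(x) * score)    # score-function estimate of d/dp E[f(X)]

    grad_exact = f(1) - f(0)            # exact: E[f(X)] = p f(1) + (1 - p) f(0)
    print(grad_est, grad_exact)         # agree up to Monte Carlo error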

15 Conf, publication venues

16 Neurips 2022

17 Neurips 2021

18 Music

Nestup / cutelabnyc/nested-tuplets: Fancy JavaScript for manipulating nested tuplets.

19 Hot topics

20 Stein stuff

21 newsletter migration

22 GP research

22.1 Invenia’s GP expansion ideas

23 SDEs, optimisation and gradient flows

Nguyen and Malinsky (2020)

Statistical Inference via Convex Optimization.

Conjugate functions illustrated.

Francis Bach on the use of geometric sums and a different take by Julyan Arbel.

Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.

24 Career tips and metalearning

25 Ensembles and particle methods

26 Foundations of ML

So much Michael Betancourt.

27 nonparametrics

28 References

Arya, Schauer, Schäfer, et al. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Lanyon, Álvarez, et al. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Koltun, and Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Takida, Murata, et al. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen and Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations.”
Phillips, Seror, Hutchinson, et al. 2022. “Spectral Diffusion Processes.” In.
Rudner, Chen, Teh, et al. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Kempe, Fielding, et al. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In.
Wu, Maruyama, and Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.”