Probably actually reading/writing

March 5, 2020 — May 30, 2024

Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you may want to re-evaluate your hobbies.

1 Triage

2 Notes

I need to reclassify the bio computing links; that section has become confusing, with too many nice ideas not clearly distinguished.

3 Currently writing

Not all published yet; expect broken links.

  1. continual learning.

  2. Is academic literary studies actually distinct from the security discipline of studying side-channel attacks?

  3. Goodhart coordination

  4. Australian political economy

  5. unconferences

  6. structural problems are hard, let’s do training programs

  7. Extraversion

  8. Is residual prediction different to adversarial prediction?

  9. Science communication for ML

  10. Human superorganisms

    1. Movement design
    2. Returns on hierarchy
    3. Effective collectivism
    4. Alignment
    5. Emancipating my tribe, the cruelty of collectivism (and why I love it anyway)
    6. Institutions for angels
    7. Institutional alignment
    8. Beliefs and rituals of tribes
    9. Where to deploy taboo
    10. The Great Society will never feel great, merely be better than the alternatives
    11. Egregores etc
    12. Player versus game
    13. Something about the fungibility of hipness and cash
    14. Monastic traditions
  11. Approximate conditioning

  12. nested sampling

  13. What even are GFlownets?

  14. public sphere business models

  15. how to do house stuff (renovation etc)

  16. Power and inscrutability

  17. strategic ignorance

  18. What is an energy based model?? tl;dr branding for models that handle likelihoods through a potential function which is not normalised to be a density. I do not think there is anything new there per se?

  19. Funny-shaped learning

    1. Causal attention
    2. graphical ML
    3. gradient message passing
    4. All inference is already variational inference
  20. Human learner series

    1. Our moral wetware
    2. Something about universal grammar and its learnable local approximations, versus universal ethics and its learnable local approximations. Morality by template, computational difficulty of moral identification. Leading by example of necessity.
    3. Burkean conservatism is about unpacking when moral training data is out-of-distribution
    4. Morality under uncertainty and computational constraint
    5. Superstimuli
    6. Clickbait bandits
    7. correlation construction
    8. Moral explainability
    9. righting and wronging
    10. Akrasia in stochastic processes: What time-integrated happiness should we optimise?
    11. ~~Comfort traps~~ ✅ Good enough for now
    12. Myths ✅ a few notes is enough
  21. Classification and society series

    1. Affirming the consequent and evaporative tribalism.
    2. Classifications are not very informative
    3. Adversarial categorization
    4. AUC and collateral damage
    5. bias and base rates
    6. Decision rules
    7. decision rules and bigotry
  22. Shouting at each other on the internet series (Teleological liberalism)

    1. Modern politics seems to be excellent at reducing the vast spectrum of policy space to two mediocre choices, then arguing about which one is worse. What is this tendency called?
    2. The Activist and decoupling games, and game-changing.
    3. on being a good weak learner
    4. lived evidence deductions and/or ad hominem for discussing genetic arguments.
    5. diffusion of responsibility — is this distinct from messenger shooting?
    6. Iterative game theory of communication styles
    7. Invasive arguments
    8. Coalition games
    9. All We Need Is Hate
    10. Speech standards
    11. Player versus game
    12. Startup justice warriors/move fast and cancel things
    13. Pluralism
  23. Learning in context

    1. Interaction effects are what we want
    2. Interpolation is what we want
    3. Optimal conditioning is what we want
    4. correlation construction
  24. Epistemic community design

    1. Scientific community
    2. Messenger shooting
    3. on being a good weak learner
    4. Experimental ethics and surveillance
    5. Steps to an ecology of mind
    6. Epistemic bottlenecks is probably in this series too.
    7. Ensemble strategies at the population level. I don’t need to guess right, we need a society in which people in aggregate guess in a calibrated way.
  25. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we do gradient updates? How much compute should we spend on compression?
    2. What is special about science? One thing is transmissibility. Can chatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
  26. DIY and the feast of fools

  27. Tail risks and epistemic uncertainty

    1. Black swan farming
    2. Wicked tail risks
    3. Planning under uncertainty
  28. economic dematerialisation via

    1. Enclosing the intellectual commons
    2. creative economy jobs
  29. Academic publications as Veblen goods

  30. Stein variational gradient descent

  31. Edge of chaos, history of

  32. Ethical consumption

  33. X is Yer than Z

  34. But what can I do?

    1. starfish problems
  35. Haunting and exchangeability. Connections to interpolation, individuation, legibility, and nonparametrics.

  36. Doing complicated things naively

  37. Conspiracies as simulations

  38. Something about the limits of legible fairness versus metis in common property regimes

  39. The uncanny ally

  40. Elliptical belief propagation

  41. Strategic ignorance

  42. privilege accountancy

  43. anthropic principles ✅ Good enough

  44. You can’t talk about us without us ❌ what did I even mean? something about mottes and baileys?

  45. subculture dynamics ✅ Good enough

  46. Opinion dynamics (memetics for beginners) ✅ Good enough

  47. Table stakes versus tokenism

  48. Iterative game theory under bounded rationality ❌ too general

  49. Memetics ❌ (too big, will never finish)

  50. Cradlesnatch calculator ✅ Good enough

  51. Singularity lite, the orderly retreat from relevance
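The energy-based model question above (item 18) has a one-screen answer: the model supplies an unnormalised potential, and downstream computations work with density ratios, so the normalising constant never appears. A toy sketch of that point, with a hand-picked `energy` function of my own invention rather than anything from a particular paper:

```python
import numpy as np

# An energy-based model assigns each state x an energy E(x); the
# implied density is p(x) = exp(-E(x)) / Z, but Z is never computed.

def energy(x):
    # A double-well potential, purely illustrative.
    return (x**2 - 1.0) ** 2

def unnormalised_density(x):
    return np.exp(-energy(x))

# Density *ratios* (e.g. Metropolis acceptance probabilities) need
# no normalising constant, because Z cancels:
x_current, x_proposed = 0.0, 1.0
ratio = unnormalised_density(x_proposed) / unnormalised_density(x_current)
accept_prob = min(1.0, ratio)  # ratio = e here, so we accept with probability 1
```

Which is also why "there is nothing new there per se": unnormalised potentials are exactly what MCMC has always consumed.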

4 music stuff

5 Misc

6 Workflow optimisation

7 graphical models

8 “transfer” learning

9 Custom diffusion

10 Commoncog

11 Music skills

12 Internal

13 ICML 2023 workshop

14 Neurips 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are defined on discrete vars
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022) — LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems - the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
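The claim in the first bullet, that stochastic gradients remain defined for discrete variables, is commonly realised via the score-function (REINFORCE) estimator. A minimal sketch of that estimator, as my own illustration rather than the construction Arya et al. (2022) actually use:

```python
import numpy as np

# Estimate d/dp E[f(b)] for b ~ Bernoulli(p), where f is defined only
# on {0, 1}, so pathwise differentiation through samples is unavailable.
# Score-function identity: d/dp E[f(b)] = E[f(b) * d/dp log P(b; p)],
# with d/dp log P(b; p) = b/p - (1 - b)/(1 - p).

rng = np.random.default_rng(0)

def f(b):
    return 3.0 * b  # E[f(b)] = 3p, so the true gradient is 3

p = 0.4
n = 200_000
b = (rng.random(n) < p).astype(float)   # Bernoulli(p) samples
score = b / p - (1.0 - b) / (1.0 - p)   # d/dp log P(b; p)
grad_estimate = np.mean(f(b) * score)   # converges to 3 as n grows
```

The estimator is unbiased but high-variance; control variates are the usual remedy.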

15 Conf, publication venues

16 Neurips 2022

17 Neurips 2021

18 Music

Nestup / cutelabnyc/nested-tuplets: fancy JavaScript for manipulating nested tuplets.

19 Hot topics

20 Stein stuff

21 newsletter migration

22 GP research

22.1 Invenia’s GP expansion ideas

23 SDEs, optimisation and gradient flows

Nguyen and Malinsky (2020)

Statistical Inference via Convex Optimization.

Conjugate functions illustrated.

Francis Bach on the use of geometric sums and a different take by Julyan Arbel.

Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.

24 Career tips and metalearning

25 Ensembles and particle methods

26 Foundations of ML

So much Michael Betancourt.

27 nonparametrics

28 References

Arya, Schauer, Schäfer, et al. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Lanyon, Álvarez, et al. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Koltun, and Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Takida, Murata, et al. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen and Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations.”
Phillips, Seror, Hutchinson, et al. 2022. “Spectral Diffusion Processes.” In.
Rudner, Chen, Teh, et al. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Kempe, Fielding, et al. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In.
Wu, Maruyama, and Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.”