Probably actually reading/writing

March 5, 2020 — December 20, 2024

Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you may need to re-evaluate your hobbies.

1 Triage

2 Notes

I need to reclassify the bio-computing links; that section has become confusing, and there are too many nice ideas there that are not clearly distinguished.

3 Currently writing

Not all published yet.

  1. Foundation models and their world models

  2. Community building

    1. collective care
    2. social calendaring
    3. psychological resilience
  3. Reality gap

  4. continual learning

  5. Is academic literary studies actually distinct from the security discipline of studying side-channel attacks?

  6. Goodhart coordination

  7. Australian political economy

  8. unconferences

  9. structural problems are hard, let’s do training programs

  10. Extraversion

  11. Is residual prediction different from adversarial prediction?

  12. Science communication for ML

  13. Human superorganisms

    1. Movement design
    2. Returns on hierarchy
    3. Effective collectivism
    4. Alignment
    5. Emancipating my tribe, the cruelty of collectivism (and why I love it anyway)
    6. Institutions for angels
    7. Institutional alignment
    8. Beliefs and rituals of tribes
    9. Where to deploy taboo
    10. The Great Society will never feel great, merely be better than the alternatives
    11. Egregores etc
    12. Player versus game
    13. Something about the fungibility of hipness and cash
    14. Monastic traditions
  14. Approximate conditioning

  15. nested sampling

  16. What even are GFlowNets?

  17. public sphere business models

  18. how to do house stuff (renovation etc)

  19. Power and inscrutability

  20. strategic ignorance

  21. What is an energy-based model? tl;dr: branding for models that handle likelihoods through a potential function which is not normalised to be a density (see the note after this list). I do not think there is anything new about that per se.

  22. Funny-shaped learning

    1. Causal attention
    2. graphical ML
    3. gradient message passing
    4. All inference is already variational inference
  23. Human learner series

    1. Which self?

    2. Is language symbolic?

    3. Our moral wetware

    4. Is is ought

    5. Morality under uncertainty and computational constraint

    6. Superstimuli

    7. Clickbait bandits

    8. correlation construction

    9. Moral explainability

      1. Burkean conservatism is about unpacking when moral training data is out-of-distribution
      2. Something about universal grammar and its learnable local approximations, versus universal ethics and its learnable local approximations. Morality by template, computational difficulty of moral identification. Leading by example, of necessity.
    10. righting and wronging

    11. Akrasia in stochastic processes: What time-integrated happiness should we optimise?

    12. Comfort traps ✅ Good enough for now

    13. Myths ✅ a few notes is enough

  24. Classification and society series

    1. Constructivist rationalism
    2. Affirming the consequent and evaporative tribalism
    3. Classifications are not very informative
    4. Adversarial categorization
    5. AUC and collateral damage
    6. bias and base rates
    7. Decision theory
    8. decision theory and prejudice
  25. Shouting at each other on the internet series (Teleological liberalism)

    1. Modern politics seems to be excellent at reducing the vast spectrum of policy space to two mediocre choices, then arguing about which one is worse. What is this tendency called?
    2. The Activist and decoupling games, and game-changing
    3. on being a good weak learner
    4. lived evidence deductions and/or ad hominem for discussing genetic arguments
    5. diffusion of responsibility — is this distinct from messenger shooting?
    6. Iterative game theory of communication styles
    7. Invasive arguments
    8. Coalition games
    9. All We Need Is Hate
    10. Speech standards
    11. Startup justice warriors/move fast and cancel things
    12. Pluralism
  26. Learning in context

    1. Interaction effects are what we want
    2. Interpolation is what we want
    3. Optimal conditioning is what we want
    4. Correlation construction is easier than causation learning
  27. Epistemic community design

    1. Scientific community
    2. Messenger shooting
    3. on being a good weak learner
    4. Experimental ethics and surveillance
    5. Steps to an ecology of mind
    6. Epistemic bottlenecks probably belongs in this series too.
    7. Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people in aggregate guess in a calibrated way.
  28. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we do gradient updates? How much compute should we spend on compressing?
    2. What is special about science? One thing is transmissibility. Can ChatGPT do transmission, or is it 100% tacit? How does explainability relate to transmissibility?
  29. DIY and the feast of fools

  30. Tail risks and epistemic uncertainty

    1. Black swan farming
    2. Wicked tail risks
    3. Planning under uncertainty
  31. economic dematerialization via

    1. Enclosing the intellectual commons
    2. creative economy jobs
  32. Academic publications as Veblen goods

  33. Stein variational gradient descent

  34. Edge of chaos, history of

  35. Ethical consumption

  36. X is Yer than Z

  37. But what can I do?

    1. starfish problems
  38. Haunting and exchangeability. Connection to interpolation, and individuation, and to legibility, and nonparametrics.

  39. Doing complicated things naively

  40. Conspiracies as simulations

  41. Something about the limits of legible fairness versus metis in common property regimes

  42. The uncanny ally

  43. Elliptical belief propagation

  44. Strategic ignorance

  45. privilege accountancy

  46. anthropic principles ✅ Good enough

  47. You can’t talk about us without us ❌ what did I even mean? something about mottes and baileys?

  48. subculture dynamics ✅ Good enough

  49. Opinion dynamics (memetics for beginners) ✅ Good enough

  50. Table stakes versus tokenism

  51. Iterative game theory under bounded rationality ❌ too general

  52. Memetics ❌ (too big, will never finish)

  53. Cradlesnatch calculator ✅ Good enough

  54. Singularity lite, the orderly retreat from relevance
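
A note on item 21, since the tl;dr there is terse: the textbook definition of an energy-based model represents a probability density through an unnormalised potential (energy) function,

$$
p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)},
\qquad
Z(\theta) = \int \exp(-E_\theta(x)) \, \mathrm{d}x,
$$

where the normalising constant $Z(\theta)$ is typically intractable, so training and inference schemes are designed to use the energy $E_\theta$ alone.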

4 music stuff

5 Misc

6 Workflow optimization

7 graphical models

8 “transfer” learning

9 Custom diffusion

10 Commoncog

11 Music skills

12 Internal

13 ICML 2023 workshop

14 Neurips 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are also defined for discrete random variables (see the sketch after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022) — LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — physics loss via forward simulations, without the need for sensitivity analysis.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems — the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
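
Regarding item 1: Arya et al. construct their estimator differently (via couplings of the discrete program), but the basic point — that derivatives of expectations over discrete random variables are perfectly well defined — is easiest to see with the classic score-function (REINFORCE) estimator. A minimal sketch, illustrative rather than their method:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Arbitrary black-box function of a discrete (Bernoulli) sample."""
    return 3.0 * x + 1.0

def score_function_grad(p, n=200_000):
    """Monte Carlo estimate of d/dp E_{x ~ Bernoulli(p)}[f(x)]
    via the identity E[f(x) * d log P(x; p) / dp]."""
    x = (rng.random(n) < p).astype(float)   # Bernoulli(p) samples
    score = np.where(x == 1.0, 1.0 / p, -1.0 / (1.0 - p))  # d log P / dp
    return float(np.mean(f(x) * score))

# E[f(x)] = 3p + 1, so the exact derivative is 3 for every p.
print(score_function_grad(0.3))  # ~3.0, up to Monte Carlo noise
```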

15 Conf, publication venues

16 Neurips 2022

17 Neurips 2021

18 Music

Nestup / cutelabnyc/nested-tuplets: Fancy JavaScript for manipulating nested tuplets.

19 Hot topics

20 Stein stuff

21 newsletter migration

22 GP research

22.1 Invenia’s GP expansion ideas

23 SDEs, optimization and gradient flows

Nguyen and Malinsky (2020)

Statistical Inference via Convex Optimization.

Conjugate functions illustrated.

Francis Bach on the use of geometric sums and a different take by Julyan Arbel.

Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.
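
To make the universal-differential-equation idea concrete, here is a hand-rolled sketch (illustrative names, not any particular library’s API): the right-hand side of the ODE mixes a known mechanistic term with a learnable function, with a tiny fixed MLP standing in for the trained component.

```python
import numpy as np

def mlp(u, theta):
    """Tiny MLP standing in for the learned term; theta = (W1, b1, W2, b2)."""
    W1, b1, W2, b2 = theta
    return W2 @ np.tanh(W1 @ u + b1) + b2

def rhs(u, theta):
    """Universal ODE right-hand side: known physics plus learned correction."""
    return -0.5 * u + mlp(u, theta)  # known linear decay + neural term

def euler_integrate(u0, theta, dt=0.01, steps=1000):
    """Forward Euler, purely for illustration; real use wants a proper solver."""
    u = np.asarray(u0, dtype=float)
    for _ in range(steps):
        u = u + dt * rhs(u, theta)
    return u

# Random (untrained) parameters; in practice theta is fit by differentiating
# through the solver, or via adjoints, against observed trajectories.
d, h = 2, 8
rng = np.random.default_rng(1)
theta = (0.1 * rng.standard_normal((h, d)), np.zeros(h),
         0.1 * rng.standard_normal((d, h)), np.zeros(d))
print(euler_integrate(np.ones(d), theta))
```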

24 Career tips and metalearning

25 Ensembles and particle methods

26 Foundations of ML

So much Michael Betancourt.

27 nonparametrics

28 References

Arya, Schauer, Schäfer, et al. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Lanyon, Álvarez, et al. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Koltun, and Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Takida, Murata, et al. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen and Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations.”
Phillips, Seror, Hutchinson, et al. 2022. “Spectral Diffusion Processes.” In.
Rudner, Chen, Teh, et al. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Kempe, Fielding, et al. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In.
Wu, Maruyama, and Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.”