Probably actually reading/writing

March 5, 2020 — May 30, 2024

Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you might want to re-evaluate your hobbies.

1 Triage

2 Notes

I need to reclassify the bio-computing links; that section has become confusing, and there are too many good ideas in it that are not clearly distinguished from one another.

3 Currently writing

Not all of these are published yet, so expect broken links.

  1. community building

    1. collective care
    2. social calendaring
    3. psychological resilience
  2. Reality gap

  3. continual learning

  4. Is academic literary studies actually distinct from the security discipline of studying side-channel attacks?

  5. Goodhart coordination

  6. Australian political economy

  7. unconferences

  8. structural problems are hard, let’s do training programs

  9. Extraversion

  10. Is residual prediction different from adversarial prediction?

  11. Science communication for ML

  12. Human superorganisms

    1. Movement design
    2. Returns on hierarchy
    3. Effective collectivism
    4. Alignment
    5. Emancipating my tribe, the cruelty of collectivism (and why I love it anyway)
    6. Institutions for angels
    7. Institutional alignment
    8. Beliefs and rituals of tribes
    9. Where to deploy taboo
    10. The Great Society will never feel great, merely be better than the alternatives
    11. Egregores etc
    12. Player versus game
    13. Something about the fungibility of hipness and cash
    14. Monastic traditions
  13. Approximate conditioning

  14. nested sampling

  15. What even are GFlowNets?

  16. public sphere business models

  17. how to do house stuff (renovation etc)

  18. Power and inscrutability

  19. strategic ignorance

  20. What is an energy-based model? tl;dr: branding for models that handle likelihoods through a potential function which is not normalised to be a density. I do not think there is anything new about that per se (see the sketch after this list).

  21. Funny-shaped learning

    1. Causal attention
    2. graphical ML
    3. gradient message passing
    4. All inference is already variational inference
  22. Human learner series

    1. Our moral wetware
    2. Something about universal grammar and its learnable local approximations, versus universal ethics and its learnable local approximations. Morality by template, computational difficulty of moral identification. Leading by example, of necessity.
    3. Burkean conservatism is about unpacking when moral training data is out-of-distribution
    4. Morality under uncertainty and computational constraint
    5. Superstimuli
    6. Clickbait bandits
    7. correlation construction
    8. Moral explainability
    9. righting and wronging
    10. Akrasia in stochastic processes: What time-integrated happiness should we optimise?
    11. Comfort traps ✅ Good enough for now
    12. Myths ✅ a few notes is enough
  23. Classification and society series

    1. Affirming the consequent and evaporative tribalism.
    2. Classifications are not very informative
    3. Adversarial categorization
    4. AUC and collateral damage
    5. bias and base rates
    6. Decision theory
    7. decision theory and prejudice
  24. Shouting at each other on the internet series (Teleological liberalism)

    1. Modern politics seems to be excellent at reducing the vast spectrum of policy space to two mediocre choices, then arguing about which one is worse. What is this tendency called?
    2. The Activist and decoupling games, and game-changing
    3. on being a good weak learner
    4. lived evidence deductions and/or ad hominem for discussing genetic arguments.
    5. diffusion of responsibility — is this distinct from messenger shooting?
    6. Iterative game theory of communication styles
    7. Invasive arguments
    8. Coalition games
    9. All We Need Is Hate
    10. Speech standards
    11. Player versus game
    12. Startup justice warriors/move fast and cancel things
    13. Pluralism
  25. Learning in context

    1. Interaction effects are what we want
    2. Interpolation is what we want
    3. Optimal conditioning is what we want
    4. Correlation construction is easier than causation learning
  26. Epistemic community design

    1. Scientific community
    2. Messenger shooting
    3. on being a good weak learner
    4. Experimental ethics and surveillance
    5. Steps to an ecology of mind
    6. Epistemic bottlenecks probably belongs in this series too.
    7. Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people, in aggregate, guess in a calibrated way.
  27. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we do gradient updates? How much compute should we spend on compression?
    2. What is special about science? One thing is transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
  28. DIY and the feast of fools

  29. Tail risks and epistemic uncertainty

    1. Black swan farming
    2. Wicked tail risks
    3. Planning under uncertainty
  30. economic dematerialization via

    1. Enclosing the intellectual commons
    2. creative economy jobs
  31. Academic publications as Veblen goods

  32. Stein variational gradient descent

  33. Edge of chaos, history of

  34. Ethical consumption

  35. X is Yer than Z

  36. But what can I do?

    1. starfish problems
  37. Haunting and exchangeability. Connections to interpolation, individuation, legibility, and nonparametrics.

  38. Doing complicated things naively

  39. Conspiracies as simulations

  40. Something about the limits of legible fairness versus metis in common property regimes

  41. The uncanny ally

  42. Elliptical belief propagation

  43. Strategic ignorance

  44. privilege accountancy

  45. anthropic principles ✅ Good enough

  46. You can’t talk about us without us ❌ what did I even mean? something about mottes and baileys?

  47. subculture dynamics ✅ Good enough

  48. Opinion dynamics (memetics for beginners) ✅ Good enough

  49. Table stakes versus tokenism

  50. Iterative game theory under bounded rationality ❌ too general

  51. Memetics ❌ (too big, will never finish)

  52. Cradlesnatch calculator ✅ Good enough

  53. Singularity lite, the orderly retreat from relevance
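
A minimal sketch for item 20 above, the energy-based-model convention written out. Nothing here is specific to any one paper; it is just the standard unnormalised-density bookkeeping.

```latex
% Energy-based model: specify a potential ("energy") E_theta directly;
% the density is only implicit, because the normaliser is intractable.
\[
  p_\theta(x) = \frac{\exp\{-E_\theta(x)\}}{Z(\theta)},
  \qquad
  Z(\theta) = \int \exp\{-E_\theta(x)\}\,\mathrm{d}x .
\]
% Training then works around Z(theta), e.g. via score matching or
% contrastive divergence, which is arguably the whole content of the
% branding.
```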

4 music stuff

5 Misc

6 Workflow optimization

7 graphical models

8 “transfer” learning

9 Custom diffusion

10 Commoncog

11 Music skills

12 Internal

13 ICML 2023 workshop

14 Neurips 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they remain defined on discrete variables (see the sketch after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022): LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems: the former is in fact easier. Or is it? Can we simply use forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
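
A minimal sketch for item 1 above, assuming nothing beyond NumPy. This is the classic score-function (REINFORCE) estimator, not the construction of Arya et al. (2022), but it illustrates the basic point: an unbiased gradient estimator can be defined through a discrete sampling step where no pathwise derivative exists.

```python
# Score-function (REINFORCE) gradient for a discrete (Bernoulli) variable.
# NOT Arya et al.'s method; just the simplest illustration that
# d/dtheta E[f(x)] is estimable even though x is discrete.
import numpy as np

rng = np.random.default_rng(0)

def grad_expected_f(theta, f, n=100_000):
    """Estimate d/dtheta E_{x ~ Bernoulli(theta)}[f(x)]."""
    x = rng.random(n) < theta              # discrete samples, no pathwise grad
    fx = f(x.astype(float))
    # Score: d/dtheta log p(x; theta) for a Bernoulli variable.
    score = np.where(x, 1.0 / theta, -1.0 / (1.0 - theta))
    return float(np.mean(fx * score))      # unbiased estimator

# E[f(x)] = theta * f(1) + (1 - theta) * f(0), so the exact gradient is
# f(1) - f(0) = 3 for f(x) = 3x; the estimate should land near 3.
print(grad_expected_f(0.3, lambda x: 3.0 * x))
```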

15 Conf, publication venues

16 Neurips 2022

17 Neurips 2021

18 Music

Nestup / cutelabnyc/nested-tuplets: Fancy JavaScript for manipulating nested tuplets.

19 Hot topics

20 Stein stuff

21 newsletter migration

22 GP research

22.1 Invenia’s GP expansion ideas

23 SDEs, optimization and gradient flows

Nguyen and Malinsky (2020)

Statistical Inference via Convex Optimization.

Conjugate functions illustrated.

Francis Bach on the use of geometric sums and a different take by Julyan Arbel.

Tutorial on approximating differentiable control problems. An extension of this is universal differential equations.
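
As a gloss on that last point: a universal differential equation, as I understand the term, just augments a mechanistic vector field with a learnable term.

```latex
\[
  \frac{\mathrm{d}u}{\mathrm{d}t} = f(u, t) + \mathrm{NN}_\theta(u, t)
\]
% where f encodes the known physics and NN_theta is a neural network
% fitted so that solutions of the ODE match observed trajectories.
```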

24 Career tips and metalearning

25 Ensembles and particle methods

26 Foundations of ML

So much Michael Betancourt.

27 nonparametrics

28 References

Arya, Schauer, Schäfer, et al. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Lanyon, Álvarez, et al. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Koltun, and Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Takida, Murata, et al. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen and Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations.”
Phillips, Seror, Hutchinson, et al. 2022. “Spectral Diffusion Processes.” In.
Rudner, Chen, Teh, et al. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Kempe, Fielding, et al. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In.
Wu, Maruyama, and Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.”