Probably actually reading/writing



Stuff that I am actively reading or otherwise working on right now. If you are looking at this and you aren’t me, you might want to re-evaluate your hobbies.

See also my more aspirational paper reading list.

Currently writing

Not all of these are published yet; expect broken links.

  1. Human superorganisms

  2. Movement design

  3. Returns on hierarchy

  4. Effective collectivism

  5. Alignment

  6. Emancipating my tribe and the cruelty of collectivism (and why I love it anyway)

  7. Institutions for angels

  8. Beliefs and rituals of tribes

  9. Where to deploy taboo

  10. The Great Society

  11. Something about the fungibility of hipness and cash

  12. What even are GFlowNets?

  13. Power, and measuring difficult things

    • Explode on Impact by Toby Lowe

      TL;DR — It is impossible for organisations to “demonstrate their impact” if they work in complex environments. Asking them to do so requires them to create a fantasy version of the story of their work. This corruption of data makes doing genuine change work harder because it is difficult to learn and adapt from corrupted data.

    • Decision Theory Remains Neglected by Robin Hanson

      I say that their motives are more political: execs and their allies gain more by using other more flexible decision making frameworks for key decisions, frameworks with more wiggle room to help them justify whatever decision happens to favor them politically. Decision theory, in contrast, threatens to more strongly recommend a particular hard-to-predict decision in each case. As execs gain when the orgs under them are more efficient, they don’t mind decision theory being used down there. But they don’t want it up at their level and above, for decisions that say if they and their allies win or lose.

  1. The Human learner series

  2. Our moral wetware

  3. Superstimuli

  4. Clickbait bandits

  5. ~~Comfort traps~~ ✅ Good enough for now

  6. Myths ✅ a few notes are enough

  7. Classification and society series

    1. Affirming the consequent, and thermodynamic tribalism.
    2. Classifications are not very informative
    3. Adversarial categorization
    4. AUC and collateral damage
    5. Bias and base rates
    6. Decision rules
  8. Shouting at each other on the internet series (Teleological liberalism)

  9. The Activist and decoupling games, and game-changing.

  10. Lived evidence deductions and/or ad hominem for discussing genetic arguments.

  11. Diffusion of responsibility — is this distinct from messenger shooting?

  12. Iterative game theory of communication styles

  13. Invasive arguments

  14. Coalition games

  15. All We Need Is Hate

  16. Speech standards

  17. Startup justice warriors/move fast and cancel things

  18. Pluralism

  19. Epistemic community design

  20. Scientific community

  21. Messenger shooting

  22. Experimental ethics and surveillance

  23. Steps to an ecology of mind

  24. DIY and the feast of fools

  25. Enclosing the intellectual commons as economic dematerialisation

  26. Academic publications as Veblen goods

  27. Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people in aggregate guess in a calibrated way. (A toy simulation of this follows after this list.)

  28. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we make gradient updates? How much compute should we spend on compressing? (A toy comparison sketch follows after this list.)
    2. What is special about science? Transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
  29. Stein variational gradient descent

  30. Edge of chaos, history of

  31. Interaction effects

  32. Ethical consumption

  33. X is Yer than Z

  34. But what can I do?

  35. Haunting and exchangeability

  36. Black swan farming

  37. Doing complicated things naively

  38. Conspiracies as simulations

  39. Something about the limits of legible fairness versus metis in common property regimes

  40. The uncanny ally

  41. Elliptical belief propagation

  42. Akrasia in stochastic processes: What time-integrated happiness should we optimise? Connection: contrastive learning

  43. Strategic ignorance

  44. Privilege accountancy

  45. Anthropic principles ✅ Good enough

  46. You can’t talk about us without us ❌ what did I even mean

  47. Subculture dynamics ✅ Good enough

  48. Opinion dynamics (memetics for beginners) ✅ Good enough

  49. Table stakes versus tokenism

  50. Iterative game theory under bounded rationality ❌ too general

  51. Memetics ❌ (too big, will never finish)

  52. Cradlesnatch calculator ✅ Good enough
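
A minimal sketch of the “calibrated in aggregate” idea from the ensemble-strategies item above. The setup is entirely made up (agent count, noise level and signal model are illustrative assumptions of mine): each agent sees a noisy private signal about a binary event and reports a probability, so individuals are roughly calibrated but noisy, while the pooled forecast does better.

```python
# Toy sketch: individual guessers vs the pooled ("societal") forecast.
# All numbers here are illustrative assumptions, not results from anywhere.
import numpy as np

rng = np.random.default_rng(42)
n_events, n_agents = 2000, 50

# Each event has a latent base probability; the event resolves to 0 or 1.
p_true = rng.uniform(0.05, 0.95, size=n_events)
outcome = rng.binomial(1, p_true)

# Each agent observes the true log-odds plus idiosyncratic noise, then reports
# the implied probability, so agents are roughly calibrated but individually noisy.
logit = np.log(p_true / (1 - p_true))
noise = rng.normal(0.0, 1.5, size=(n_agents, n_events))
agent_probs = 1 / (1 + np.exp(-(logit + noise)))

def brier(forecast, outcomes):
    """Mean squared error of a probabilistic forecast (lower is better)."""
    return np.mean((forecast - outcomes) ** 2)

individual = np.mean([brier(agent_probs[i], outcome) for i in range(n_agents)])
pooled = brier(agent_probs.mean(axis=0), outcome)

print(f"average individual Brier score: {individual:.3f}")
print(f"pooled forecast Brier score:    {pooled:.3f}")
```

Because squared-error (Brier) loss is convex in the forecast, the pooled forecast can never score worse than the average individual; the simulation just makes the gap visible.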
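And a toy handle on the exemplars-versus-gradient-updates question (item 28 above), again on a made-up dataset of my own: 1-nearest-neighbour keeps every exemplar and pays at query time, while logistic regression fitted by plain gradient descent spends compute up front to compress the data into a handful of parameters.

```python
# Toy contrast for "store exemplars on disk" vs "spend compute on gradient updates".
# Dataset, model and sizes are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 2000, 500, 20

# Linearly-separable-ish binary problem with a little label noise.
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = (X_train @ w_true + 0.5 * rng.normal(size=n_train) > 0).astype(float)
X_test = rng.normal(size=(n_test, d))
y_test = (X_test @ w_true + 0.5 * rng.normal(size=n_test) > 0).astype(float)

# Option A: "information" — keep every exemplar, classify by nearest neighbour.
# Memory grows with the dataset; almost no training compute; lookup cost at query time.
def one_nn_predict(X):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed for all pairs at once
    d2 = (X**2).sum(1)[:, None] + (X_train**2).sum(1)[None, :] - 2 * X @ X_train.T
    return y_train[d2.argmin(axis=1)]

# Option B: "learning" — compress the data into d parameters with gradient updates.
# Fixed memory; compute is spent up front on fitting.
w = np.zeros(d)
for _ in range(200):                      # plain batch gradient descent
    p = 1 / (1 + np.exp(-(X_train @ w)))  # logistic regression probabilities
    w -= 0.1 * X_train.T @ (p - y_train) / n_train

acc_nn = np.mean(one_nn_predict(X_test) == y_test)
acc_lr = np.mean(((X_test @ w) > 0).astype(float) == y_test)
print(f"1-NN, stores {n_train} exemplars:      accuracy {acc_nn:.2f}")
print(f"logistic regression, {d} parameters: accuracy {acc_lr:.2f}")
```

Neither option is “correct”; the sketch just makes the memory/compute trade-off concrete.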

NeurIPS 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are defined even for discrete variables (a toy estimator sketch follows after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous function-valued inputs (a minimal forward-noising sketch follows after this list)
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022) — LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivity.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems — the former is in fact easier. Or is it? Can we simply attain forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
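
For the Arya et al. item above: a minimal illustration of why gradients of expectations survive discreteness. This is not their construction (they build smoothed stochastic derivatives); it is the classic score-function (REINFORCE) estimator with an arbitrary toy objective, just to show a derivative being estimated through a Bernoulli variable where there is no pathwise gradient to take.

```python
# Estimate d/dp E[f(X)] for X ~ Bernoulli(p) with the score-function trick:
#   d/dp E[f(X)] = E[f(X) * d/dp log P(X; p)].
# f and p are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
f = lambda x: np.where(x == 1, 3.0, 1.0)   # f(1)=3, f(0)=1, so E[f(X)] = 1 + 2p

x = rng.binomial(1, p, size=200_000)
score = x / p - (1 - x) / (1 - p)          # d/dp log P(X = x; p)
grad_estimate = np.mean(f(x) * score)

print(f"score-function estimate of d/dp E[f(X)]: {grad_estimate:.3f}")
print("exact value:                             2.000")  # d/dp (1 + 2p) = 2
```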
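For the Phillips et al. item: a sketch of the basic move of noising a function in a spectral basis rather than on a fixed grid, so a corrupted sample can still be evaluated at any resolution. The basis, noise schedule and example function are placeholder choices of mine, and the learned reverse process is omitted entirely.

```python
# Forward diffusion applied to the coefficients of a function in a sine basis.
# Basis size, noise schedule and the example function are illustrative only.
import numpy as np

n_coeff = 32
k = np.arange(1, n_coeff + 1)

def to_coeffs(f, n_grid=512):
    # project f on [0, 1] onto the sin(k*pi*x) basis by simple quadrature
    x = (np.arange(n_grid) + 0.5) / n_grid
    basis = np.sin(np.pi * np.outer(k, x))           # shape (n_coeff, n_grid)
    return 2 * basis @ f(x) / n_grid

def evaluate(coeffs, x):
    # the coefficient vector *is* a function: evaluate it anywhere on [0, 1]
    return np.sin(np.pi * np.outer(x, k)) @ coeffs

rng = np.random.default_rng(7)
f0 = lambda x: np.sign(x - 0.5) * np.minimum(1.0, 4 * np.abs(x - 0.5))  # example input

c0 = to_coeffs(f0)
alpha_bar = np.linspace(1.0, 0.01, 6)                # crude variance-preserving schedule
for t, ab in enumerate(alpha_bar):
    # forward noising, coefficient-wise: q(c_t | c_0) = N(sqrt(ab) c_0, (1 - ab) I)
    c_t = np.sqrt(ab) * c0 + np.sqrt(1 - ab) * rng.normal(size=n_coeff)
    # the noised sample is still a function, so render it at two different resolutions
    coarse = evaluate(c_t, np.linspace(0, 1, 17))
    fine = evaluate(c_t, np.linspace(0, 1, 1001))
    print(f"t={t} alpha_bar={ab:.2f}  RMS on 17 pts {np.sqrt(np.mean(coarse**2)):.2f}, "
          f"on 1001 pts {np.sqrt(np.mean(fine**2)):.2f}")
```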

Career tips and metalearning

References

Arya, Gaurav, Moritz Schauer, Frank Schäfer, and Christopher Vincent Rackauckas. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Paterne, Christopher W. Lanyon, Mauricio A. Álvarez, Engineer Bainomugisha, Michael Thomas Smith, and Richard David Wilkinson. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Philipp, Vladlen Koltun, and Nils Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Chieh-Hsin, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, and Stefano Ermon. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.
Phillips, Angus, Thomas Seror, Michael John Hutchinson, Valentin De Bortoli, Arnaud Doucet, and Emile Mathieu. 2022. “Spectral Diffusion Processes.” In.
Rudner, Tim G. J., Zonghao Chen, Yee Whye Teh, and Yarin Gal. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Jingtong, Julia Kempe, Drummond Fielding, Nikolaos Tsilivis, Miles Cranmer, and Shirley Ho. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In, 7.
Wu, Tailin, Takashi Maruyama, and Jure Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.” arXiv.
