Probably actually reading/writing



Stuff that I am currently actively reading or otherwise working on. If you are looking at this and you aren’t me, you may need to re-evaluate your hobbies.

See also my more aspirational paper reading list.

Currently writing

Not all published yet.

  1. steps to an ecology of mind

  2. Effective collectivism

  3. Goodhart-Moloch-supernormal-alignment-utility

  4. Enclosing the intellectual commons as economic dematerialisation

  5. Academic publications as Veblen goods

  6. Ensemble strategies at the population level. I don’t need to guess right; we need a society in which people, in aggregate, guess in a calibrated way.

  7. Epistemic bottlenecks and bandwidth problems

    1. Information versus learning as a fundamental question of ML. When do we store exemplars on disk? When do we do gradient updates? How much compute should we spend on compression? (A toy illustration of the trade-off is sketched just after this list.)

  8. Billionaires? Elites? Minorities? Classes? Capitalism? Socialism? It is coordination problems all the way down.

  9. anthropic principles ✅ Good enough

  10. You can’t talk about us without us ❌ what did I even mean

  11. subculture dynamics ✅ Good enough

  12. Myths ✅ a few notes is enough

  13. Opinion dynamics (memetics for beginners) ✅ Good enough

  14. Table stakes versus tokenism

  15. Iterative game theory under bounded rationality ❌ too general

  16. Something about the fungibility of hipness and cash

  17. Pluralism

  18. Memetics ❌ (too big, will never finish)

  19. Cradlesnatch calculator ✅ Good enough

  20. lived evidence deductions and/or ad hominem for discussing genetic arguments.

  21. bias and base rates

  22. Stein variational gradient descent

  23. Edge of chaos, history of

  24. Interaction effects

  25. Human superorganisms

  26. Invasive arguments

  27. Movement design

  28. Ethical consumption

  29. X is Yer than Z

  30. Scientific community

  31. But what can I do?

  32. Decision rules

  33. Experimental ethics and surveillance

  34. Haunting

  35. Speech standards

  36. Black swan farming

  37. Doing complicated things naively

  38. Conspiracies as simulations

  39. Something about the limits of legible fairness versus metis in common property regimes

  40. Emancipating my tribe and the cruelty of collectivism (and why I love it anyway)

  41. Institutions for angels

  42. Lived experience in hypothesis testing

  43. Beliefs and rituals of tribes, optimisation thereof for our moral wetware

  44. Iterative game theory of communication styles

  45. The uncanny ally

  46. Adversarial categorization

  47. Messenger shooting

  48. Startup justice warriors/move fast and cancel things

  49. Elliptical belief propagation

  50. Akrasia in stochastic Hilbert space: What time-integrated happiness should we optimise?

  51. “The problem with Bernoulli regression is that binary outcomes just aren’t very informative,” one of my colleagues said to me in the context of a regression problem. Now I have decided that there is some meat on this bone. TODO: revisit the informativeness of categories about their covariates for the post-ImageNet era, from a classic vector quantisation perspective. Then: Deep learning classifiers as a model for legibility. (The information-theoretic version of the complaint is spelled out just after this list.)

  52. Where to deploy taboo

  53. Strategic ignorance

  54. privilege accountancy

  55. What is special about science? Transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
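
To make item 7 concrete, here is a minimal sketch, in plain NumPy with made-up toy data, of the two extremes it asks about: option A stores the exemplars and answers queries by nearest-neighbour lookup (no training compute, memory grows with the data); option B spends compute on gradient updates to compress the same data into a handful of parameters. This is an illustration of the trade-off, not anyone’s published method, and every name in it is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=200)
y = np.sin(X) + 0.1 * rng.normal(size=200)

# Option A: "store exemplars on disk" -- keep all the data and answer
# queries by 1-nearest-neighbour lookup. No training compute, O(n) memory.
def predict_lookup(x_query):
    return y[np.argmin(np.abs(X - x_query))]

# Option B: "gradient updates" -- compress the data into 4 parameters
# (a cubic fit) by gradient descent. Compute is spent up front; memory
# is O(number of parameters), and the exemplars can be thrown away.
features = np.vander(X, 4)                    # columns: x^3, x^2, x, 1
w = np.zeros(4)
for _ in range(20_000):
    w -= 0.005 * features.T @ (features @ w - y) / len(X)

def predict_compressed(x_query):
    return (np.vander([x_query], 4) @ w).item()

for xq in (-2.0, 0.5, 2.5):
    print(f"x={xq:+.1f}  lookup={predict_lookup(xq):+.3f}  "
          f"compressed={predict_compressed(xq):+.3f}")
```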

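A first pass at item 51 (my arithmetic, not a claim from any of the cited papers): a binary label can tell you at most one bit about its covariates, because mutual information is capped by the label’s entropy,

$$
I(Y;X) \;\le\; H(Y) \;\le\; \log_2 2 = 1\ \text{bit},
$$

whereas a $K$-way label, viewed as a vector quantisation of the covariate space, can carry up to $\log_2 K$ bits per observation.
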
NeurIPS 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradient estimates are more general than deterministic ones because they can be defined even for discrete variables (a toy score-function baseline is sketched after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous, function-valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022): LE-PDE is a learnable low-rank approximation method
  6. Holl, Koltun, and Thuerey (2022) — physics losses via forward simulations, without the need for sensitivities.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems. The former is in fact easier. Or is it? Can we simply use forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
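
On item 1, for contrast, here is the naive baseline that fancier estimators improve on: the score-function (REINFORCE) trick gives an unbiased gradient through discrete randomness by differentiating the log-probability rather than the sample. This is a generic NumPy illustration, not the Arya et al. (2022) method, and it is famously high-variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Objective: d/dp of E_{z ~ Bernoulli(p)}[f(z)], where f is only defined
# on {0, 1}, so there is no pathwise derivative through the sample z.
def f(z):
    return np.where(z == 1, 3.0, -1.0)

def score_function_grad(p, n_samples=100_000):
    """Unbiased estimate of d/dp E[f(z)] via E[f(z) * d/dp log P(z; p)]."""
    z = rng.binomial(1, p, size=n_samples)
    dlogp = np.where(z == 1, 1.0 / p, -1.0 / (1.0 - p))
    return np.mean(f(z) * dlogp)

p = 0.3
# E[f(z)] = 3p - (1 - p) = 4p - 1, so the exact gradient is 4.
print("score-function estimate:", score_function_grad(p))
print("exact gradient:         ", 4.0)
```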

Career tips and metalearning

References

Arya, Gaurav, Moritz Schauer, Frank Schäfer, and Christopher Vincent Rackauckas. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Paterne, Christopher W. Lanyon, Mauricio A. Álvarez, Engineer Bainomugisha, Michael Thomas Smith, and Richard David Wilkinson. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Philipp, Vladlen Koltun, and Nils Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Chieh-Hsin, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, and Stefano Ermon. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.
Phillips, Angus, Thomas Seror, Michael John Hutchinson, Valentin De Bortoli, Arnaud Doucet, and Emile Mathieu. 2022. “Spectral Diffusion Processes.” In.
Rudner, Tim G. J., Zonghao Chen, Yee Whye Teh, and Yarin Gal. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Jingtong, Julia Kempe, Drummond Fielding, Nikolaos Tsilivis, Miles Cranmer, and Shirley Ho. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In, 7.
Wu, Tailin, Takashi Maruyama, and Jure Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.” arXiv.
