Probably actually reading/writing



Stuff that I am currently actively reading or otherwise working on. If you are looking at this, and you aren’t me, you should re-evaluate your hobbies.

See also my more aspirational paper reading list.

Currently writing

Not all published yet.

  1. Billionaires? Elites? Minorities? Classes? Capitalism? Socialism? It is coordination problems all the way down.
  2. anthropic principles ✅ Good enough
  3. You can’t talk about us without us ❌ what did I even mean
  4. subculture dynamics ✅ Good enough
  5. Myths ✅ a few notes is enough
  6. Opinion dynamics (memetics for beginners) ✅ Good enough
  7. Table stakes versus tokenism
  8. Iterative game theory under bounded rationality ❌ too general
  9. Something about the fungibility of hipness and cash
  10. Pluralism
  11. Memetics ❌ (too big, will never finish)
  12. Cradlesnatch calculator ✅ Good enough
  13. lived evidence deductions and/or ad hominem for discussing genetic arguments.
  14. bias and baserates
  15. Stein variational gradient descent
  16. Edge of chaos, history of
  17. Interaction effects
  18. Human superorganisms
  19. Invasive arguments
  20. Movement design
  21. Ethical consumption
  22. X is Yer than Z
  23. Scientific community
  24. But what can I do?
  25. Decision rules
  26. Experimental ethics and surveillance
  27. Haunting
  28. Speech standards
  29. Black swan farming
  30. Doing complicated things naively
  31. Conspiracies as simulations
  32. Something about the limits of legible fairness versus metis in common property regimes
  33. Emancipating my tribe and the cruelty of collectivism (and why I love it anyway)
  34. Institutions for angels
  35. Lived experience in hypothesis testing
  36. Beliefs and rituals of tribes, optimisation thereof for our moral wetware
  37. Iterative game theory of communication styles
  38. The uncanny ally
  39. Adversarial categorization
  40. Messenger shooting
  41. Startup justice warriors/move fast and cancel things
  42. Elliptical belief propagation
  43. Akrasia in stochastic Hilbert space: What time-integrated happiness should we optimise?
  44. “The problem with Bernoulli regression is that binary outcomes just aren’t very informative,” one of my colleagues said to me in the context of a regression problem. Now I have decided that there is some meat on this bone. TODO: revisit the informativeness of categories about their covariates for the post-ImageNet era, from a classic vector quantisation perspective (a toy entropy calculation follows this list). Then: deep learning classifiers as a model for legibility.
  45. Where to deploy taboo
  46. Strategic ignorance
  47. privilege accountancy
  48. What is special about science? Transmissibility. Can ChatGPT do transmission? Or is it 100% tacit? How does explainability relate to transmissibility?
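
Apropos item 44: a minimal sketch of the entropy argument, using a toy calculation of my own (not something my colleague said). The point is that the mutual information between a label and its covariates is bounded by the label’s entropy, so a binary outcome can tell us at most 1 bit per observation, whereas a balanced 1000-way label (ImageNet-ish) can tell us up to about 10 bits.

```python
# Toy illustration (my own numbers): the supervision signal per observation is
# bounded by the label entropy, since I(Y; X) <= H(Y).
import numpy as np

def entropy_bits(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A balanced binary outcome carries at most 1 bit per observation.
print(entropy_bits([0.5, 0.5]))           # 1.0

# A balanced 1000-class label carries at most ~9.97 bits per observation.
K = 1000
print(entropy_bits(np.full(K, 1.0 / K)))  # ~9.97
```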

NeurIPS 2022 follow-ups

  1. Arya et al. (2022) — stochastic gradients are more general than deterministic ones because they are defined even for discrete variables (see the score-function sketch after this list)
  2. Rudner et al. (2022)
  3. Phillips et al. (2022) — diffusions in the spectral domain allow us to handle continuous, function-valued inputs
  4. Gahungu et al. (2022)
  5. Wu, Maruyama, and Leskovec (2022): LE-PDE is a learnable low-rank approximation method (see the latent-evolution sketch after this list)
  6. Holl, Koltun, and Thuerey (2022) — Physics loss via forward simulations, without the need for sensitivities.
  7. Neural density estimation
  8. Metrics for inverse design and inverse inference problems; the former is in fact easier. Or is it? Can we simply use forward prediction loss?
  9. Noise injection in emulator learning (see refs in Su et al. (2022))
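
On the first follow-up: the claim that gradients survive discrete randomness is easy to check with the plain score-function (REINFORCE) estimator. This is not the method of Arya et al. (2022), but it makes the same point: the derivative of an expectation over a discrete variable is well defined even though no individual sample path is differentiable. A minimal sketch:

```python
# Minimal sketch (standard score-function estimator, not Arya et al.'s method):
# estimate d/dp E[f(X)] for X ~ Bernoulli(p), where f only ever sees the
# discrete values 0 and 1.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Arbitrary objective of a discrete variable.
    return 3.0 * x + 1.0

def score_function_grad(p, n=200_000):
    """Monte Carlo estimate of d/dp E[f(X)] using
    d/dp log P(X = x; p) = (x - p) / (p * (1 - p))."""
    x = (rng.random(n) < p).astype(float)
    score = (x - p) / (p * (1 - p))
    return float(np.mean(f(x) * score))

p = 0.3
print(score_function_grad(p))  # ~3.0, up to Monte Carlo error
print(f(1.0) - f(0.0))         # exact value: d/dp [p f(1) + (1 - p) f(0)] = 3.0
```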
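And for the LE-PDE note: the shape of the idea, as I read it, is encode-evolve-decode: compress the full PDE state into a small latent vector, time-step a cheap learned map in that latent space, and decode only when a full-field prediction is needed. The sketch below uses placeholder random linear maps where the paper uses trained networks, so it shows only the cost structure, not the method itself.

```python
# Structural sketch only (random linear maps stand in for Wu et al.'s trained
# networks): encode once, roll out cheaply in a low-dimensional latent space,
# decode when a full-field prediction is required.
import numpy as np

rng = np.random.default_rng(0)
N, D = 4096, 32                                 # grid size, latent dimension

E = rng.standard_normal((D, N)) / np.sqrt(N)    # "encoder"
G = rng.standard_normal((D, D)) / np.sqrt(D)    # latent time-stepper
Dec = rng.standard_normal((N, D)) / np.sqrt(D)  # "decoder"

u0 = rng.standard_normal(N)                     # initial PDE state on the grid
z = E @ u0                                      # O(N D), once
for _ in range(100):
    z = G @ z                                   # O(D^2) per step
u_pred = Dec @ z                                # decode only at the end
print(u_pred.shape)                             # (4096,)
```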

Career tips and metalearning

References

Arya, Gaurav, Moritz Schauer, Frank Schäfer, and Christopher Vincent Rackauckas. 2022. “Automatic Differentiation of Programs with Discrete Randomness.” In.
Gahungu, Paterne, Christopher W. Lanyon, Mauricio A. Álvarez, Engineer Bainomugisha, Michael Thomas Smith, and Richard David Wilkinson. 2022. “Adjoint-Aided Inference of Gaussian Process Driven Differential Equations.” In.
Holl, Philipp, Vladlen Koltun, and Nils Thuerey. 2022. “Scale-Invariant Learning by Physics Inversion.” In.
Lai, Chieh-Hsin, Yuhta Takida, Naoki Murata, Toshimitsu Uesaka, Yuki Mitsufuji, and Stefano Ermon. 2022. “Regularizing Score-Based Models with Score Fokker-Planck Equations.” In.
Nguyen, Long, and Andy Malinsky. 2020. “Exploration and Implementation of Neural Ordinary Differential Equations,” 34.
Phillips, Angus, Thomas Seror, Michael John Hutchinson, Valentin De Bortoli, Arnaud Doucet, and Emile Mathieu. 2022. “Spectral Diffusion Processes.” In.
Rudner, Tim G. J., Zonghao Chen, Yee Whye Teh, and Yarin Gal. 2022. “Tractable Function-Space Variational Inference in Bayesian Neural Networks.” In.
Su, Jingtong, Julia Kempe, Drummond Fielding, Nikolaos Tsilivis, Miles Cranmer, and Shirley Ho. 2022. “Adversarial Noise Injection for Learned Turbulence Simulations.” In, 7.
Wu, Tailin, Takashi Maruyama, and Jure Leskovec. 2022. “Learning to Accelerate Partial Differential Equations via Latent Global Evolution.” arXiv.
