Top influences of 2025

Content that changed my life this year, and which also might change yours

2025-11-09 — 2026-01-02

Wherein the influences of the author’s year are catalogued, and it is noted that AI tools were used more than ever to assist reading, with a consequential reorientation toward the study of human–AI interfaces being effected.

AI safety
economics
language
review
wonk

Assumed audience:

Laypeople and also nerds like me


I read a lot this year, as I do every year, but more than ever I used AI tools to help with much of it. Some of it was useful in specialist ways; I don’t expect many people will care. Some of it, though, made my life different, so I’m mentioning it here. I “agree” entirely with relatively few of these pieces. The criterion for inclusion is that the piece influenced me, not necessarily that it agrees with me. That is to say, this list does constitute an endorsement of quality, but not of the content as such.

Not all of this was written this year. I don’t read the whole internet the moment it updates.

1 Conferences, symposia, workshops

2 Civitech

Digital governance and civitech, and other notions of what social coordination could or should be in the age of AI

I’m currently spinning up a modest civitech project in my home city, Melbourne/Naarm.

3 Oh wow — it turns out that Human–Computer Interaction is not only real, it’s existentially important

I always thought HCI was too soft and pre-paradigmatic to cohere into a real science. And too vague to make useful predictions.

I was wrong. It turns out to be the discipline that studies the interface between human and machine intelligence, and as such it is crucial for our survival. Moreover, although the median HCI paper might be a bit meh, the best ones are really good.

You can see what I mean by looking at some of my recent funding proposals, on human reward tampering, human–AI asymmetries …

I now hold that actually-existing AI only makes sense as a coupled human–AI system, and we need to understand what that system does.

Relatedly…

4 Societal Epistemic Health

How do we maintain a healthy infosphere, in the sense of a society that acquires good beliefs and sheds bad ones at scale, when deepfakes are ubiquitous and LLMs can generate plausible-sounding bullshit on demand?

Some readings that helped me think about this:

5 Stochastic parrotology

Ornithology of aleatoric psittaciformes. Chineseboxology.

These are abstruse ways I might describe the questions of predictive pretraining versus agency. Can pretraining lead to understanding? Can it lead to agency? Is this “causally embedded learning” framing (/notebook/causally_embedded.qmd) going to make this all make sense? When?

6 Political Economy of Actually-existing AI

The hype/delivery landscape is fractally complex, and anyone whose take reads like “it will obviously go like X” isn’t credible to me — they’re merely feeding the hot-take furnace.

Some types of hype are predictably nonsense. Never before has it been so painfully clear that whichever institution issues qualifications for “thought leadership” really needs to work on its syllabus.

That said, certain analyses are, I think, worth closer attention for their ability to help us discern what is actually possible. Here are some pieces that attempt to bring at least some knowledge of technical and socio-economic systems to bear on the political economy of AI.

7 Thomas Urquhart

Guys, this dissolute, self-sabotaging, anarchic genius has all his works online, and I missed it. Did you know he wrote a proposal, Logopandecteision, for the ultimate language in which all words described themselves by their sounds, but never finished it because most of the book consists of him complaining about his creditors? Did you know he wrote the Trissotetras, a trigonometry treatise in an invented nomenclature whose terms encode the proofs themselves? A quixotic mathematical-linguistic genius whose time has finally come.

8 Fiction

Max Gladstone’s Craft sequence has always been interestingly political. The conceit of the series is that the laws of physics are like legal laws: they can be challenged in court. Corporate lawyers notice this, and a whole industry of “Craft” lawyers and engineers springs up to litigate reality itself and industrialize divinity. But that’s so last decade; the new series, The Craft Wars, feels very of-the-moment: untrustworthy synthetic superintelligences, locked in high-leverage expansions, duel for market share.

Darkome by Hannu Rajaniemi: biohacking cypherpunks in a corporate-locked-down, plague-and-climate-collapse near-future Burning Man. I hope civilization endures long enough to read the sequel.

9 References

Binmore. 2010. “Game Theory and Institutions.” Journal of Comparative Economics, Symposium: The Dynamics of Institutions.
Bullock, Hammond, and Krier. 2025. “AGI, Governments, and Free Societies.”
Costello, Pennycook, and Rand. 2024. “Durably Reducing Conspiracy Beliefs Through Dialogues with AI.” Science.
Dezfouli, Nock, and Dayan. 2020. “Adversarial Vulnerabilities of Human Decision-Making.” Proceedings of the National Academy of Sciences.
Duque, Aghajohari, Cooijmans, et al. 2025. “Advantage Alignment Algorithms.” In.
Hammond, and Adam-Day. 2025. “Neural Interactive Proofs.” In.
Hyland, Gavenčiak, Costa, et al. 2024. “Free-Energy Equilibria: Toward a Theory of Interactions Between Boundedly-Rational Agents.” In.
Kamenica, and Gentzkow. 2011. “Bayesian Persuasion.” American Economic Review.
Kolchinsky, Marvian, Gokler, et al. 2025. “Maximizing Free Energy Gain.” Entropy.
Kulveit, Douglas, Ammann, et al. 2025. “Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development.”
Ng, Fong, Frazier, et al. 2025. “TabMGP: Martingale Posterior with TabPFN.”
Qiu, He, Chugh, et al. 2025. “The Lock-in Hypothesis: Stagnation by Algorithm.” In.
Still, Sivak, Bell, et al. 2012. “Thermodynamics of Prediction.” Physical Review Letters.