PIBBSS x ILIAD Research Residency, January–February 2026
2026-01-02 — 2026-02-06
Wherein a London research residency is conducted at the London Initiative for Safe AI in Shoreditch, 5 January–13 February 2026, and lectures are delivered on agency and singular learning theory.
I’m in the first cohort of the Principles of Intelligence Research Residency. It runs from 5 January to 13 February 2026 in London.
Here’s the blurb:
TL;DR: PIBBSS and Iliad are announcing a Research Residency in applied mathematics and AI alignment. Competitive candidates are PhD or postdoctoral (or equally experienced) researchers in math, physics, or a related field. Our target outcome is for Researchers-in-Residence to continue research after the Residency by securing philanthropic funding to either start new research groups or to support other research projects.
This should be fun.
Hit me up if we’re in London at the same time. The residency is based at the London Initiative for Safe AI in Shoreditch.
1 Lecturers
Many! The list below is only a partial sample; I missed some sessions because I had too many projects exploding at once.
- Tom Everitt
- Jesse Hoogland
- Paul Riechers
- Lucius Bushnaq
- Jeremy Gillen
- Max Hennick
- George Robinson
- Leon Lang
- Cole Wyeth
- Dmitry Vaintrob
- Chris Elliott
- Artemy Kolchinsky on agency and MaxEnt
- Guillaume Corlouer on Deep Linear Networks
- Alexander Shen (generally described as a “masterclass” chalk talk)
- Edmund Lau
- Martin Biehl
- …
2 Themes
No pretence of completeness—just the ones that seemed relevant.
2.1 Information decomposition
Several attendees (e.g. Jansma (2025); Kolchinsky (2022)) are interested in a principled use of partial information decomposition (Williams and Beer 2010).
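To see why plain mutual information is not enough here, the usual toy case is XOR: each input alone tells you nothing about the output, yet the pair determines it completely, so all of the information is synergistic. A minimal sketch (my own toy example, not from any of the talks) that computes the relevant mutual informations straight from the joint distribution:

```python
import numpy as np
from itertools import product

# Joint distribution of (X1, X2, Y) with Y = XOR(X1, X2) and uniform inputs.
# Toy example for illustration only, not code from any residency talk.
p = {}
for x1, x2 in product([0, 1], repeat=2):
    p[(x1, x2, x1 ^ x2)] = 0.25

def marginal(p, keep):
    """Marginalize the joint distribution onto the given coordinate indices."""
    out = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + prob
    return out

def mutual_information(p, a, b):
    """I(A;B) in bits, where a and b are tuples of coordinate indices."""
    pa, pb, pab = marginal(p, a), marginal(p, b), marginal(p, a + b)
    return sum(
        prob * np.log2(prob / (pa[k[:len(a)]] * pb[k[len(a):]]))
        for k, prob in pab.items()
    )

print(mutual_information(p, (0,), (2,)))    # I(X1;Y)    = 0.0 bits
print(mutual_information(p, (1,), (2,)))    # I(X2;Y)    = 0.0 bits
print(mutual_information(p, (0, 1), (2,)))  # I(X1,X2;Y) = 1.0 bit
```

A full Williams–Beer decomposition goes further and splits I(X1,X2;Y) into redundant, unique, and synergistic atoms; in this example the single bit is pure synergy.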
2.2 Edge of stability, edge of chaos
Another theme: the edge of stability and the edge of chaos.
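For my own reference, the 2/η threshold that the edge-of-stability story keeps orbiting is already visible in one dimension: gradient descent on a quadratic is stable exactly when the curvature stays below 2/η. A minimal sketch (a toy of mine with made-up numbers, not from any of the talks):

```python
import numpy as np

# Gradient descent on f(w) = 0.5 * lam * w**2, whose curvature (sharpness) is lam.
# The classical stability threshold is lam < 2 / eta: below it the iterates
# contract; above it they oscillate with growing amplitude.
def gd_trajectory(lam, eta, w0=1.0, steps=20):
    w = w0
    traj = [w]
    for _ in range(steps):
        w = w - eta * lam * w   # the gradient of f is lam * w
        traj.append(w)
    return np.array(traj)

eta = 0.1
for lam in [5.0, 19.0, 21.0]:   # the threshold here is 2 / eta = 20
    final = abs(gd_trajectory(lam, eta)[-1])
    print(f"sharpness {lam:5.1f}: |w_20| = {final:.3g}")
```

The edge-of-stability observation, as I understand it, is that full-batch training of real networks drives the sharpness up to roughly this threshold and then hovers there instead of diverging outright.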
2.3 Embeddedness
See the embedded agency post.
2.4 “Model Local Learning”
Lucius Bushnaq of Goodfire takes a crack at building intuitive models for “generic learners”.
2.5 Computational no-coincidence conjectures
George Robinson presented some work on computational coincidences.
2.6 Reinforcement learning meets SLT
Chris Elliott presented Elliott et al. (2026), which applies Singular Learning Theory to reinforcement learning. I liked the construction where the policy acquires a global Gibbs posterior; as Chris noted, it makes some strong assumptions. Still, I learned a lot.
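I won’t try to reproduce the construction from memory, but for orientation, the tempered (Gibbs) posterior that singular learning theory usually works with is

$$
p_\beta(w \mid D_n) = \frac{\varphi(w)\, e^{-n \beta L_n(w)}}{\int \varphi(w')\, e^{-n \beta L_n(w')}\, \mathrm{d}w'},
$$

where $L_n$ is the empirical loss on $n$ samples, $\varphi$ is the prior, and $\beta$ an inverse temperature; with log loss and $\beta = 1$ this is just the Bayesian posterior. The paper’s construction gives the policy something of this flavour globally, which I take to be where the strong assumptions enter.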
2.7 Intentionality and agency
So much on this topic (Biehl and Virgo 2023; Hafner et al. 2022; Meulemans et al. 2025).
I synthesized some of it into “intentional language is ok”, “causally embedded agency”, and “agency embedded in causality”.
I’m still trying to process Martin’s presentation based on Biehl and Virgo (2023) and Virgo, Biehl, and McGregor (2021), which pulled in lots of fun tools (string diagrams, category-theoretic systems, “lumping” of coupled Markov systems as in Teza, Stella, and GrandPre (2025), …) to argue about internal model theorems and the intentional stance.
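The “lumping” part, at least, is easy to state concretely: a partition of a Markov chain’s states can be merged without breaking the Markov property when every state in a block has the same total transition probability into each block. A minimal check (my own toy example, not from the presentation):

```python
import numpy as np

# A 3-state Markov chain; we try to lump states 1 and 2 into one block.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.1, 0.2, 0.7],
])
blocks = [[0], [1, 2]]

def is_lumpable(P, blocks):
    """Strong lumpability: within each block, every state has the same
    total transition probability into every block."""
    for target in blocks:
        for block in blocks:
            into_target = P[np.ix_(block, target)].sum(axis=1)
            if not np.allclose(into_target, into_target[0]):
                return False
    return True

print(is_lumpable(P, blocks))   # True: rows 1 and 2 both put 0.1 on state 0
```

How that interacts with the coupled-systems and categorical machinery in the talk is exactly the part I haven’t digested yet.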
3 Singular learning theory
Jesse Hoogland gave an introductory lecture, but I missed it.
Edmund presented some results from his thesis connecting minimum description length and singular learning theory via a round trip through Jeffreys priors (Clarke and Barron 1990, 1994) and the Kraft–McMillan inequality of coding theory.
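I won’t reconstruct the full argument, but the two standard ingredients, as I understand them, are the Kraft–McMillan inequality, which says any uniquely decodable binary code with lengths $\ell_i$ satisfies $\sum_i 2^{-\ell_i} \le 1$ (so codelengths behave like negative log-probabilities), and the asymptotic expansion of the Bayes free energy from singular learning theory,

$$
F_n = -\log \int \varphi(w)\, e^{-n L_n(w)}\, \mathrm{d}w \approx n L_n(w^*) + \lambda \log n,
$$

in which the learning coefficient $\lambda$ replaces the $\tfrac{d}{2} \log n$ of the regular-model BIC/MDL story. How exactly the Jeffreys prior mediates the round trip is the part I would need to reconstruct from the thesis.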
Max Hennick presented a whole lot of provocative parallels between the phase transitions of statistical learning and cool stuff from physics, like the phase-field theory of crystal growth.
Also, an aside from Lucius: “From SLT to AIT: NN generalization out-of-distribution” (LessWrong).
Fernando Rosas presented his work on internal models as transducer reduction.
