PIBBSS x ILIAD Research residency January-February 2026
2026-01-02 — 2026-02-27
Wherein a Shoreditch residency at the London Initiative for Safe AI is recounted, held 5 January–13 February 2026, with lectures ranging from information decomposition to singular learning theory.
I was in the first cohort of the Principles of Intelligence Research Residency. It ran from 5 January to 13 February 2026 in London.
The pilot of the residency was successful, and there will be follow-on programs that you might wish to apply to yourself.
The residency was based at the London Initiative for Safe AI in Shoreditch. It was a lot of productive chaos.
Here’s the blurb:
tl;dr: PIBBSS and Iliad are announcing a Research Residency in applied mathematics and AI alignment. Competitive candidates are PhD or postdoctoral (or equally experienced) researchers in math, physics, or a related field. Our target outcome is for Researchers-in-Residence to continue research after the Residency by securing philanthropic funding to either start new research groups or to support other research projects.
I can now announce the major outcome of this project for me which was the founding of the Alignment Journal; for more details see the announcement post.
1 Lecturers
Many! The list below is only a partial sample of the luminaries who contributed. I missed a few lectures because there was simply too much going on; many of the connections I made while I was there led to projects, and there was no time for everything.
- Tom Everitt
- Jesse Hoogland
- Paul Riechers
- Lucius Bushnaq
- Jeremy Gillen
- Max Hennick
- George Robinson
- Leon Lang
- Cole Wyeth
- Dmitry Vaintrob
- Chris Elliott
- Artemy Kolchinsky on agency and MaxEnt
- Guillaume Corlouer on Deep Linear Networks
- Alexander Shen (generally described as a “masterclass” chalk talk)
- Edmund Lau
- Martin Biehl
- …
2 Themes
No pretence of completeness—just the ones that seemed relevant.
2.1 Information decomposition
Several attendees (e.g. Jansma (2025); Kolchinsky (2022)) are interested in a principled use of partial information decomposition (Williams and Beer 2010).
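The canonical motivating example for PID is XOR. Here is a minimal sketch (my own, using only the standard definition of mutual information, not anyone's presented code): for Y = X1 ⊕ X2 with independent fair inputs, each input alone carries zero bits about Y while the pair carries one bit, so in the Williams–Beer decomposition that bit is entirely synergistic.

```python
import itertools
import math

def mutual_information(joint):
    """Mutual information I(A;B) in bits, from a dict {(a, b): p}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Y = X1 XOR X2, with X1, X2 independent fair coins.
p_x1_y, p_x2_y, p_x12_y = {}, {}, {}
for x1, x2 in itertools.product([0, 1], repeat=2):
    y, p = x1 ^ x2, 0.25
    p_x1_y[(x1, y)] = p_x1_y.get((x1, y), 0.0) + p
    p_x2_y[(x2, y)] = p_x2_y.get((x2, y), 0.0) + p
    p_x12_y[((x1, x2), y)] = p

print(mutual_information(p_x1_y))   # 0.0: X1 alone says nothing about Y
print(mutual_information(p_x2_y))   # 0.0: likewise for X2
print(mutual_information(p_x12_y))  # 1.0: the whole bit is synergistic
```

The hard (and contested) part of PID is defining redundancy for general distributions; XOR is just the clean corner case where every sensible definition agrees.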
2.2 Edge of stability, edge of chaos
Another recurring theme was the edge of stability in gradient descent, where the loss sharpness rises until it hovers near 2/η for step size η, and its older cousin, the edge of chaos in dynamical and neural systems.
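To make the stability threshold concrete, here is a minimal sketch (my own illustration, not code from any lecture): full-batch gradient descent on a one-dimensional quadratic with curvature `a` converges exactly when `a < 2/eta`, which is the threshold that "edge of stability" training is empirically observed to hover at.

```python
# Gradient descent on f(w) = a * w**2 / 2, whose curvature (sharpness) is a.
# Each step multiplies w by (1 - eta * a), so the iteration is stable
# iff |1 - eta * a| < 1, i.e. a < 2 / eta.
def gd_trajectory(a, eta, w0=1.0, steps=50):
    w = w0
    for _ in range(steps):
        w -= eta * a * w  # gradient of f(w) is a * w
    return w

eta = 0.1
print(abs(gd_trajectory(a=15.0, eta=eta)))  # a < 2/eta = 20: shrinks toward 0
print(abs(gd_trajectory(a=25.0, eta=eta)))  # a > 2/eta = 20: blows up
```

The toy model only captures the stability boundary itself, not the interesting empirical fact that training dynamics drive the sharpness up to that boundary and then ride along it.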
2.3 Embeddedness
See the embedded agency post.
2.4 “Model Local Learning”
Lucius Bushnaq of Goodfire takes a crack at building intuitive models for “generic learners”.
2.5 Computational no-coincidence conjectures
George Robinson presented some work on computational no-coincidence conjectures.
2.6 Reinforcement learning meets SLT
Chris Elliott presented Elliott et al. (2026), which applies Singular Learning Theory to reinforcement learning. I liked the construction where the policy acquires a global Gibbs posterior; as Chris noted, it makes some strong assumptions. Still, I learned a lot.
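For readers who have not met it, the Gibbs (tempered) posterior in question has the standard form (my notation; Elliott et al. may parameterize it differently):

```latex
\pi_\beta(w \mid D_n) \;\propto\; \varphi(w)\, \exp\!\bigl(-n\beta L_n(w)\bigr)
```

where $L_n$ is the empirical loss on $n$ samples, $\varphi$ is a prior over parameters $w$, and $\beta$ is an inverse temperature; with log-loss and $\beta = 1$ this recovers the ordinary Bayesian posterior.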
2.7 Intentionality and agency
So much on this topic (Biehl and Virgo 2023; Hafner et al. 2022; Meulemans et al. 2025).
I synthesized some into intentional language is OK, causally embedded agency, and agency embedded in causality.
I’m still trying to process Martin’s presentation, based on Biehl and Virgo (2023) and Virgo, Biehl, and McGregor (2021). Martin pulled in lots of fun tools (string diagrams, category-theoretic systems theory, “lumping” of coupled Markov systems as in Teza, Stella, and GrandPre (2025)…) to argue about internal-model theorems and the intentional stance.
3 Singular learning theory
Jesse Hoogland gave an introductory lecture, but I missed it.
Edmund presented some results from his thesis, connecting minimum description length and singular learning theory via a round trip through Jeffreys’ priors (Clarke and Barron 1990, 1994) and the Kraft–McMillan inequality of coding theory.
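My rough gloss of the shape of that connection (standard SLT, not a transcription of Edmund's slides): Watanabe's free energy asymptotics give

```latex
F_n \;=\; -\log \int e^{-n L_n(w)}\,\varphi(w)\,dw
      \;=\; n L_n(w^\ast) \;+\; \lambda \log n \;+\; O(\log\log n)
```

with learning coefficient $\lambda$; for regular models $\lambda = d/2$, recovering the familiar MDL/BIC codelength $n L_n(\hat{w}) + \tfrac{d}{2}\log n$. The Kraft–McMillan inequality lets such codelengths be read as lengths of an actual prefix code, and Jeffreys' prior is the choice that makes the coding story asymptotically minimax-regret optimal, which is the Clarke–Barron line of work.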
Max Hennick presented loads of provocative parallels between phase transitions in statistical learning and in physics, such as the phase-field theory of crystal growth.
Also, an aside from Lucius: his LessWrong post “From SLT to AIT: NN generalization out-of-distribution”.
Fernando Rosas presented his work on internal models as transducer reduction.
