PIBBSS x Iliad Research Residency, January 2026
2026-01-02 — 2026-01-13
Wherein a research residency is described, being convened in London at the London Initiative for Safe AI from 5 January to 13 February 2026, and a syllabus of lectures in applied mathematics and AI alignment is outlined.
I’m part of the inaugural cohort of the Principles of Intelligence Research Residency, which runs from 5 January to 13 February 2026.
TL;DR: PIBBSS and Iliad are announcing a Research Residency in applied mathematics and AI alignment. Competitive candidates are PhD or postdoctoral (or equally experienced) researchers in math, physics, or a related field. Our target outcome is for Researchers-in-Residence to continue research after the Residency by securing philanthropic funding to either start new research groups or to support other research projects.
This should be fun.
Hit me up if you’re in London. We’ll be based at the London Initiative for Safe AI.
1 Lecturers
2 Attendees
TBD.
- Myself
- …
3 Implicit themes
3.1 Information decomposition
Several attendees (e.g. Jansma (2025); Kolchinsky (2022)) are interested in a more principled use of partial information decomposition (Williams and Beer 2010).
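To anchor what "decomposition" means here, the following is a minimal sketch of the original Williams-Beer redundancy measure I_min, the quantity that partial information decomposition refines: the redundancy about a target T shared by two sources is the expected minimum of each source's "specific information" about each target outcome. The distributions and function names below are my own illustration, not code from any attendee.

```python
from collections import defaultdict
import math

def marginal(p, keep):
    # Marginalize a joint distribution {(s1, s2, t): prob} onto the given indices.
    m = defaultdict(float)
    for k, v in p.items():
        m[tuple(k[i] for i in keep)] += v
    return m

def specific_information(p, src, t):
    # I(S_src ; T = t) = sum_s p(s | t) * log2( p(t | s) / p(t) )
    p_t = marginal(p, (2,))
    p_s = marginal(p, (src,))
    p_st = marginal(p, (src, 2))
    total = 0.0
    for (s, tt), pst in p_st.items():
        if tt != t or pst == 0:
            continue
        p_s_given_t = pst / p_t[(t,)]
        p_t_given_s = pst / p_s[(s,)]
        total += p_s_given_t * math.log2(p_t_given_s / p_t[(t,)])
    return total

def imin_redundancy(p):
    # Williams-Beer I_min: expectation over t of the minimum specific information.
    p_t = marginal(p, (2,))
    return sum(pt * min(specific_information(p, 0, t),
                        specific_information(p, 1, t))
               for (t,), pt in p_t.items())

# XOR: T = S1 ^ S2 with uniform sources -- purely synergistic, zero redundancy.
xor = {(s1, s2, s1 ^ s2): 0.25 for s1 in (0, 1) for s2 in (0, 1)}
print(imin_redundancy(xor))   # → 0.0

# COPY: S1 = S2 = T -- fully redundant, one bit of redundancy.
copy = {(b, b, b): 0.5 for b in (0, 1)}
print(imin_redundancy(copy))  # → 1.0
```

The XOR/COPY pair is the standard sanity check: XOR is all synergy and no redundancy, COPY the reverse. I_min's known shortcomings on subtler examples are exactly what motivates the "more principled" decompositions mentioned above.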
3.2 Edge of stability, edge of chaos
Another theme: the edge of stability and the edge of chaos.
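For the edge-of-stability half, the core dynamical fact is one line: gradient descent on a quadratic with curvature a and learning rate η iterates x ← (1 − ηa)x, so it is stable exactly when a < 2/η; the edge-of-stability observations concern training hovering near that threshold. A toy sketch (the specific numbers are illustrative):

```python
def gd_trajectory(curvature, lr, x0=1.0, steps=50):
    # Gradient descent on f(x) = 0.5 * curvature * x**2:
    # each step multiplies x by (1 - lr * curvature),
    # so |x| contracts iff curvature < 2 / lr.
    x = x0
    for _ in range(steps):
        x = x * (1 - lr * curvature)
    return abs(x)

lr = 0.1  # stability threshold: curvature 2 / lr = 20
print(gd_trajectory(19.0, lr))  # just below threshold: contracts toward 0
print(gd_trajectory(21.0, lr))  # just above threshold: oscillates and diverges
```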
3.3 Embeddedness
See the embedded agency post.
4 Lectures
This is a very partial listing; I’m a bit distracted.
4.1 “Model Local Learning”
Lucius Bushnaq of Goodfire takes a crack at intuitive models for “generic learners”.
4.2 Computational no-coincidence conjectures
George Robinson presented some work on computational no-coincidence conjectures.
4.3 Reinforcement learning meets SLT
Nonetheless, I learned things. Chris Elliott presented Elliott et al. (2026), which applies Singular Learning Theory (SLT) to reinforcement learning. I liked the construction in which the policy acquires a global Gibbs posterior; as he flagged, it relies on some strong assumptions.
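In broad strokes, and glossing over the paper's actual RL setting, a Gibbs posterior over a finite set of candidate parameters puts mass p(w) ∝ exp(−nβ L_n(w)) on each candidate, so mass concentrates on empirical-loss minimizers as the sample size n grows. A toy sketch with made-up losses (not numbers from the paper):

```python
import math

def gibbs_posterior(losses, n, beta=1.0):
    # Gibbs posterior over a finite parameter grid:
    # p(w) proportional to exp(-n * beta * L_n(w)).
    weights = [math.exp(-n * beta * L) for L in losses]
    z = sum(weights)
    return [w / z for w in weights]

# Hypothetical empirical losses of three candidate policies.
losses = [0.10, 0.12, 0.50]
for n in (1, 10, 100):
    print(n, [round(p, 3) for p in gibbs_posterior(losses, n)])
```

At n = 1 the posterior is nearly uniform over the two good candidates; by n = 100 it has concentrated on the loss minimizer. The interesting SLT content is how fast this concentration happens when the minimizer is degenerate, which a finite grid like this cannot exhibit.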
