World models arising in foundation models.
2024-12-20 — 2026-02-05
Wherein embeddings from sundry models are found mappable by structure alone, without paired data, and neural speech activity is aligned linearly with such contextual vectors.
Placeholder notes on what kinds of world models sit inside large neural nets. They do seem to have some kind of internal model of the outside world; in practice, what kind of thing is it?
1 Representational similarity
Are the semantics of embeddings or other internal representations in different models or modalities represented in a common “Platonic” space that’s universal in some sense (Huh et al. 2024b)? If so, should we care?
I confess I struggle to make this concrete enough to produce testable hypotheses; that’s probably because I haven’t read enough of the literature. Here’s something that might be progress:
- Jack Morris “Excited to finally share on arXiv what we’ve known for a while now: All Embedding Models Learn The Same Thing. Embeddings from different models are so similar that we can map between them based on structure alone — without any paired data. Feels like magic, but it’s real:🧵” (Jha et al. 2025)
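The weaker, more checkable version of that claim is that different models embed the same items with (nearly) the same pairwise geometry, which can be scored without learning any mapping at all. A minimal sketch, using linear CKA as the structure-only similarity score; the toy “models” below are just random rotations of a shared latent space, my own illustration rather than anything from Jha et al. (2025):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centred kernel alignment between representation matrices
    X (n x d1) and Y (n x d2) of the *same* n inputs. It compares pairwise
    geometry and is invariant to rotations and isotropic scaling of either
    space, so no mapping between the two spaces needs to be learned."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def random_embedding(rng, latent, d_out):
    """Embed the latent points into d_out dimensions via a random
    semi-orthogonal map, which preserves their pairwise geometry."""
    q, _ = np.linalg.qr(rng.normal(size=(d_out, latent.shape[1])))
    return latent @ q.T

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 16))          # shared "meaning" of 200 items
model_a = random_embedding(rng, latent, 64)  # one model's 64-d embeddings
model_b = random_embedding(rng, latent, 32)  # another model's 32-d embeddings
unrelated = rng.normal(size=(200, 32))       # a model that shares nothing

print(linear_cka(model_a, model_b))   # close to 1: same geometry, different coordinates
print(linear_cka(model_a, unrelated)) # much lower: no shared structure
```

Note that CKA still assumes both models embed the same inputs; the Jha et al. result is stronger, recovering an actual translation between the spaces with no paired data at all.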
My friend Pascal Hirsch mentions the hypothesis that this should also apply to the embeddings people have in their brains, referring to this fascinating recent Google paper (Goldstein et al. 2025):
[…] neural activity in the human brain aligns linearly with the internal contextual embeddings of speech and language within LLMs as they process everyday conversations.
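As I understand it, the standard recipe behind “aligns linearly” is a linear encoding model: ridge-regress each electrode’s activity onto the contextual embedding of each word and score the predictions on held-out data. A hedged sketch on synthetic data; the shapes, variable names and scikit-learn estimators are my stand-ins, not the paper’s pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 1000, 768, 50

# Stand-ins: per-word contextual embeddings from a language model, and the
# neural response (e.g. high-gamma power) at each electrode around each word.
embeddings = rng.normal(size=(n_words, emb_dim))
true_weights = rng.normal(size=(emb_dim, n_electrodes)) * 0.1
neural = embeddings @ true_weights + rng.normal(size=(n_words, n_electrodes))

# Encoding model: a linear map from embeddings to neural activity, scored by
# the correlation between predicted and observed activity on held-out words.
scores = np.zeros(n_electrodes)
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for train, test in kfold.split(embeddings):
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    model.fit(embeddings[train], neural[train])
    predicted = model.predict(embeddings[test])
    for e in range(n_electrodes):
        scores[e] += np.corrcoef(predicted[:, e], neural[test, e])[0, 1] / 5

print(f"mean held-out correlation across electrodes: {scores.mean():.2f}")
```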
2 Causal world models
World models are a somewhat different concept from “representations”; I’m not precisely sure how, but from skimming it seems that world models might be easier to ground in causal abstraction and causal inference.
See causal abstraction for a discussion of the idea that a neural net’s latent space can end up discovering causal representations of the world.
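One way the causal-abstraction literature makes this operational is the interchange intervention: patch an internal activation from a “source” run into a “base” run and check whether the output changes exactly as the hypothesised high-level causal model predicts. A toy sketch with a hand-built two-unit network, entirely my own construction for illustration:

```python
# If patching a hidden activation from a source input into a base run makes
# the output behave as though the corresponding high-level variable had been
# swapped, that activation is a candidate causal representation of the
# variable. Real work does this on trained models; this network is rigged.
import numpy as np

W_in = np.array([[1.0, 1.0, 0.0, 0.0],   # hidden unit 0 ~ a + b
                 [0.0, 0.0, 1.0, 1.0]])  # hidden unit 1 ~ c + d

def hidden(x):
    return W_in @ x

def output(h):
    return float(h[0] > h[1])  # "is a + b greater than c + d?"

def run(x, patch=None):
    h = hidden(x)
    if patch is not None:
        unit, value = patch
        h = h.copy()
        h[unit] = value  # interchange intervention on one hidden unit
    return output(h)

base = np.array([1.0, 2.0, 4.0, 5.0])    # a+b=3,  c+d=9 -> output 0
source = np.array([6.0, 7.0, 0.0, 1.0])  # a+b=13         -> output 1

# Patch hidden unit 0 of the base run with its value from the source run.
print(run(base, patch=(0, hidden(source)[0])))
# 1.0: the base run now behaves as if it had the source's a+b, which is
# exactly what the high-level causal model predicts for that swap.
```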
3 Creating worlds to model
Rosas, Boyd, and Baltieri (2025) make a pleasing connection to the simulation hypothesis:
Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment to ensure their reliability and safety. However, accurate world models often have high computational demands that can severely restrict the scope and depth of such assessments. Inspired by the classic ‘brain in a vat’ thought experiment, here we investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation. By following principles from computational mechanics, our approach reveals a fundamental trade-off in world model construction between efficiency and interpretability, demonstrating that no single world model can optimise all desirable characteristics. Building on this trade-off, we identify procedures to build world models that either minimise memory requirements, delineate the boundaries of what is learnable, or allow tracking causes of undesirable outcomes. In doing so, this work establishes fundamental limits in world modelling, leading to actionable guidelines that inform core design choices related to effective agent evaluation.
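For flavour, the computational-mechanics construction they lean on builds a memory-minimal predictive model by lumping together histories that predict the same future (the “causal states”). A crude sketch on a toy binary process; the process, window length and merging tolerance are my own choices, not the paper’s procedure:

```python
# Toy epsilon-machine-flavoured reconstruction: estimate next-symbol
# distributions for fixed-length histories and merge histories whose
# predictions agree, giving a small set of predictive "states".
from collections import Counter, defaultdict
import random

rng = random.Random(0)

# Toy "even process": 1s only ever occur in runs of even length, so the
# only memory that matters is whether we are halfway through a pair of 1s.
seq_chars = []
need_one = False
for _ in range(20000):
    if need_one:
        seq_chars.append("1")
        need_one = False
    else:
        symbol = rng.choice("01")
        seq_chars.append(symbol)
        need_one = symbol == "1"
seq = "".join(seq_chars)

k = 3  # condition on length-3 histories
counts = defaultdict(Counter)
for i in range(k, len(seq)):
    counts[seq[i - k:i]][seq[i]] += 1

# Merge histories whose predictive distributions (here just P(next=1))
# agree within a tolerance; each group is a crude "causal state".
groups = []  # each entry: [representative P(next=1), list of histories]
for history in sorted(counts):
    p_one = counts[history]["1"] / sum(counts[history].values())
    for group in groups:
        if abs(group[0] - p_one) < 0.05:
            group[1].append(history)
            break
    else:
        groups.append([p_one, [history]])

for p_one, histories in groups:
    print(f"P(next=1) ≈ {p_one:.2f}  histories: {histories}")
```

A fixed-length history window is only an approximation here: a history like “111” cannot tell how long the run of 1s really is, so its predictive distribution is a mixture and it typically ends up in a state of its own.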
4 Incoming
- Condensation: a theory of concepts — Sam Eisenstat at MAISU 2025, talk and discussion
- John Wentworth’s Testing the Natural Abstraction Hypothesis: Project Intro
- Jon Kleinberg, AI’s Models of the World, and Ours | Theoretically Speaking
- How Does a Blind Model See the Earth? - by Henry — latent geographical “world” model (!)
- NeurIPS 2023 Tutorial: Language Models meet World Models
