Universal artificial intelligence, AIXI

2024-12-01 — 2025-09-27

Wherein AIXI is described as a theoretical agent that combines Solomonoff induction with sequential decision theory, and is noted to be uncomputable because it inherits the uncomputability of Kolmogorov complexity.

adversarial
AI safety
catastrophe
economics
faster pussycat
innovation
language
machine learning
mind
neural nets
NLP
security
technology

AIXI [ˈai̯k͡siː] is a thought experiment about “universal intelligence”. Think of it as “AI if compute were no object”, or, if you prefer, “AI at the asymptotic limit of the scaling laws”. It’s basically the most super-possible superintelligence.

It’s not meant to be implemented directly (it’s uncomputable); it’s a gold-standard reference model: “what would a perfectly rational agent look like if it had unlimited computing power?”

The construction is simple in spirit: it combines Solomonoff induction (a universal Bayesian prior over all computable environments) with sequential decision theory (choose actions that maximize expected future reward).
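To make the recipe concrete, here is a deliberately tiny, computable caricature in Python: the Solomonoff mixture over all computable environments is replaced by a hand-picked two-element environment class with an explicit prior (both illustrative assumptions; real AIXI enumerates every program), and the expectation-maximization over futures is done by brute force.

```python
import itertools

# Toy stand-in for AIXI (illustrative only): the uncomputable Solomonoff
# mixture over all computable environments is replaced by a two-element
# environment class, and the 2^-K(env) weights by an explicit prior.
# Each "environment" maps the list of actions so far to (observation, reward).

def env_always(bit):
    """World in which the rewarding action is always `bit`."""
    def step(actions):
        return bit, 1.0 if actions[-1] == bit else 0.0
    return step

ENVS = [env_always(0), env_always(1)]   # the "computable worlds"
PRIOR = [0.5, 0.5]                      # stand-in for the 2^-K(env) weights

def mixture_value(actions, weights):
    """Prior-weighted total reward of an action sequence across all worlds."""
    return sum(
        w * sum(env(actions[: k + 1])[1] for k in range(len(actions)))
        for env, w in zip(ENVS, weights)
    )

def aixi_action(history, horizon, weights):
    """Expectimax: return the first action of the best future action sequence."""
    best = max(
        itertools.product([0, 1], repeat=horizon),
        key=lambda future: mixture_value(list(history) + list(future), weights),
    )
    return best[0]
```

A real Bayesian agent would also update the mixture weights on each observation; here they are simply passed in by hand.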

Formally, the value of a policy \(\pi\), given history \(h_t = a_1 o_1 r_1 \dots a_{t-1} o_{t-1} r_{t-1}\), is

\[ V^{\pi}_{\xi}(h_t) \;=\; \mathbb{E}_{\xi} \Bigg[ \sum_{k=t}^{m} r_k \;\Big|\; h_t, \pi \Bigg], \]

where:

- \(a_k\), \(o_k\), and \(r_k\) are the action, observation, and reward at step \(k\);
- \(\pi\) is the policy generating the future actions \(a_{t:m}\);
- \(m\) is the (finite) planning horizon;
- \(\xi\) is the Solomonoff universal mixture, which averages over every computable environment \(\mu\), weighting each by roughly \(2^{-K(\mu)}\), where \(K(\mu)\) is its Kolmogorov complexity — the length of the shortest program that computes it.

At each step, AIXI chooses the action that maximizes this universal expectation of future reward.
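Spelled out in Hutter’s expectimax form (one standard rendering; here \(q\) ranges over programs for the environment and \(\ell(q)\) is the length of \(q\)):

\[ a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \big( r_t + \dots + r_m \big) \sum_{q \,:\, q(a_{1:m}) = o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}. \]

The innermost sum — over all programs consistent with the interaction history — is exactly how the universal mixture \(\xi\) assigns probability to observation-reward sequences.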

So, in words: AIXI is the policy that maximizes expected total reward averaged over every possible computable world, with each world weighted by how simple its description is. This makes AIXI maximally intelligent in a precise, technical sense.

But since Kolmogorov complexity is uncomputable, AIXI is uncomputable too. People study approximations such as AIXItl (time- and length-limited), but AIXI’s main role is theoretical: it’s a yardstick for general intelligence, and a source of theorems about what such agents would do.
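One family of approximations replaces exact expectimax with sampling, in the spirit of Veness et al.’s Monte Carlo AIXI (which properly uses ρUCT search over a context-tree mixture; the plain-rollout sketch below, with its toy two-environment mixture, is only an assumed simplification):

```python
import random

# Monte Carlo sketch of AIXI-style action selection (illustrative only):
# sample a world from the (posterior) mixture weights, roll out random play,
# and pick the first action with the best average return.

def env_always(bit):
    """World in which the rewarding action is always `bit`."""
    def step(actions):
        return bit, 1.0 if actions[-1] == bit else 0.0
    return step

def mc_value(action, envs, weights, horizon, rollouts, rng):
    """Estimate the value of `action` by sampling worlds and playing randomly."""
    total = 0.0
    for _ in range(rollouts):
        env = rng.choices(envs, weights=weights, k=1)[0]   # sample a world
        acts = [action] + [rng.randint(0, 1) for _ in range(horizon - 1)]
        total += sum(env(acts[: k + 1])[1] for k in range(horizon))
    return total / rollouts

def mc_action(envs, weights, horizon=3, rollouts=200, seed=0):
    """Choose the action with the highest Monte Carlo value estimate."""
    rng = random.Random(seed)
    return max((0, 1), key=lambda a: mc_value(a, envs, weights, horizon, rollouts, rng))
```

With enough rollouts the estimate concentrates, but unlike the full expectimax there is no optimality guarantee at any finite sample size.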

1 Incoming

2 References

Delétang, Ruoss, Duquenne, et al. 2024. “Language Modeling Is Compression.”
Everitt, and Hutter. 2018. “Universal Artificial Intelligence: Practical Agents and Fundamental Challenges.” In Foundations of Trusted Autonomy.
Everitt, Lea, and Hutter. 2018. “AGI Safety Literature Review.”
Hayashi, and Takahashi. 2025. “Universal AI Maximizes Variational Empowerment.”
Hutter. 2000. “A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity.”
———. 2005. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science.
———. 2007. “Universal Algorithmic Intelligence: A Mathematical Top→Down Approach.” In Artificial General Intelligence.
———. 2012. “Can Intelligence Explode?” Journal of Consciousness Studies.
Hutter, Quarel, and Catt. 2024. An Introduction to Universal Artificial Intelligence.
Legg, and Hutter. 2007. “Universal Intelligence: A Definition of Machine Intelligence.” Minds and Machines.
Leike, and Hutter. n.d. “Bad Universal Priors and Notions of Optimality.”
Soares. n.d. “Formalizing Two Problems of Realistic World-Models.”
Sunehag, and Hutter. 2013. “Principles of Solomonoff Induction and AIXI.” In Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence: Papers from the Ray Solomonoff 85th Memorial Conference, Melbourne, VIC, Australia, November 30 – December 2, 2011. Lecture Notes in Computer Science.
Veness, Ng, Hutter, and Silver. 2010. “Reinforcement Learning via AIXI Approximation.” Proceedings of the AAAI Conference on Artificial Intelligence.
Veness, Ng, Hutter, Uther, et al. 2010. “A Monte Carlo AIXI Approximation.”
Yang-Zhao, Wang, and Ng. n.d. “A Direct Approximation of AIXI Using Logical State Abstractions.”