Game complexity
2019-06-07 — 2026-04-29
Wherein the Nash equilibrium, whose existence is guaranteed by fixed-point arguments, is established as PPAD-complete, while correlated equilibrium is seen to admit a linear-programming solution.
The complexity of computing game-theoretic solution concepts, as it shows up in cooperation, mechanism design, adversarial training, and so on.
A question in classical game theory: are the equilibria the classical theory promises us actually computable? Finding a Nash equilibrium of a general two-player game is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou 2009; Chen and Deng 2006), which means no rational agent could plausibly compute one “fast enough” on a non-trivial game even though Nash’s theorem guarantees existence. This is the stylised fact behind a lot of bounded-rationality and learning-based approaches; see the broader landscape of fixes and adjacent fields.
1 Nash equilibrium
The basic problem: given payoff matrices \(A, B\) for a two-player game, find mixed strategies \(\sigma, \tau\) such that neither player can profitably deviate. Nash 1950 says they exist; the computational problem is to find them.
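The deviation condition is cheap to verify even though finding \(\sigma, \tau\) is hard. A minimal sketch in numpy (the function name and the matching-pennies example are mine, not from any particular library):

```python
import numpy as np

def nash_regret(A, B, sigma, tau):
    """Largest gain either player gets by deviating from (sigma, tau).

    A and B are the row and column players' payoff matrices. The pair is an
    exact Nash equilibrium iff the regret is 0, and an epsilon-equilibrium
    iff the regret is at most epsilon.
    """
    row_payoff = sigma @ A @ tau
    col_payoff = sigma @ B @ tau
    # It suffices to check pure deviations: no mixture beats the best pure reply.
    row_regret = (A @ tau).max() - row_payoff
    col_regret = (sigma @ B).max() - col_payoff
    return max(row_regret, col_regret)

# Matching pennies: uniform mixing is the unique equilibrium (regret 0),
# while any pure profile leaves one player with a profitable deviation.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
uniform = np.array([0.5, 0.5])
print(nash_regret(A, B, uniform, uniform))  # → 0.0
```

Verifying a candidate is polynomial; the PPAD-hardness lives entirely in producing the candidate.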
Tractable special cases:
- Zero-sum games (\(B = -A\)): linear programming. (Von Neumann’s minimax theorem; the case where most early algorithmic work happened.)
- Known support: if the support of the equilibrium is given, the rest reduces to linear programming.
- Various structured cases — graphical games with bounded treewidth, certain symmetric games — admit polynomial algorithms.
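The zero-sum bullet can be made concrete: von Neumann's minimax theorem turns the row player's maximin problem into a linear programme. A sketch assuming scipy is available (the formulation is the textbook one; the function name is mine):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Row player's maximin strategy and the value of the zero-sum game A.

    Maximise v subject to (A^T sigma)_j >= v for every column j, with sigma
    a probability vector; linprog minimises, so we minimise -v.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # variables: (sigma, v)
    A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - (A^T sigma)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]                                   # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]      # sigma >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Rock-paper-scissors: value 0, and the maximin strategy mixes uniformly.
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
sigma, v = zero_sum_value(rps)
```

The dual of the same LP recovers the column player's strategy, which is one way of seeing why zero-sum games are easy while general-sum games are not.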
The general case is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou 2009; Chen and Deng 2006). PPAD — Polynomial Parity Argument on Directed graphs — is a complexity class of search problems guaranteed to have a solution by parity or fixed-point arguments. The canonical example is Brouwer’s fixed-point theorem: any continuous map from a compact convex set to itself has a fixed point, and finding one is in PPAD.
PPAD sits between P and NP-complete in a particular way: a PPAD search problem always has a solution (so it is not NP-complete in the same way as SAT, where the question is whether any solution exists), but no polynomial-time algorithm is known for finding one. The PPAD-completeness of Nash equilibrium made it the canonical natural problem in the class.
2 Concisely represented games
The PPAD-completeness result above is stated for games given by their explicit payoff matrices, which are exponentially large in the number of players. Many practical games are concisely represented — graphical games, action-graph games, congestion games — where the description size is polynomial in the number of agents. The complexity story changes under concise representation, and not obviously for the better, since the same search problem must now be measured against a much smaller input (Schoenebeck and Vadhan 2009).
3 Approximate Nash
If we settle for an \(\epsilon\)-equilibrium — no player gains more than \(\epsilon\) by deviating, instead of literally zero gain — the problem gets easier. Lipton, Markakis, and Mehta (2003) gave an algorithm running in \(n^{O(\log n / \epsilon^2)}\) time for fixed \(\epsilon\). That running time is quasi-polynomial: faster than the exponential \(2^n\) blow-up but slower than any polynomial \(n^c\). The trick was to search only over small supports — small subsets of pure strategies that might appear in the equilibrium — and check each candidate via linear programming.
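To make the small-support trick tangible, here is a toy support-enumeration search, assuming nothing beyond numpy (the function name and structure are mine). For each candidate support pair it solves the linear indifference conditions and keeps any solution surviving the nonnegativity and deviation checks; the actual Lipton-Markakis-Mehta argument instead samples supports of size \(O(\log n / \epsilon^2)\) and settles for \(\epsilon\)-equilibria, whereas this toy version is exact and exponential in the support bound:

```python
import itertools
import numpy as np

def support_enumeration(A, B, max_support=2, tol=1e-9):
    """Enumerate small supports; solve each candidate's linear conditions.

    For a fixed support pair the equilibrium conditions are linear: each
    player is indifferent across their own support, probabilities sum to
    one, and no action outside the support pays more.
    """
    m, n = A.shape
    for k in range(1, max_support + 1):
        for S in itertools.combinations(range(m), k):
            for T in itertools.combinations(range(n), k):
                # Unknowns: k probabilities plus the common payoff u.
                M1 = np.zeros((k + 1, k + 1))
                M1[:k, :k] = A[np.ix_(S, T)]    # rows in S indifferent under tau
                M1[:k, k] = -1.0
                M1[k, :k] = 1.0                 # probabilities sum to one
                M2 = np.zeros((k + 1, k + 1))
                M2[:k, :k] = B[np.ix_(S, T)].T  # columns in T indifferent under sigma
                M2[:k, k] = -1.0
                M2[k, :k] = 1.0
                rhs = np.zeros(k + 1)
                rhs[k] = 1.0
                try:
                    tau_s = np.linalg.solve(M1, rhs)
                    sig_s = np.linalg.solve(M2, rhs)
                except np.linalg.LinAlgError:
                    continue
                tau = np.zeros(n)
                tau[list(T)] = tau_s[:k]
                sigma = np.zeros(m)
                sigma[list(S)] = sig_s[:k]
                if (sigma >= -tol).all() and (tau >= -tol).all() \
                        and (A @ tau).max() <= sigma @ A @ tau + tol \
                        and (sigma @ B).max() <= sigma @ B @ tau + tol:
                    return sigma, tau
    return None

# Matching pennies: the only equilibrium mixes uniformly, support size 2.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
sigma, tau = support_enumeration(A, -A)
```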
Whether a polynomial-time approximation scheme exists for constant \(\epsilon\) is, as far as I know, still open; a fully polynomial-time scheme is ruled out by PPAD-hardness, since driving \(\epsilon\) down to inverse-polynomial accuracy is as hard as solving the exact problem.
Continuous local search (CLS) (Daskalakis and Papadimitriou 2011) is a related complexity class for problems where iterative local improvement of a continuous candidate converges to a local optimum — gradient descent is the prototype. Some natural game-theoretic problems probably have this flavour.
5 Algorithmic mechanism design
Designing the rules of the game — auctions, pricing, voting mechanisms — has its own complexity story. Some problems are NP-hard; some reduce to standard optimisation. The general programme treats mechanism design as algorithm design with an extra incentive-compatibility constraint.
6 Foundations under computability
When agents are themselves computational and reason about each other, classical game theory’s assumptions start to wobble. Two threads worth knowing about:
Fallenstein, Taylor, and Christiano (2015) develop reflective oracles — a formalism in which agents can in some sense reason about each other’s reasoning, addressing the diagonal/self-reference issues inherent in agents simulating each other. Useful when we want classical-style equilibrium theory to apply to AI agents reasoning about other AI agents.
Wyeth et al. (2025) extend this to limit-computable equilibria for extensive-form games, connecting to the Bayesian “grain of truth” tradition of learning in unknown games under computational constraints.
7 So do we even get to Nash equilibrium anyway?
If equilibria are intractable to compute, real agents probably do not reach them. Observed strategic play is mostly off-equilibrium, approximate, or learned through interaction — not the closed-form predictions of classical theory. This is the load-bearing observation behind behavioural game theory and learning-based approaches like multi-agent RL.
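One concrete instance of “learned through interaction” is fictitious play, in which each player best-responds to the opponent’s empirical history. Robinson (1951) proved that the empirical frequencies converge to minimax play in zero-sum games, though possibly slowly. A minimal sketch (function name mine):

```python
import numpy as np

def fictitious_play(A, steps=20000, seed=0):
    """Each player best-responds to the opponent's empirical mixture so far.

    A is the row player's payoff in a zero-sum game (the column player
    receives -A); we return both players' empirical strategy frequencies.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[rng.integers(m)] += 1   # arbitrary opening moves
    col_counts[rng.integers(n)] += 1
    for _ in range(steps):
        row_counts[np.argmax(A @ col_counts)] += 1  # row maximises A
        col_counts[np.argmin(row_counts @ A)] += 1  # column minimises A
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching pennies: both empirical mixtures drift toward (1/2, 1/2).
sigma, tau = fictitious_play(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

Note that the players never compute an equilibrium; equilibrium emerges, approximately, as a statistic of their history.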
Computational complexity matters even outside computer science — see Aaronson (2011) for the careful version of this argument.
Kaznatcheev (2020) makes a related observation from biology: frequency-dependent selection in evolutionary dynamics is exponentially more powerful than frequency-independent selection, so the computational reach of evolution itself is shaped by game-theoretic structure.
8 Pointers
- How long until we approach Nash equilibrium (also covers Aumann’s correlated equilibrium)
- Fortnow and Gasarch, The complexity of Nash equilibrium
- What is PPAD
