Game complexity

2019-06-07 — 2026-04-29

Wherein the Nash equilibrium, whose existence is guaranteed by fixed-point arguments, is established as PPAD-complete, while correlated equilibrium is seen to admit a linear-programming solution.

compsci
economics
game theory
incentive mechanisms
markets
Figure 1: The complexity of finding game-theory solutions, as seen in cooperation, mechanism design, adversarial training, and so on.

A question in classical game theory: are the equilibria the classical theory promises us actually computable? Finding a Nash equilibrium of a general two-player game is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou 2009; Chen and Deng 2006), which means no rational agent could plausibly compute one “fast enough” on a non-trivial game even though Nash’s theorem guarantees existence. This is the stylised fact behind a lot of bounded-rationality and learning-based approaches; see the broader landscape of fixes and adjacent fields.

1 Nash equilibrium

The basic problem: given payoff matrices \(A, B\) for a two-player game, find mixed strategies \(\sigma, \tau\) such that neither player can profitably deviate. Nash 1950 says they exist; the computational problem is to find them.
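
In symbols, writing expected payoffs bilinearly, \((\sigma, \tau)\) is a Nash equilibrium when each strategy is a best response to the other:

\[
\sigma^{\top} A \tau \ge \sigma'^{\top} A \tau
\quad\text{and}\quad
\sigma^{\top} B \tau \ge \sigma^{\top} B \tau'
\qquad \text{for all mixed strategies } \sigma', \tau'.
\]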

Tractable special cases:

  • Zero-sum games (\(B = -A\)): linear programming. (Von Neumann’s minimax theorem; the case where most early algorithmic work happened.)
  • Known support: if the support of the equilibrium is given, the rest reduces to linear programming.
  • Various structured cases — graphical games with bounded treewidth, certain symmetric games — admit polynomial algorithms.
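
To make the zero-sum case concrete, here is a minimal sketch of the minimax linear program, assuming `numpy` and `scipy` are available (`zero_sum_value` is an illustrative name, not a standard API):

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and a maximin row strategy for the zero-sum game with row payoffs A."""
    m, n = A.shape
    # Variables: x_1..x_m (row mixed strategy) and v (game value); maximise v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimises, so minimise -v
    # For every column j: v - sum_i A[i, j] x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # x is a probability vector; v is free
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1], res.x[:m]

# Matching pennies: value 0 and the uniform strategy, as the minimax theorem predicts
value, x = zero_sum_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```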

The general case is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou 2009; Chen and Deng 2006). PPAD — Polynomial Parity Argument on Directed graphs — is a complexity class of search problems guaranteed to have a solution by parity or fixed-point arguments. The canonical example is Brouwer’s fixed-point theorem: any continuous map from a compact convex set to itself has a fixed point, and finding one is in PPAD.

PPAD sits between P and NP-complete in a particular way: the search problem is total (a solution is guaranteed to exist, so it is not NP-complete in the same way as SAT, where the question is whether any solution exists), but no polynomial-time algorithm is known for finding one. The PPAD-completeness of Nash made Nash equilibrium the canonical natural problem in this class.

2 Concisely represented games

The PPAD-completeness result above is stated for games given by their explicit payoff matrices, a representation that is exponentially large in the number of players. Many practical games are concisely represented — graphical games, action-graph games, congestion games — where the description size is polynomial in the number of agents. The complexity story changes under concise representation, and not always for the better (Schoenebeck and Vadhan 2009).

3 Approximate Nash

If we settle for an \(\epsilon\)-equilibrium — no player gains more than \(\epsilon\) by deviating, instead of literally zero gain — the problem gets easier. Lipton, Markakis, and Mehta (2003) gave an algorithm running in \(n^{O(\log n / \epsilon^2)}\) time for fixed \(\epsilon\). That running time is quasi-polynomial: faster than the exponential \(2^n\) blow-up but slower than any polynomial \(n^c\). The trick was to search only over small supports — small subsets of pure strategies that might appear in the equilibrium — and check each candidate via linear programming.
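
A brute-force sketch of the small-support idea, in the uniform-mixture variant of the argument (the function name and the exhaustive enumeration are illustrative; enumerating the \(n^{O(k)}\) support pairs is exactly where the quasi-polynomial running time comes from):

```python
import itertools
import numpy as np

def small_support_eps_nash(A, B, k, eps):
    """Exhaustive search over uniform mixtures on size-k multisets of pure strategies."""
    m, n = A.shape
    for S in itertools.combinations_with_replacement(range(m), k):
        x = np.bincount(S, minlength=m) / k
        for T in itertools.combinations_with_replacement(range(n), k):
            y = np.bincount(T, minlength=n) / k
            # eps-equilibrium: no pure deviation gains more than eps for either player
            if (A @ y).max() - x @ A @ y <= eps and (x @ B).max() - x @ B @ y <= eps:
                return x, y
    return None

# A small coordination game has an exact equilibrium on a size-2 support
A = np.array([[1.0, 0.0], [0.0, 1.0]])
x, y = small_support_eps_nash(A, A, k=2, eps=0.1)
```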

Whether a fully polynomial-time approximation scheme exists is, as far as I know, still open; PPAD-hardness blocks it for arbitrarily tight \(\epsilon\).

Continuous local search (CLS) (Daskalakis and Papadimitriou 2011) is a related complexity class for problems where iterative local improvement of a continuous candidate converges to a local optimum — gradient descent is the prototype. Some natural game-theoretic problems probably have this flavour.
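
Gradient descent, the prototype member, in a few lines (a toy instance, not tied to any particular game):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Iterative local improvement of a continuous candidate."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# minimise f(x) = (x - 3)^2, whose only local (hence global) optimum is x = 3
x_star = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```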

4 Correlated equilibrium

Aumann’s correlated equilibrium generalises Nash by letting players coordinate via a public randomising device.

Two cars approach an intersection. Both want to go (good); if both go they crash (very bad); if both stop they waste time (mediocre). A traffic light is a coin flip with private signals: half the time it tells A “go” and B “stop”, half the time vice versa. Each player follows her signal because given what she was told it is her best response — A told “go” knows B was told “stop”, so going is fine; A told “stop” knows B was told “go”, so stopping is fine. The joint distribution over (go, stop) and (stop, go) is a correlated equilibrium.

It needs a public correlation device — a traffic light, a referee, a cryptographic commitment scheme — but once that is in place, the computational picture is much friendlier than Nash: a correlated equilibrium can be found by linear programming in polynomial time.
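
A sketch of that linear program on the traffic-light game above, assuming `scipy`. The payoff numbers are illustrative choices consistent with the story (crashing is worst, both stopping is mediocre), and the objective picks out the welfare-maximising correlated equilibrium:

```python
import numpy as np
from scipy.optimize import linprog

# "Chicken"/traffic payoffs (illustrative numbers): actions 0 = go, 1 = stop.
U1 = np.array([[0.0, 5.0],
               [1.0, 4.0]])
U2 = U1.T  # the game is symmetric

def max_welfare_ce(U1, U2):
    """Welfare-maximising correlated equilibrium of a bimatrix game, by LP."""
    m, n = U1.shape
    rows = []
    # Player 1: following recommendation a must beat every deviation a2, in expectation
    for a in range(m):
        for a2 in range(m):
            if a2 == a:
                continue
            row = np.zeros(m * n)
            for b in range(n):
                row[a * n + b] = U1[a2, b] - U1[a, b]
            rows.append(row)
    # Player 2, symmetrically over columns
    for b in range(n):
        for b2 in range(n):
            if b2 == b:
                continue
            row = np.zeros(m * n)
            for a in range(m):
                row[a * n + b] = U2[a, b2] - U2[a, b]
            rows.append(row)
    res = linprog(-(U1 + U2).ravel(),             # maximise total welfare
                  A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=np.ones((1, m * n)), b_eq=[1.0],
                  bounds=[(0, None)] * (m * n))
    return res.x.reshape(m, n)

P = max_welfare_ce(U1, U2)
```

For these payoffs the optimum puts no mass on the crash and improves on the fifty-fifty traffic light by mixing in some probability of both stopping.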

Coarse correlated equilibrium relaxes further: each player only has to prefer committing to follow the signal before learning what it will be (rather than after, which is correlated equilibrium). Coarse correlated equilibria are what natural no-regret learning dynamics converge to in the limit, and they are cheaper still to compute. These names are bad and confusing.
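
A minimal sketch of that convergence, assuming `numpy`: two multiplicative-weights learners in self-play on matching pennies, with the time-averaged joint play checked against the coarse-correlated condition (no fixed-action deviation should gain much):

```python
import numpy as np

def mwu_self_play(U1, U2, T=10000, eta=0.01):
    """Two multiplicative-weights learners in self-play; returns the
    time-averaged joint distribution of play."""
    m, n = U1.shape
    w1, w2 = np.ones(m), np.ones(n)
    joint = np.zeros((m, n))
    for _ in range(T):
        x, y = w1 / w1.sum(), w2 / w2.sum()
        joint += np.outer(x, y)
        w1 *= np.exp(eta * (U1 @ y))   # reweight row actions by payoff vs y
        w2 *= np.exp(eta * (x @ U2))   # reweight column actions by payoff vs x
    return joint / T

# Matching pennies; measure each player's gain from the best fixed deviation
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
P = mwu_self_play(A, -A)
eps1 = (A @ P.sum(axis=0)).max() - (P * A).sum()
eps2 = (P.sum(axis=1) @ (-A)).max() - (P * (-A)).sum()
```

The standard no-regret bound makes `eps1` and `eps2` shrink like the average regret, so the empirical joint play is an approximate coarse correlated equilibrium.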

Classical theory privileges Nash, the hardest of these to compute. Natural learning dynamics converge to coarse correlated equilibrium, the cheapest. Whatever observed strategic play tracks, it is more likely to be the latter than the former.

For market equilibria specifically — also computable as fixed points, also generally hard — see markets complexity.

5 Algorithmic mechanism design

Designing the rules of the game — auctions, pricing, voting mechanisms — has its own complexity story. Some problems are NP-hard; some reduce to standard optimisation. The general programme treats mechanism design as algorithm design with an extra incentive-compatibility constraint.

6 Foundations under computability

When agents are themselves computational and reason about each other, classical game theory’s assumptions start to wobble. Two threads worth knowing about:

Fallenstein, Taylor, and Christiano (2015) develops reflective oracles — a formalism in which agents can in some sense reason about each other’s reasoning, addressing the diagonal/self-reference issues inherent in agents simulating each other. Useful when we want classical-style equilibrium theory to apply to AI agents reasoning about other AI agents.

Wyeth et al. (2025) extends this into limit-computable equilibria for extensive-form games, connecting to the Bayesian “grain of truth” tradition of learning in unknown games under computational constraints.

7 So do we even get to Nash equilibrium anyway?

If equilibria are intractable to compute, real agents probably do not reach them. Observed strategic play is mostly off-equilibrium, approximate, or learned through interaction — not the closed-form predictions of classical theory. This is the load-bearing observation behind behavioural game theory and learning-based approaches like multi-agent RL.

Computational complexity matters even outside computer science — see Aaronson (2011) for the careful version of this argument.

Kaznatcheev (2020) makes a related observation from biology: frequency-dependent selection in evolutionary dynamics is exponentially more powerful than frequency-independent selection, so the computational reach of evolution itself is shaped by game-theoretic structure.

8 Pointers

How long until we approach Nash equilibrium (also covers Aumann’s correlated equilibrium).

9 References

Aaronson. 2011. “Why Philosophers Should Care About Computational Complexity.”
Axtell. 2005. “The Complexity of Exchange.” The Economic Journal.
Chen, and Deng. 2006. “Settling the Complexity of Two-Player Nash Equilibrium.” In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS’06).
Daskalakis, Goldberg, and Papadimitriou. 2009. “The Complexity of Computing a Nash Equilibrium.” SIAM Journal on Computing.
Daskalakis, and Papadimitriou. 2011. “Continuous Local Search.” In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms.
Fallenstein, Taylor, and Christiano. 2015. “Reflective Oracles: A Foundation for Classical Game Theory.”
Kaznatcheev. 2020. “Evolution Is Exponentially More Powerful with Frequency-Dependent Selection.” bioRxiv.
Lipton, Markakis, and Mehta. 2003. “Playing Large Games Using Simple Strategies.” In Proceedings of the 4th ACM Conference on Electronic Commerce.
Schoenebeck, and Vadhan. 2009. “The Computational Complexity of Nash Equilibria in Concisely Represented Games.”
Wyeth, Hutter, Leike, et al. 2025. “Limit-Computable Grains of Truth for Arbitrary Computable Extensive-Form (Un)Known Games.”
Ye. 2008. “A Path to the Arrow–Debreu Competitive Market Equilibrium.” Mathematical Programming.