Probability, Cox-style

2023-10-16 — 2026-01-07

Wherein an epistemic foundation of probability is presented: Cox’s desiderata are shown to yield the product and sum rules, Bayes’ rule is shown to follow, and the result is contrasted with Kolmogorov’s measure-theoretic approach.

algebra
Bayes
functional analysis
probability

An alternative foundation of probability, one derived from “epistemic rather than aleatoric” concerns. Or so I am told.

Cox (1961) provides a logical derivation of probability as the unique calculus for plausible reasoning, contrasting with the Kolmogorov axiomatic foundation commonly taught in measure-theoretic probability courses. Rather than starting with set-based axioms about measures on a σ-algebra, Cox derives the rules of probability from desiderata about how degrees of belief should behave when reasoning under uncertainty. This positions probability as an extension of classical logic to plausible inference.

Kolmogorov’s axioms define probability in terms of a measure on a sample space: non-negativity, normalization (total probability 1), and (countable) additivity for disjoint events. These axioms are interpretation-agnostic and extremely powerful for constructing the mathematical probability theory used across analysis, stochastic processes, and statistics. Kolmogorov did not attempt to justify why these rules should govern uncertain reasoning; he simply took them as the foundation of the mathematical system we call probability.
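
For reference, the axioms in the usual notation (nothing beyond the standard textbook statement):

```latex
% Kolmogorov: a probability measure P on a measurable space (\Omega, \mathcal{F})
\begin{align*}
  &P(A) \ge 0 \quad \text{for all } A \in \mathcal{F}, && \text{(non-negativity)}\\
  &P(\Omega) = 1, && \text{(normalization)}\\
  &P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
    \quad \text{for pairwise disjoint } A_i, && \text{(countable additivity)}
\end{align*}
```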

Cox’s approach, by contrast, starts from qualitative principles about plausibility: beliefs should be orderable; the plausibility of a conjunction should depend only on the plausibility of each component given the other; and the plausibility of a negation should be a function of the plausibility of the original proposition. From these intuitively reasonable constraints on any rational measure of belief, Cox shows that the only consistent numerical representation is one that obeys the standard product and sum rules of probability. In effect, probability emerges as the consistent extension of Boolean logic to handle uncertainty quantitatively.
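
A sketch of the functional-equation step, roughly following Jaynes’s presentation of the argument; the symbols $w$, $F$ and $S$ are just names for the assumed functional relationships, and the regraduation details are glossed over:

```latex
% Cox-style sketch: w is the raw plausibility, F and S the assumed functional forms,
% and p a monotone regraduation of w.
\begin{align*}
  &w(AB \mid C) = F\bigl(w(A \mid C),\, w(B \mid AC)\bigr)
    && \text{(conjunction desideratum)}\\
  &F\bigl(F(x, y),\, z\bigr) = F\bigl(x,\, F(y, z)\bigr)
    && \text{(forced by } (AB)D = A(BD)\text{)}\\
  &p(AB \mid C) = p(A \mid C)\, p(B \mid AC)
    && \text{(product rule, after regraduation)}\\
  &p(A \mid C) + p(\bar{A} \mid C) = 1
    && \text{(sum rule, from } w(\bar{A} \mid C) = S\bigl(w(A \mid C)\bigr)\text{)}
\end{align*}
```

The negation step pins down $S$ only up to an exponent; a further regraduation absorbs it, which is why the sum rule appears in its familiar normalized form.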

Cox’s foundation justifies probability theory as a logic of inference—it applies to propositions about any uncertain system, whether or not there is an underlying frequentist interpretation or repeatable experiment.
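
As a toy illustration of that claim (all names and numbers hypothetical), here is a single-shot update on a proposition about one particular item, using only the product, sum, and Bayes rules discussed here:

```python
# Hypothetical one-off proposition: "this particular component is defective",
# updated on a single test result. All numbers are made up for illustration.

prior = 0.01            # p(defective) before seeing the test
p_pos_given_def = 0.95  # p(test positive | defective)
p_pos_given_ok = 0.10   # p(test positive | not defective)

# Sum rule over the two exhaustive cases: marginal probability of a positive test.
p_pos = p_pos_given_def * prior + p_pos_given_ok * (1 - prior)

# Bayes' rule: posterior plausibility of the proposition given the evidence.
posterior = p_pos_given_def * prior / p_pos
print(f"p(defective | positive test) = {posterior:.3f}")  # ≈ 0.088
```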

Where Kolmogorov’s framework is silent on interpretation and grounded in measure theory, Cox’s theorem links probability directly to rational belief and inference, making Bayes’ rule a consequence of consistency requirements.
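
Concretely, once the product rule is in hand, Bayes’ rule is one line: conjunction is symmetric, so apply the product rule both ways and divide:

```latex
% Bayes' rule from the symmetry of conjunction plus the product rule
\begin{align*}
  p(A \mid C)\, p(B \mid AC) &= p(AB \mid C) = p(B \mid C)\, p(A \mid BC)\\
  \Longrightarrow\quad p(A \mid BC) &= \frac{p(A \mid C)\, p(B \mid AC)}{p(B \mid C)},
    \qquad p(B \mid C) > 0.
\end{align*}
```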

Cox’s formulation (nearly contemporary with Kolmogorov’s) is less rigorous in measure-theoretic terms: it assumes real-valued, continuous plausibility measures and a Boolean structure for propositions, and it traditionally does not handle countable additivity or infinite domains with the same care as Kolmogorov’s measure theory. I should look into how to reconcile these aspects better.

1 Incoming

Cox’s theorem – Rising Entropy (“Cox’s famous theorem says that if your way of reasoning is not in some sense isomorphic to Bayesianism, then you are violating one of the following ten basic principles. I’ll derive the theorem in this post.”)

2 References

Climenhaga. 2020. “The Structure of Epistemic Probabilities.” Philosophical Studies.
Cox. 1946. “Probability, Frequency, and Reasonable Expectation.” American Journal of Physics.
———. 1961. The Algebra of Probable Inference.
Jaynes. 1963. “Review: The Algebra of Probable Inference.” American Journal of Physics.
Mečíř. 2020. “Foundations for Conditional Probability.”
Rényi. 1970. Foundations of Probability.
Taraldsen. 2019. “Conditional Probability in Rényi Spaces.”
Van Horn. 2003. “Constructing a Logic of Plausible Inference: A Guide to Cox’s Theorem.” International Journal of Approximate Reasoning.