Welcome to the probability inequality mines!
Concentration inequalities are about when something in your process (measurement, estimation) means that you can be pretty sure that most of your probability mass is somewhere in particular, e.g. being 80% sure I am only 20% wrong.
As undergraduates we run into the central limit theorem, but there are many more diverse ways of keeping track of where our probability mass is, or at least most of it. This idea is a basic workhorse in univariate probability and turns out to be yet more essential in multivariate and matrix probability, as seen in matrix factorization, compressive sensing, PAC bounds and suchlike.
Background
Overviews include
- Roman Vershynin’s High-Dimensional Probability: An Introduction with Applications in Data Science (Vershynin 2018)
- Thomas Lumley’s super simple intro to chaining and controlling maxima.
- DasGupta, Asymptotic Theory of Statistics and Probability (DasGupta 2008) is practical and, despite its name, introduces some basic non-asymptotic inequalities
- Raginsky and Sason, Concentration of Measure Inequalities in Information Theory, Communications and Coding (Raginsky and Sason 2012)
- Tropp, An Introduction to Matrix Concentration Inequalities (Tropp 2015). High-dimensional data! Free!
- Boucheron, Bousquet & Lugosi, Concentration inequalities (Boucheron, Bousquet, and Lugosi 2004) (Clear and brisk but missing some newer stuff)
- Boucheron, Lugosi & Massart, Concentration inequalities: a nonasymptotic theory of independence (Boucheron, Lugosi, and Massart 2013). Haven’t read it yet.
- Massart, Concentration inequalities and model selection (Massart 2007). Clear, focussed, but brusque. Depressingly, by being applied it also demonstrates the limitations of its chosen techniques, which seem sweet in application but bitter in the required assumptions. (Massart 2000) is an earlier draft.
- Lugosi’s Concentration-of-measure lecture notes are good, especially for the treatment of Efron-Stein inequalities. This taxonomy is used in his Combinatorial statistics notes.
- Terry Tao’s lecture notes
- Divergence in everything: erasure divergence and concentration inequalities
- Talagrand’s opus, commonly credited with kicking off the modern fad, particularly the part due to the chaining method (Talagrand 1995).
- Luca Trevisan wrote an example-driven explanation of Talagrand’s generic chaining.
Markov
For any nonnegative random variable $X$ and any $t>0$,
$$\mathbb{P}\{X \geq t\} \leq \frac{\mathbb{E}X}{t}.$$
Corollary: if $\phi$ is a strictly monotonically increasing non-negative-valued function, then for any random variable $X$ and real number $t$,
$$\mathbb{P}\{X \geq t\} = \mathbb{P}\{\phi(X) \geq \phi(t)\} \leq \frac{\mathbb{E}\phi(X)}{\phi(t)}.$$
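As a sanity check, here is a minimal Monte Carlo sketch of Markov's inequality; the exponential distribution and the NumPy usage are my own illustrative choices, not anything prescribed above.

```python
# A minimal Monte Carlo check of Markov's inequality, using exponential
# variates (nonnegative, E[X] = 1) purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=100_000)

for t in [1.0, 2.0, 5.0]:
    empirical = (x >= t).mean()     # P(X >= t), estimated
    markov = x.mean() / t           # E[X] / t
    print(f"t={t}: empirical {empirical:.4f} <= Markov bound {markov:.4f}")
```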
Chebychev
A corollary of Markov’s bound with $\phi(x)=(x-\mathbb{E}X)^2$ is Chebyshev’s: if $X$ is an arbitrary random variable and $t>0$, then
$$\mathbb{P}\{|X-\mathbb{E}X| \geq t\} = \mathbb{P}\{(X-\mathbb{E}X)^2 \geq t^2\} \leq \frac{\operatorname{Var}X}{t^2}.$$
More generally, taking $\phi(x)=|x|^q$ for any $q>0$ we have
$$\mathbb{P}\{|X-\mathbb{E}X| \geq t\} \leq \frac{\mathbb{E}|X-\mathbb{E}X|^q}{t^q}.$$
We can choose $q$ to optimize the obtained upper bound for the problem in hand.
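As an illustration of that last point, here is a rough sketch of picking $q$ by brute force; the Gaussian variates and the grid of $q$ values are assumptions of mine, and empirical high moments are noisy, so treat it as a toy.

```python
# A sketch of optimizing the moment order q in the generalized bound
# P(|X - EX| >= t) <= E|X - EX|^q / t^q, with simulated Gaussian moments
# standing in for analytic ones.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
t = 3.0

qs = np.linspace(0.5, 10.0, 100)
bounds = [np.mean(np.abs(x - x.mean()) ** q) / t ** q for q in qs]
best = int(np.argmin(bounds))
print(f"best q ~ {qs[best]:.2f}, bound ~ {bounds[best]:.4f}")
print(f"classical q=2 (Chebyshev) bound: {x.var() / t**2:.4f}")
print(f"empirical tail:                  {(np.abs(x - x.mean()) >= t).mean():.4f}")
```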
Hoeffding
Another one about sums of RVs, in particular for bounded RVs with potentially different bounds.
Let $X_1, \dots, X_n$ be independent bounded random variables, in the sense that $a_i \leq X_i \leq b_i$ almost surely for all $i$. Define the sum of these variables $S_n = \sum_{i=1}^n X_i$. Let $\mathbb{E}S_n$ be the expected value of $S_n$.
Hoeffding’s inequality states that for any $t>0$,
$$\mathbb{P}\{S_n - \mathbb{E}S_n \geq t\} \leq \exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i-a_i)^2}\right)$$
and
$$\mathbb{P}\{|S_n - \mathbb{E}S_n| \geq t\} \leq 2\exp\left(-\frac{2t^2}{\sum_{i=1}^n (b_i-a_i)^2}\right).$$
For example, consider independent random variables taking values in $[0,1]$. For $a_i=0$ and $b_i=1$, Hoeffding’s inequality becomes:
$$\mathbb{P}\{S_n - \mathbb{E}S_n \geq t\} \leq \exp\left(-\frac{2t^2}{n}\right).$$
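A small simulation makes the bound concrete; the Bernoulli(1/2) summands and the particular $n$, $t$ values are my arbitrary choices.

```python
# Compare the empirical upper tail of a Bernoulli(1/2) sum against the
# Hoeffding bound exp(-2 t^2 / n); values lie in [0, 1], so b_i - a_i = 1.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 100, 100_000
s = rng.integers(0, 2, size=(trials, n)).sum(axis=1)  # S_n, with E S_n = n / 2

for t in [5, 10, 15]:
    empirical = (s - n / 2 >= t).mean()
    bound = np.exp(-2 * t**2 / n)
    print(f"t={t}: empirical {empirical:.5f} <= bound {bound:.5f}")
```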
Cool, except my variates are rarely bounded. What do I do then? Probably Chernoff or Bernstein bounds.
Chernoff
Taking $\phi(x)=e^{sx}$ where $s>0$, for any random variable $X$, and any $t \in \mathbb{R}$, we have
$$\mathbb{P}\{X \geq t\} = \mathbb{P}\{e^{sX} \geq e^{st}\} \leq \frac{\mathbb{E}e^{sX}}{e^{st}}.$$
Here $s$ is a free parameter we choose to make the bound as tight as possible.
That’s what I was taught in class. The lazy (e.g., me) might not notice that a useful bound for sums of RVs can be derived from this result, because it tells us about the Laplace transform (i.e. the moment-generating function), and the moment-generating function of a sum of independent variables factorizes into a product.
For independent random variables $X_1, \dots, X_n$, let $S_n = \sum_{i=1}^n X_i$. We have
$$\mathbb{P}\{S_n \geq t\} \leq e^{-st}\prod_{i=1}^n \mathbb{E}e^{sX_i}.$$
If $X_1, \dots, X_n$ are i.i.d., then
$$\mathbb{P}\{S_n \geq t\} \leq e^{-st}\left(\mathbb{E}e^{sX_1}\right)^n.$$
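To make the “free parameter” point concrete, here is a sketch that optimizes $s$ numerically for a sum of i.i.d. standard normals, whose moment-generating function $\mathbb{E}e^{sX_1} = e^{s^2/2}$ I am taking as known; the grid and the values of $n$ and $t$ are arbitrary.

```python
# Numerically optimize the Chernoff parameter s for S_n = sum of n i.i.d.
# standard normals, using the known MGF E exp(s X_1) = exp(s^2 / 2).
import numpy as np

n, t = 10, 8.0
s_grid = np.linspace(1e-3, 5.0, 1000)
bounds = np.exp(-s_grid * t + n * s_grid**2 / 2)   # e^{-st} (E e^{s X_1})^n
s_best = s_grid[np.argmin(bounds)]

print(f"optimal s ~ {s_best:.3f}  (analytically t/n = {t / n:.3f})")
print(f"Chernoff bound ~ {bounds.min():.5f}; exp(-t^2/(2n)) = {np.exp(-t**2 / (2 * n)):.5f}")
```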
Bernstein inequality
Another one about bounded RVs. For independent zero-mean random variables $X_1, \dots, X_n$ with $|X_i| \leq M$ almost surely, for all $t>0$,
$$\mathbb{P}\left\{\sum_{i=1}^n X_i > t\right\} \leq \exp\left(-\frac{t^2/2}{\sum_{i=1}^n \mathbb{E}X_i^2 + Mt/3}\right).$$
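The payoff over Hoeffding is the variance term in the denominator. A tiny numeric comparison, with parameters I have made up for illustration ($n$ centred variables bounded by $M$ with per-variable variance $v$):

```python
# Compare Bernstein and Hoeffding tail bounds for n centred variables bounded
# by M with per-variable variance v; Bernstein wins when v << M^2.
import numpy as np

n, M, v, t = 1000, 1.0, 0.01, 20.0
bernstein = np.exp(-(t**2 / 2) / (n * v + M * t / 3))
hoeffding = np.exp(-2 * t**2 / (n * (2 * M) ** 2))   # each range b_i - a_i = 2M
print(f"Bernstein: {bernstein:.2e}   Hoeffding: {hoeffding:.2e}")
```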
Efron-Stein
Do these results follow from Stein’s method? They have the right form, but the derivation is not clear to me. In fact, I forgot to record which source I was cribbing these from.
Let $f: \mathcal{X}^n \to \mathbb{R}$ be a real-valued measurable function of $n$ variables. Efron-Stein inequalities concern the difference between the random variable $Z = f(X_1, \dots, X_n)$ and its expected value $\mathbb{E}Z$ when $X_1, \dots, X_n$ are arbitrary independent random variables.
Define $\mathbb{E}_i$ to be the expected value with respect to the variable $X_i$ only, that is, $\mathbb{E}_i Z = \mathbb{E}[Z \mid X_1, \dots, X_{i-1}, X_{i+1}, \dots, X_n]$. Then
$$\operatorname{Var}(Z) \leq \sum_{i=1}^n \mathbb{E}\left[(Z - \mathbb{E}_i Z)^2\right].$$
Now, let $X_1', \dots, X_n'$ be an independent copy of $X_1, \dots, X_n$, and write $Z_i' = f(X_1, \dots, X_{i-1}, X_i', X_{i+1}, \dots, X_n)$. Alternatively,
$$\operatorname{Var}(Z) \leq \frac{1}{2}\sum_{i=1}^n \mathbb{E}\left[(Z - Z_i')^2\right].$$
Nothing here seems to constrain the variables to be real-valued, merely the function $f$, but apparently these inequalities do not work for matrix variables as written; we need matrix Efron-Stein results for that.
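Here is a Monte Carlo sketch of the second form, with $f$ taken (arbitrarily, for illustration) to be the maximum of $n$ uniform variables; the bound comfortably dominates the empirical variance.

```python
# Monte Carlo check of Var(Z) <= (1/2) sum_i E[(Z - Z_i')^2] for
# Z = f(X_1, ..., X_n) = max_i X_i with i.i.d. uniform inputs.
import numpy as np

rng = np.random.default_rng(3)
n, trials = 20, 50_000
x = rng.uniform(size=(trials, n))
x_prime = rng.uniform(size=(trials, n))    # independent copy

z = x.max(axis=1)
es_bound = 0.0
for i in range(n):
    x_i = x.copy()
    x_i[:, i] = x_prime[:, i]              # resample coordinate i only
    z_i = x_i.max(axis=1)
    es_bound += 0.5 * np.mean((z - z_i) ** 2)

print(f"empirical Var(Z) ~ {z.var():.5f} <= Efron-Stein bound ~ {es_bound:.5f}")
```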
Khintchine
Let us copy from Wikipedia:
Heuristically: if we pick $n$ complex numbers $x_1, \dots, x_n \in \mathbb{C}$ and add them together, each multiplied by jointly independent random signs $\pm 1$, then the expected value of the sum’s magnitude is close to $\sqrt{|x_1|^2 + \cdots + |x_n|^2}$.
Let $\{\varepsilon_i\}_{i=1}^n$ be i.i.d. random variables with $\mathbb{P}(\varepsilon_i = \pm 1) = \tfrac{1}{2}$ for $i = 1, \dots, n$, i.e., a sequence with Rademacher distribution. Let $0 < p < \infty$ and let $x_1, \dots, x_n \in \mathbb{C}$. Then
$$A_p \left(\sum_{i=1}^n |x_i|^2\right)^{1/2} \leq \left(\mathbb{E}\left|\sum_{i=1}^n \varepsilon_i x_i\right|^p\right)^{1/p} \leq B_p \left(\sum_{i=1}^n |x_i|^2\right)^{1/2}$$
for some constants $A_p, B_p > 0$ depending only on $p$. It is a simple matter to see that $A_p = 1$ when $p \geq 2$, and $B_p = 1$ when $0 < p \leq 2$.
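A quick numeric check of the $p = 1$, $B_1 = 1$ case, with coefficients drawn at random purely for illustration:

```python
# Check that E|sum_i eps_i x_i| <= (sum_i |x_i|^2)^{1/2} (Khintchine, p = 1,
# where B_1 = 1) for fixed complex coefficients and Rademacher signs.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # fixed coefficients
signs = rng.choice([-1, 1], size=(100_000, x.size))        # Rademacher signs

lhs = np.abs(signs @ x).mean()               # E |sum_i eps_i x_i|, estimated
rhs = np.sqrt(np.sum(np.abs(x) ** 2))        # (sum_i |x_i|^2)^{1/2}
print(f"E|sum| ~ {lhs:.4f} <= {rhs:.4f}")
```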
Empirical process theory
🏗
References
Adler. 2010. The Geometry of Random Fields.
Adler, and Taylor. 2007.
Random Fields and Geometry. Springer Monographs in Mathematics 115.
Aubrun, and Nechita. 2011.
“The Multiplicative Property Characterizes $\ell_p$ and $L_p$ Norms.” Confluentes Mathematici.
Barron, Birgé, and Massart. 1999.
“Risk Bounds for Model Selection via Penalization.” Probability Theory and Related Fields.
Bellec, Lecué, and Tsybakov. 2017.
“Towards the Study of Least Squares Estimators with Convex Penalty.” arXiv:1701.09120 [Math, Stat].
Berend, Harremoës, and Kontorovich. 2012.
“Minimum KL-Divergence on Complements of L₁ Balls.” arXiv:1206.6544 [Cs, Math].
Boucheron, Bousquet, and Lugosi. 2004.
“Concentration Inequalities.” In
Advanced Lectures on Machine Learning.
Boucheron, Lugosi, and Massart. 2013. Concentration Inequalities: A Nonasymptotic Theory of Independence.
Bousquet, Boucheron, and Lugosi. 2004.
“Introduction to Statistical Learning Theory.” In
Advanced Lectures on Machine Learning. Lecture Notes in Computer Science 3176.
Bühlmann, and van de Geer. 2011. Statistics for High-Dimensional Data: Methods, Theory and Applications.
Candès, and Recht. 2009.
“Exact Matrix Completion via Convex Optimization.” Foundations of Computational Mathematics.
Carrell, Gong, Shetty, et al. 2025.
“Low-Rank Thinning.”
Dasarathy, Shah, Bhaskar, et al. 2013.
“Sketching Sparse Matrices.” arXiv:1303.6544 [Cs, Math].
Dasgupta. 2000.
“Experiments with Random Projection.” In
Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence. UAI’00.
DasGupta. 2008.
Asymptotic Theory of Statistics and Probability. Springer Texts in Statistics.
Dasgupta, Hsu, and Verma. 2006.
“A Concentration Theorem for Projections.” In
Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence. UAI’06.
Dubhashi, and Panconesi. 2009. Concentration of Measure for the Analysis of Randomized Algorithms.
Gozlan, and Léonard. 2010.
“Transport Inequalities. A Survey.” arXiv:1003.3852 [Math].
Gross, Liu, Flammia, et al. 2010.
“Quantum State Tomography via Compressed Sensing.” Physical Review Letters.
Koltchinskii. 2011.
Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Lecture Notes in Mathematics École d’Été de Probabilités de Saint-Flour 2033.
Kuznetsov, and Mohri. 2014.
“Generalization Bounds for Time Series Prediction with Non-Stationary Processes.” In
Algorithmic Learning Theory. Lecture Notes in Computer Science.
Lahiri, Gao, and Ganguli. 2016.
“Random Projections of Random Manifolds.” arXiv:1607.04331 [Cs, q-Bio, Stat].
Liggett. 2010.
“Stochastic Models for Large Interacting Systems and Related Correlation Inequalities.” Proceedings of the National Academy of Sciences of the United States of America.
Massart. 2000.
“Some Applications of Concentration Inequalities to Statistics.” In
Annales de La Faculté Des Sciences de Toulouse: Mathématiques.
Raginsky, and Sason. 2012.
“Concentration of Measure Inequalities in Information Theory, Communications and Coding.” Foundations and Trends in Communications and Information Theory.
Reynaud-Bouret, and Roy. 2007. “Some Non Asymptotic Tail Estimates for Hawkes Processes.” Bulletin of the Belgian Mathematical Society - Simon Stevin.
Talagrand. 1995.
“Concentration of Measure and Isoperimetric Inequalities in Product Spaces.” Publications Mathématiques de l’Institut des Hautes Études Scientifiques.
———. 1996.
“A New Look at Independence.” The Annals of Probability.
———. 2002. “On Hoeffding’s Inequality for Dependent Random Variables.” In Empirical Process Techniques for Dependent Data.
van Handel. 2017.
“Structured Random Matrices.” In
Convexity and Concentration. The IMA Volumes in Mathematics and Its Applications.
———. 2015.
“Estimation in High Dimensions: A Geometric Perspective.” In
Sampling Theory, a Renaissance: Compressive Sensing and Other Developments. Applied and Numerical Harmonic Analysis.
Zhang, and Chen. 2020.
“Concentration Inequalities for Statistical Inference.” arXiv:2011.02258 [Cs, Math, Stat].