Adversarial learning

Statistics against Shayṭān

October 7, 2016 — February 21, 2022

game theory

Adversarial learning is learning where the noise is not purely random, but chosen to be the worst possible noise for you (subject to some rules of the game). This is in contrast to classic machine learning and statistics, where the noise is purely random; Tyche is not “out to get you”.
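A minimal sketch of the difference, using a toy logistic-regression “victim” model (everything here — the weights, the `eps` budget — is made up for illustration): random noise of a given size barely moves the loss, whereas an adversary who spends the same budget along the loss gradient (the fast-gradient-sign idea) hurts maximally.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)   # victim model weights (fixed, known to the adversary)
x = rng.normal(size=20)   # a clean input
y = 1.0                   # its true label

def loss(x):
    """Logistic loss of the victim model on (x, y)."""
    p = 1 / (1 + np.exp(-w @ x))
    return -np.log(p) if y == 1.0 else -np.log(1 - p)

eps = 0.1  # per-coordinate perturbation budget, same for both players

# Tyche: purely random noise of the allowed size
x_random = x + eps * np.sign(rng.normal(size=20))

# Shaytan: worst-case noise of the same size, following the loss gradient
grad = -(y - 1 / (1 + np.exp(-w @ x))) * w   # d loss / d x
x_adv = x + eps * np.sign(grad)

print(loss(x), loss(x_random), loss(x_adv))
# the adversarial perturbation increases the loss at least as much as
# any random perturbation of the same per-coordinate size
```

For this convex toy model the adversarial perturbation is provably at least as damaging as any random one of the same ℓ∞ size; in deep networks the same first-order trick is merely very effective rather than provably optimal.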

The idea is much older than the related (?) method of generative adversarial networks, which has recently renewed its fame.
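The adversarial structure is explicit in the original GAN objective, a two-player minimax game between a generator $G$ and a discriminator $D$: each player’s “noise” is the other player’s best response.

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

At the saddle point, the generator’s samples are indistinguishable from data as far as the best possible discriminator is concerned.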

The associated concept in normal human experience is Goodhart’s law, which tells us that “people game the targets you set for them.”


🏗 Discuss the politics implied by treating learning as a battle with a conniving adversary, as opposed to an uncaringly random universe. I’m sure someone has done this well in a terribly eloquent blog post, but I haven’t found one I’d want to link to yet.

The toolset of adversarial techniques is broad. Game theory is an important one, but so are computational complexity theory (how hard is it to find adversarial inputs, or to learn despite them?) and lots of functional analysis and optimisation theory. Surely there is much other stuff I do not know, because this is not really my field.

Applications are broad too: improving ML robustness, but also infosec, risk management, and so on.

1 Incoming

Adversarial attacks can be terrorism or freedom-fighting, depending on the pitch, natch: From data strikes to data poisoning, how consumers can take back control from corporations.

Figure 3: Tough love training
