General adversarial learning is learning where the noise is not purely random but chosen to be the *worst possible noise for you*.

The idea has recently been renewed in fame by the related method of generative adversarial networks.
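A toy sketch of what "worst possible noise" means, under assumptions of my own choosing (a hypothetical linear scorer and an L-infinity noise budget; this is the observation behind the fast gradient sign method for adversarial examples): random noise of a given size barely moves a linear score, while the adversary's noise extracts the maximum possible damage from the same budget.

```python
import random

random.seed(0)

# Hypothetical linear scorer f(x) = sum(w_i * x_i); higher score = higher loss.
w = [random.gauss(0, 1) for _ in range(5)]
x = [random.gauss(0, 1) for _ in range(5)]
eps = 0.1  # per-coordinate noise budget (L-infinity ball)

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

# Worst-case noise pushes every coordinate in the direction of w's sign,
# raising the score by exactly eps * ||w||_1 -- the most any noise with
# |delta_i| <= eps can achieve. Random noise of the same size does no better.
delta_worst = [eps * (1.0 if wi >= 0 else -1.0) for wi in w]
delta_rand = [eps * random.choice([-1.0, 1.0]) for _ in w]

gain_worst = score([xi + di for xi, di in zip(x, delta_worst)]) - score(x)
gain_rand = score([xi + di for xi, di in zip(x, delta_rand)]) - score(x)
assert gain_worst >= gain_rand
```

For nonlinear models the same move is applied to a local linearisation, i.e. the gradient of the loss, which is why small adversarial perturbations can be so much more damaging than random ones.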

🏗 Discuss the politics implied by treating learning as
a battle with a conniving adversary rather than an uncaring universe;
mention the obvious connection with the theist neoreactionary *zeitgeist*.
I’m sure someone has done this well in a terribly eloquent blog post,
but I haven’t found one I’d want to link to yet.

Regardless of its politically suggestive structure, applying [game theory]({{< relref "game theory.html" >}}) in place of pure randomness is probably interesting in many areas, although I don’t know most of them. Adversarial bandits are the obvious one in my world.
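In the adversarial bandit setting the reward sequence may be chosen by an adversary rather than drawn from fixed distributions, and the classic algorithm is Exp3: exponential weights over arms with importance-weighted reward estimates. A rough sketch (the parameter values and the toy adversary are my own, not from any particular paper):

```python
import math
import random

def exp3(n_arms, reward_fn, gamma=0.1, rounds=1000, seed=0):
    """Exp3 sketch: exponential weights with importance-weighted rewards.

    `reward_fn(t, arm)` may be chosen adversarially, but must return a
    reward in [0, 1].
    """
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    total = 0.0
    for t in range(rounds):
        s = sum(weights)
        # Mix the exponential-weights distribution with gamma of uniform
        # exploration so no arm's probability falls below gamma / n_arms.
        probs = [(1 - gamma) * wi / s + gamma / n_arms for wi in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        r = reward_fn(t, arm)
        total += r
        # Dividing by the pull probability keeps the estimate unbiased,
        # even though only the chosen arm's reward is observed.
        weights[arm] *= math.exp(gamma * (r / probs[arm]) / n_arms)
    return total, weights

# Toy adversary: arm 1 always pays 1.0, arm 0 always pays 0.2.
total, weights = exp3(2, lambda t, arm: 1.0 if arm == 1 else 0.2)
```

The forced exploration floor of γ/K per arm is what buys robustness: no matter how the adversary arranges the rewards, every arm keeps being sampled, which is how Exp3 achieves its sublinear worst-case regret guarantee.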
