Adversarial learning

Statistics against Shayṭān



General adversarial learning: settings where the noise is not purely random but is chosen to be the worst possible noise for you.
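
One way to make "worst possible noise" concrete is as a zero-sum game: the learner commits to an estimate, the adversary perturbs each observation within a budget to maximise the learner's loss, and the learner should therefore minimise the adversary's best response. A minimal grid-search sketch of that minimax logic, for squared-error location estimation (the function names and the toy setup are mine, not from any particular paper):

```python
def worst_case_loss(theta, noise_budget, data):
    """Adversary's best response: perturb each observation by up to
    `noise_budget` in whichever direction hurts squared error most."""
    loss = 0.0
    for x in data:
        # For squared error, the worst perturbation pushes x away from theta.
        shifted = x + noise_budget if x >= theta else x - noise_budget
        loss += (shifted - theta) ** 2
    return loss

def minimax_estimate(data, noise_budget, grid):
    """Learner's move: pick the theta on `grid` that minimises the
    adversary's best-response loss, i.e. solve min_theta max_noise."""
    return min(grid, key=lambda t: worst_case_loss(t, noise_budget, data))
```

For symmetric data the minimax estimate coincides with the usual one, but the *guarantee* is different: it bounds the loss against the conniving adversary, not merely in expectation over an uncaring universe.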

The theme has recently been renewed in fame by the related method of generative adversarial networks.

🏗 Discuss the politics implied by treating learning as a battle with a conniving adversary rather than an uncaring universe; mention the obvious connection with the theist neoreactionary zeitgeist. I'm sure someone has done this well in a terribly eloquent blog post, but I haven't found one I'd want to link to yet.

Regardless of the politically suggestive structure, applying game theory in place of pure randomness is probably interesting in many areas, although I don't know most of them. Adversarial bandits are the obvious one in my world.
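
In the adversarial bandit setting the classic algorithm is EXP3 (exponential weights with importance-weighted reward estimates), which achieves sublinear regret even when rewards are chosen by an adversary (Bubeck and Cesa-Bianchi 2012). A bare-bones sketch, assuming rewards in [0, 1] and treating the exploration rate `gamma` as a free parameter rather than the theoretically tuned value:

```python
import math
import random

def exp3(n_arms, rewards, gamma=0.1):
    """Run EXP3 on a sequence of (possibly adversarially chosen) reward
    vectors. `rewards` is a list of length-`n_arms` lists with entries
    in [0, 1]. Returns the total reward collected."""
    weights = [1.0] * n_arms
    total = 0.0
    for round_rewards in rewards:
        w_sum = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration
        # so that every arm keeps probability at least gamma / n_arms.
        probs = [(1 - gamma) * w / w_sum + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = round_rewards[arm]
        total += reward
        # Importance-weighted estimate: dividing by the sampling probability
        # keeps the update unbiased although only one arm is observed.
        est = reward / probs[arm]
        weights[arm] *= math.exp(gamma * est / n_arms)
    return total
```

Against a fixed "adversary" that always rewards one arm, EXP3 quickly concentrates its play on that arm; the interesting guarantees are against reward sequences chosen with full knowledge of the algorithm (though not of its coin flips).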

Miscellany:

Tough love training

References

Abernethy, Jacob, Alekh Agarwal, Peter L Bartlett, and Alexander Rakhlin. 2009. “A Stochastic View of Optimal Regret Through Minimax Duality.” arXiv:0903.5328 [cs, Stat]. http://arxiv.org/abs/0903.5328.
Abernethy, Jacob, Peter L Bartlett, and Elad Hazan. 2011. “Blackwell Approachability and No-Regret Learning Are Equivalent.” In Proceedings of the 24th Annual Conference on Learning Theory (COLT).
Arjovsky, Martin, and Léon Bottou. 2017. “Towards Principled Methods for Training Generative Adversarial Networks.” arXiv:1701.04862 [stat], January. http://arxiv.org/abs/1701.04862.
Arjovsky, Martin, Soumith Chintala, and Léon Bottou. 2017. “Wasserstein Generative Adversarial Networks.” In International Conference on Machine Learning, 214–23. http://proceedings.mlr.press/v70/arjovsky17a.html.
Arora, Sanjeev, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. 2017. “Generalization and Equilibrium in Generative Adversarial Nets (GANs).” arXiv:1703.00573 [cs], March. http://arxiv.org/abs/1703.00573.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46. http://arxiv.org/abs/1703.03208.
Bubeck, Sébastien, and Nicolò Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Vol. 5. Boston: Now. https://doi.org/10.1561/2200000024.
Bubeck, Sébastien, and Aleksandrs Slivkins. 2012. “The Best of Both Worlds: Stochastic and Adversarial Bandits.” arXiv:1202.4473 [cs], February. http://arxiv.org/abs/1202.4473.
Buckner, Cameron. 2020. “Understanding Adversarial Examples Requires a Theory of Artefacts for Deep Learning.” Nature Machine Intelligence 2 (12): 731–36. https://doi.org/10.1038/s42256-020-00266-y.
Gebhart, Thomas, Paul Schrater, and Alan Hylton. 2019. “Characterizing the Shape of Activation Space in Deep Neural Networks.” arXiv:1901.09496 [cs, Stat], January. http://arxiv.org/abs/1901.09496.
Ghosh, Arnab, Viveka Kulharia, Vinay Namboodiri, Philip H. S. Torr, and Puneet K. Dokania. 2017. “Multi-Agent Diverse Generative Adversarial Networks.” arXiv:1704.02906 [cs, Stat], April. http://arxiv.org/abs/1704.02906.
Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. 2014. “Explaining and Harnessing Adversarial Examples.” arXiv:1412.6572 [cs, Stat], December. http://arxiv.org/abs/1412.6572.
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. NIPS’14. Cambridge, MA, USA: Curran Associates, Inc. http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
Grünwald, Peter D, and Joseph Y Halpern. 2007. “A Game-Theoretic Analysis of Updating Sets of Probabilities.” Eprint arXiv:07113235. http://arxiv.org/abs/0711.3235.
Guo, Xin, Johnny Hong, Tianyi Lin, and Nan Yang. 2017. “Relaxed Wasserstein with Applications to GANs.” arXiv:1705.07164 [cs, Stat], May. http://arxiv.org/abs/1705.07164.
Ilyas, Andrew, Logan Engstrom, Shibani Santurkar, Brandon Tran, Dimitris Tsipras, and Aleksander Mądry. 2019. “Adversarial Examples Are Not Bugs, They Are Features.” In Advances in Neural Information Processing Systems 32.
Jetchev, Nikolay, Urs Bergmann, and Roland Vollgraf. 2016. “Texture Synthesis with Spatial Generative Adversarial Networks.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1611.08207.
Khim, Justin, Varun Jog, and Po-Ling Loh. 2016. “Computationally Efficient Influence Maximization in Stochastic and Adversarial Models: Algorithms and Analysis.” arXiv:1611.00350 [cs, Stat], November. http://arxiv.org/abs/1611.00350.
Larsen, Anders Boesen Lindbo, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. 2015. “Autoencoding Beyond Pixels Using a Learned Similarity Metric.” arXiv:1512.09300 [cs, Stat], December. http://arxiv.org/abs/1512.09300.
Poole, Ben, Alexander A. Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. 2016. “Improved Generator Objectives for GANs.” In Advances in Neural Information Processing Systems 29. http://arxiv.org/abs/1612.02780.
Radford, Alec, Luke Metz, and Soumith Chintala. 2015. “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks.” In arXiv:1511.06434 [cs]. http://arxiv.org/abs/1511.06434.
Raghunathan, Arvind U., Anoop Cherian, and Devesh K. Jha. 2019. “Game Theoretic Optimization via Gradient-Based Nikaido-Isoda Function.” arXiv:1905.05927 [cs, Math, Stat], May. http://arxiv.org/abs/1905.05927.
Vervoort, Marco R. 1996. “Blackwell games.” In Statistics, probability and game theory: Papers in honor of David Blackwell, edited by T.S. Ferguson, L.S. Shapley, and J.B. MacQueen, 369–90. Institute of Mathematical Statistics. https://doi.org/10.1214/lnms/1215453583.
Zhang, Rui, and Quanyan Zhu. 2017. “Game-Theoretic Design of Secure and Resilient Distributed Support Vector Machines with Adversaries.” arXiv:1710.04677 [cs, Stat], October. http://arxiv.org/abs/1710.04677.
