Bandit problems, Markov decision processes, a smattering of dynamic programming, game theory, optimal control, and online learning of the solutions to such problems, esp. reinforcement learning.

Learning, where you must learn an optimal action in response to your stimulus, possibly an optimal “policy” of trying different actions over time, not just an MMSE-optimal prediction from complete data.

Comes in adversarial, Markov, and stochastic flavours, apparently, although I’ve hit the boundaries of my knowledge there.

## Pseudopolitical diversion

See clickbait bandits.

## Intros

Here’s an intro to all of machine learning through a historical tale about one particular attempt to teach a machine (not a computer!) to play tic-tac-toe: Rodney Brooks, Machine Learning Explained. Introductions recommended by Bubeck include (Slivkins 2019; Bubeck and Cesa-Bianchi 2012; Lattimore 2020).

### Theory

- Sébastien Bubeck’s intro (complements and updates (Bubeck and Cesa-Bianchi 2012))
- Sébastien Bubeck’s review of the 2010s
- Sergey Feldman, Bandits for Recommendation Systems is an EZ-introduction.
- Elad Hazan’s origin story for online convex optimisation has just made me realise how new it is. It complements…
- Elad Hazan and Satyan Kale’s tutorial on online convex optimisation looks at this through the online convex optimization lens, which I suppose is an austere and modern perspective; I would like a Venn diagram of which problems fit into this framing.
- Langford’s 2013 NIPS presentation, *learning to interact* (Langford 2013)

## Bandits-meet-optimisation

Bubeck again: Kernel-based methods for bandit convex optimization, part 1.

## Bandits-meet-evolution

Ben Recht and Roy Frostig, Nesterov’s punctuated equilibrium:

In a landmark new paper by Salimans, Ho, Chen, and Sutskever from OpenAI, (Salimans et al. 2017) the authors show that a particular class of genetic algorithms (called Evolutionary Strategies) gives excellent performance on a variety of reinforcement learning benchmarks. As optimizers, the application of genetic algorithms raises red flags and usually causes us to close browser Windows. But fear not! As we will explain, the particular algorithm deployed happens to be a core method in optimization, and the fact that this method is successful sheds light on the peculiarities of reinforcement learning more than it does about genetic algorithms in general.
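The “core method in optimization” that Recht and Frostig are pointing at is gradient ascent on a Gaussian-smoothed objective: perturb the parameters, evaluate returns, and correlate returns with perturbations. A minimal sketch, where my toy quadratic stands in for an episode return (the antithetic-sampling variant here is the one Salimans et al. use):

```python
import numpy as np

def es_gradient(f, theta, rng, sigma=0.1, n_samples=100):
    """Antithetic Monte Carlo estimate of the gradient of the
    Gaussian-smoothed objective E_eps[f(theta + sigma * eps)],
    using only function evaluations (no backprop)."""
    eps = rng.standard_normal((n_samples, theta.size))
    diffs = np.array([f(theta + sigma * e) - f(theta - sigma * e) for e in eps])
    return (diffs[:, None] * eps).mean(axis=0) / (2.0 * sigma)

# Toy objective standing in for an RL episode return; maximised at theta = 1.
f = lambda th: -np.sum((th - 1.0) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(300):
    theta = theta + 0.05 * es_gradient(f, theta, rng)
# theta drifts toward the optimum at [1, 1, 1]
```

The point of the exercise: nothing here is “genetic” in any interesting sense; it is a finite-difference-flavoured gradient method that tolerates noisy, non-differentiable objectives, which is exactly the RL setting.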

## Details

🏗

Conceptually, the base model is a one- or many-armed poker machine. You can pop coins in, and each time you do you may pull an arm; you might get rewarded. Each arm of the machine might have different returns; but the only way to find out is to play.

How do you choose optimally which arms to pull, and when? How much is it worth spending to find the arm with the best return on investment, given that collecting more data costs money?

This can be formalised by defining and minimising *regret* (plus a few other
terms of art), and what you get out is a formalised version of Skinnerian
learning that you can easily implement as an algorithm while feeling that it
has some satisfyingly optimal properties.
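Concretely: with $K$ arms of unknown mean reward $\mu_1, \dots, \mu_K$ and $\mu^* = \max_a \mu_a$, the regret after pulling arms $A_1, \dots, A_T$ is

$$R_T = T\mu^* - \mathbb{E}\left[\sum_{t=1}^{T} \mu_{A_t}\right],$$

and an algorithm counts as good if its regret grows sublinearly in $T$; in the stochastic setting, strategies like UCB manage $O(\log T)$ (Bubeck and Cesa-Bianchi 2012).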

The question of whether to stick with an arm you know to be good,
or to switch to another in the hope that it is better,
is the emblematic dilemma here,
and it’s called the *exploration/exploitation trade-off*.
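As a concrete toy, here is an $\varepsilon$-greedy player, which resolves the trade-off crudely: exploit the best arm seen so far, but explore a uniformly random arm some fraction of the time. The arm means and the 10% exploration rate are illustrative choices of mine, not canonical values.

```python
import numpy as np

def epsilon_greedy(true_means, epsilon=0.1, horizon=5000, rng=None):
    """Play a stochastic Gaussian bandit with an epsilon-greedy policy,
    tracking a running-mean reward estimate per arm."""
    rng = rng or np.random.default_rng(0)
    k = len(true_means)
    counts = np.zeros(k)
    estimates = np.zeros(k)
    reward_total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon or counts.min() == 0:
            arm = int(rng.integers(k))       # explore (or force an untried arm)
        else:
            arm = int(np.argmax(estimates))  # exploit the best-looking arm
        reward = rng.normal(true_means[arm], 1.0)  # noisy payout
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        reward_total += reward
    return estimates, reward_total

estimates, total = epsilon_greedy([0.1, 0.5, 0.9])
```

With enough pulls the estimates single out the best arm; the regret of this naive strategy grows linearly in the horizon (it never stops exploring), which is exactly what the cleverer algorithms in the references improve on.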

## Multi-world testing

A setting from Microsoft: Multi-World Testing (MWT) appears to be an online learning framework that stretches its data further by re-using exploration data for offline testing:

… is a toolbox of machine learning technology for principled and efficient experimentation, plausibly applicable to most Microsoft services that interact with customers. In many scenarios, this technology is exponentially more efficient than the traditional A/B testing. The underlying research area, mature and yet very active, is known under many names: “multi-armed bandits”, “contextual bandits”, “associative reinforcement learning”, and “counterfactual evaluation”, among others.

To take an example, suppose one wants to optimize clicks on suggested news stories. To discover what works, one needs to explore over the possible news stories. Further, if the suggested news story can be chosen depending on the visitor’s profile, then one needs to explore over the possible “policies” that map profiles to news stories (and there are exponentially more “policies” than news stories!). Traditional ML fails at this because it does not explore. Whereas MWT allows you to explore continuously, and optimize your decisions using this exploration data.

John Langford gives a better introduction here (cf. Alekh Agarwal et al. 2015):

Removing the credit assignment problem from reinforcement learning yields the Contextual Bandit setting which we know is generically solvable in the same manner as common supervised learning problems. I know of about a half-dozen real-world successful contextual bandit applications typically requiring the cooperation of engineers and deeply knowledgeable data scientists.

Can we make this dramatically easier? We need a system that explores over appropriate choices with logging of features, actions, probabilities of actions, and outcomes. These must then be fed into an appropriate learning algorithm which trains a policy and then deploys the policy at the point of decision. Naturally, this is what we’ve done and now it can be used by anyone. This drops the barrier to use down to: “Do you have permissions? And do you have a reasonable idea of what a good feature is?”

A key foundational idea is Multiworld Testing: the capability to evaluate large numbers of policies mapping features to action in a manner exponentially more efficient than standard A/B testing. This is used pervasively in the Contextual Bandit literature and you can see it in action for the system we’ve made at Microsoft Research.
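The “exponentially more efficient” claim rests on counterfactual evaluation: if the logger records the *probability* of each action it took, a single exploration log can score any number of candidate policies offline via inverse propensity scoring. A simulated sketch, where the visitor-segment/news-story setup is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log from a uniform-random exploration policy over 3 news
# stories; each record holds (context, action, logging probability, reward).
n, n_actions = 10_000, 3
contexts = rng.integers(2, size=n)             # visitor segment: 0 or 1
actions = rng.integers(n_actions, size=n)      # logged exploration choices
propensities = np.full(n, 1.0 / n_actions)     # P(logged action | context)
rewards = (actions == contexts).astype(float)  # simulated click iff story matches

def ips_value(policy, contexts, actions, propensities, rewards):
    """Unbiased inverse-propensity-scored estimate of a policy's mean
    reward, computed purely from someone else's logged exploration data."""
    match = policy(contexts) == actions
    return float(np.mean(match * rewards / propensities))

# The same log scores arbitrarily many candidate policies, no new A/B tests:
v_good = ips_value(lambda c: c, contexts, actions, propensities, rewards)
v_bad = ips_value(lambda c: np.full_like(c, 2),
                  contexts, actions, propensities, rewards)
```

Here `v_good` comes out near the matching policy’s true click rate of 1.0 and `v_bad` near 0, even though neither policy was ever deployed; that is the “many worlds from one log” trick.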

## Reinforcement learning

## Markov decision problems

See POMDP.

## Connection to graphical models

See Levine (2018).

## Practicalities

Vowpal Wabbit does contextual bandit learning:

VW contains a contextual bandit module which allows you to optimize a predictor based on already collected contextual bandit data. In other words, the module does not handle the exploration issue, it assumes it can only use the currently available data previously collected from some “exploration” policy.
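For concreteness, VW’s contextual-bandit input format records exactly the tuple the quote implies: chosen action, observed cost, and the logging probability, followed by the features. The feature names below are made up:

```
1:2:0.4 | segment=a hour=morning
3:0.5:0.2 | segment=b hour=evening
2:1:0.5 | segment=a hour=evening
```

Each line is `chosen_action:cost:logging_probability | features`, and training from such a log looks something like `vw --cb 4 train.dat -f policy.model`, where 4 is the number of possible actions; check the VW wiki for current flag spellings.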

## Sequential surrogate interactive model optimisation

Not what I first think of in reference to bandit problems, but it and many other
hyperparameter-optimisation-type problems have an RL interpretation, apparently.
That is, you can use RL to learn the hyperparameters of your deep learning model.
(This is not the same as deep learning *of* RL policies.)
See Sequential surrogate model optimisation.

## Incoming

- Algorithms for Decision Making: Decision making, in the sense of reinforcement learning

## References

*Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’17*, 687–96. Halifax, NS, Canada: ACM Press.

*PMLR*, 126–35.

*Information and Control* 4 (4): 346–49.

*arXiv:2006.05604 [Cs, Math, Stat]*, June.

*Journal of Machine Learning Research* 14: 3207–60.

*arXiv:1606.01540 [Cs]*, June.

*Regret analysis of stochastic and nonstochastic multi-armed bandit problems*. Vol. 5. Boston: Now.

*Theoretical Computer Science*412: 1832–52.

*arXiv:1202.4473 [Cs]*, February.

*Prediction, Learning, and Games*. Cambridge ; New York: Cambridge University Press.

*Annual Review of Statistics and Its Application* 7 (1): 279–301.

*Encyclopedia of Cognitive Science*.

*Philosophical Transactions of the Royal Society B: Biological Sciences* 375 (1803): 20190502.

*Physical Review Letters* 112 (5): 050602.

*Advances in Neural Information Processing Systems*, 345–52.

*arXiv:1502.07943 [Cs, Stat]*, February.

*Journal of Artificial Intelligence Research* 4 (April).

*Commun. ACM* 59 (8): 12–14.

*arXiv:1602.02722 [Cs, Stat]*, February.

*Bandit Algorithms*.

*arXiv:1805.00909 [Cs, Stat]*, May.

*Proceedings of the 24th International World Wide Web Conference (WWW’14), Companion Volume*. ACM – Association for Computing Machinery.

*Proceedings of the Fourth International Conference on Web Search and Web Data Mining (WSDM-11)*, 297–306.

*arXiv:1603.06560 [Cs, Stat]*, March.

*The Journal of Machine Learning Research* 18 (1): 6765–6816.

*arXiv:1803.07055 [Cs, Math, Stat]*, March.

*arXiv:1702.08360 [Cs]*, February.

*arXiv:1610.01945 [Cs, Stat]*, October.

*Naval Research Logistics (NRL)* 56 (3): 239–49.

*Journal of Artificial Intelligence Research* 23 (1): 1–40.

*arXiv:1703.03864 [Cs, Stat]*, March.

*Algorithmic Learning Theory*, edited by José L. Balcázar, Philip M. Long, and Frank Stephan, 348–62. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg.

*Handbook of Learning and Approximate Dynamic Programming*. Vol. 2. John Wiley & Sons.

*Artificial Intelligence*299 (October): 103535.

*Foundations and Trends® in Machine Learning*.

*arXiv:1509.00130 [Cs, Math, Stat]*, August.

*Advances in Neural Information Processing Systems 23 (NIPS-10)*, 2217–25.

*arXiv:1811.02672 [Cs, Stat]*, November.

*Reinforcement Learning*. Cambridge, Mass.: MIT Press.

*Advances in Neural Information Processing Systems*, 1057–63.

*Proceedings of the 24th International Conference on World Wide Web - WWW ’15 Companion*, 939–41. Florence, Italy: ACM Press.
