Sequential experiments
Especially multiple sequential experiments
August 4, 2021
functional analysis
how do science
model selection
optimization
statmech
surrogate
I am running many experiments and want to waste as little time as possible on the ones that will give me negative results, while attaining as much certainty as possible about the positive ones.
1 One experiment
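For a single experiment, the classic tool for stopping early is a sequential test that monitors the data as it arrives and terminates as soon as a decision is warranted. Here is a minimal sketch of Wald's sequential probability ratio test (SPRT) for Bernoulli outcomes; the hypotheses, error rates, and simulated success probability are all illustrative choices, not recommendations:

```python
import math
import random

def sprt(observations, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.

    Returns (decision, number of observations consumed), where decision is
    "accept H0", "accept H1", or "continue" if the data ran out first.
    """
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood-ratio increment for one Bernoulli observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= lower:
            return "accept H0", n
        if llr >= upper:
            return "accept H1", n
    return "continue", len(observations)

random.seed(1)
data = [random.random() < 0.65 for _ in range(200)]  # true p = 0.65 (made up)
decision, n_used = sprt(data)
print(decision, n_used)
```

The point is that when the true effect is far from the null, the test typically stops well before all 200 samples are spent, which is exactly the time-saving behaviour asked for above.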
2 Many experiments
Alex Birkett’s article When to Run Bandit Tests Instead of A/B/n Tests does a decent job of explaining this in bandit-problem terms, via an example we all know: selling stuff on the internet.
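The bandit framing can be sketched in a few lines. Below is a toy Thompson-sampling loop for Bernoulli arms (think click-through rates), maintaining a Beta posterior per arm and pulling the arm whose posterior sample is largest; the arm rates and horizon are made up for illustration:

```python
import random

def thompson_bernoulli(true_rates, n_rounds=2000, seed=0):
    """Thompson sampling: sample each arm's Beta posterior, pull the argmax."""
    rng = random.Random(seed)
    k = len(true_rates)
    wins = [0] * k     # observed successes per arm
    losses = [0] * k   # observed failures per arm
    pulls = [0] * k
    for _ in range(n_rounds):
        # one plausible rate per arm, drawn from its Beta(1+wins, 1+losses) posterior
        samples = [rng.betavariate(1 + wins[i], 1 + losses[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = rng.random() < true_rates[arm]
        pulls[arm] += 1
        if reward:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return pulls

pulls = thompson_bernoulli([0.2, 0.25, 0.5])
print(pulls)  # traffic should concentrate on the best arm (index 2)
```

Unlike a fixed A/B/n split, the allocation adapts: weak arms get starved of traffic automatically, which is the "waste less time on losers" property, at the cost of less certainty about exactly how bad the losers were.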
3 But I want to find the best answer, not just settle for low certainty
This is adaptive design of experiments, a.k.a. Bayesian optimization.
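A toy sketch of the idea, assuming a 1D objective, a zero-mean Gaussian-process surrogate with an RBF kernel, and an upper-confidence-bound rule to choose the next experiment. Every detail here (kernel, length scale, UCB coefficient, grid, objective) is an illustrative choice, not a canonical algorithm:

```python
import math

def rbf(a, b, ls=0.3):
    """RBF kernel with an (arbitrary) length scale of 0.3."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xstar, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at one test point."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kstar = [rbf(x, xstar) for x in xs]
    alpha = solve(K, ys)    # K^{-1} y
    v = solve(K, kstar)     # K^{-1} k*
    mean = sum(kstar[i] * alpha[i] for i in range(n))
    var = rbf(xstar, xstar) - sum(kstar[i] * v[i] for i in range(n))
    return mean, max(var, 0.0)

def objective(x):  # the "experiment"; a made-up function peaking at x = 0.7
    return -(x - 0.7) ** 2

grid = [i / 100 for i in range(101)]
xs, ys = [0.1, 0.9], [objective(0.1), objective(0.9)]  # two seed experiments
for _ in range(10):
    # next experiment = grid point with the best optimistic value (UCB)
    def ucb(x):
        m, v = gp_posterior(xs, ys, x)
        return m + 2.0 * math.sqrt(v)
    xnext = max(grid, key=ucb)
    xs.append(xnext)
    ys.append(objective(xnext))

best = xs[max(range(len(ys)), key=lambda i: ys[i])]
print(round(best, 2))  # the best evaluated point should land near 0.7
```

The surrogate's posterior variance is what makes this "adaptive": early rounds probe where uncertainty is high, later rounds exploit where the predicted mean is high, so the experiment budget concentrates near the best answer rather than being spread uniformly.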
4 References
Allen-Zhu, Li, Singh, et al. 2017. “Near-Optimal Design of Experiments via Regret Minimization.” In PMLR.
Chernoff. 1959. “Sequential Design of Experiments.” The Annals of Mathematical Statistics.
Even-Dar, Mannor, and Mansour. n.d. “Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems.”
Jamieson, and Jain. n.d. “A Bandit Approach to Multiple Testing with False Discovery Control.”
Kuleshov, and Precup. 2000. “Algorithms for the Multi-Armed Bandit Problem.” Journal of Machine Learning Research.
Lakens. 2017. “Performing High-Powered Studies Efficiently With Sequential Analyses.”
Loecher. 2021. “The Perils of Misspecified Priors and Optional Stopping in Multi-Armed Bandits.” Frontiers in Artificial Intelligence.
Press. 2009. “Bandit Solutions Provide Unified Ethical Models for Randomized Clinical Trials and Comparative Effectiveness Research.” Proceedings of the National Academy of Sciences.
Villar, Bowden, and Wason. 2015. “Multi-Armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges.” Statistical Science.