Hyperparameter optimization

Replacing a hyperparameter problem with a hyper-hyperparameter problem, which feels like progress



Grad student and devops engineer meet at a local optimum.

This article was originally split off from autoML, although neither topic is a strict subset of the other.

The art of choosing the best hyperparameters for an ML model’s algorithms, of which there may be many.

Should one bother getting fancy about this? Ben Recht argues that random search is often competitive with highly tuned Bayesian methods for hyperparameter tuning. Kevin Jamieson argues you can be cleverer than that, though. Let’s inhale some hype.
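For reference, vanilla random search is barely more code than a for loop. A toy sketch (the objective and parameter names are stand-ins for a real train-and-validate run):

```python
import random

def validation_loss(lr, weight_decay):
    # Stand-in for training a model and scoring it on held-out data.
    return (lr - 0.01) ** 2 + (weight_decay - 1e-4) ** 2

best = None
for _ in range(100):
    trial = {
        "lr": 10 ** random.uniform(-5, -1),            # log-uniform over [1e-5, 1e-1]
        "weight_decay": 10 ** random.uniform(-6, -2),  # log-uniform over [1e-6, 1e-2]
    }
    score = validation_loss(**trial)
    if best is None or score < best[0]:
        best = (score, trial)

print(best)
```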

Tracking and choosing hyperparameters

In practice, hyperparameter tuning is integrated with both the problem of configuring ML and that of tracking progress; see also those pages for practical implementation notes.

Bayesian/surrogate optimisation

Loosely, we think of interpolating between observations of a loss surface and guessing where the optimal point is. See Bayesian optimisation. This approach is generic. It is not as popular in practice as I might have assumed, because it turns out to be fairly greedy with data and does not exploit problem-specific ideas such as early stopping, which saves time and is in any case a useful type of neural net regularisation.
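A cartoon version of that loop, using a Gaussian-process surrogate over a single log-scaled learning rate and a lower-confidence-bound acquisition rule (a toy sketch, not any particular library’s API):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def loss(lr):
    # Stand-in for an expensive training run; pretend the optimum is at lr = 1e-2.
    return (np.log10(lr) + 2.0) ** 2

candidates = np.logspace(-5, 0, 200).reshape(-1, 1)  # candidate learning rates
X = np.array([[1e-4], [1e-1]])                       # initial observations
y = np.array([loss(x[0]) for x in X])

for _ in range(10):
    gp = GaussianProcessRegressor().fit(np.log10(X), y)
    mu, sigma = gp.predict(np.log10(candidates), return_std=True)
    # Lower confidence bound: trade predicted loss off against uncertainty.
    next_x = candidates[np.argmin(mu - sigma)]
    X = np.vstack([X, next_x])
    y = np.append(y, loss(next_x[0]))

print("best lr so far:", X[np.argmin(y)][0])
```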

Multiple hyperparameters require multi-objective optimisation

This leads to difficulty. See multi-objective optimisation.

Differentiable hyperparameter optimisation

See differentiable model selection.

Implementations

A synoptic overview of the trendiest strategies can be found in Peter Cotton’s microprediction/humpday: Elo ratings for global black box derivative-free optimizers:

Behold! Fifty strategies assigned Elo ratings depending on dimension of the problem and number of function evaluations allowed.

Hello and welcome to HumpDay, a package that helps you choose a Python global optimizer package, and strategy therein, from Ax-Platform, bayesian-optimization, DLib, HyperOpt, NeverGrad, Optuna, Platypus, PyMoo, PySOT, Scipy classic and shgo, Skopt, nlopt, Py-Bobyqa, UltraOpt and maybe others by the time you read this. It also presents some of their functionality in a common calling syntax.

The introductory blog posts are enlightening.

Most of the implementations use, explicitly or implicitly, a surrogate model for parameter tuning, but wrap it in tooling to control and launch experiments in parallel, handle early termination, and so on.

Arranged so that the top few are hyped and popular, and after that come less renowned hipster options.

Not yet filed:

Determined

determined includes hyperparameter tuning, which does not in fact fit a surrogate surface but rather prunes crappy models early during a random search, i.e. fancy random search.
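The rough shape of that strategy, sketched in plain Python rather than Determined’s actual API (successive-halving-style pruning of randomly sampled configurations; the training function is a stand-in):

```python
import random

def partial_train(config, budget):
    # Stand-in for training `config` for `budget` epochs and returning a validation loss.
    return (config["lr"] - 0.01) ** 2 / budget + random.random() * 0.01

configs = [{"lr": 10 ** random.uniform(-5, -1)} for _ in range(32)]
budget = 1
while len(configs) > 1:
    ranked = sorted(configs, key=lambda c: partial_train(c, budget))
    configs = ranked[: len(ranked) // 2]  # stop the worse half early
    budget *= 2                           # give survivors more training budget

print("survivor:", configs[0])
```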

Ray

Ray includes Ray.Tune

Tune is a Python library for experiment execution and hyperparameter tuning at any scale.
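A minimal Tune run looks something like this (a sketch using the classic tune.run/tune.report interface, which newer Ray releases have been reshuffling; the trainable and its toy loss are stand-ins):

```python
from ray import tune

def trainable(config):
    # Stand-in for a real training loop reporting a validation loss.
    loss = (config["lr"] - 0.01) ** 2 + 0.1 * config["momentum"]
    tune.report(loss=loss)

analysis = tune.run(
    trainable,
    config={
        "lr": tune.loguniform(1e-4, 1e-1),
        "momentum": tune.uniform(0.0, 0.9),
    },
    num_samples=20,
)
print(analysis.get_best_config(metric="loss", mode="min"))
```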

Optuna

optuna (Akiba et al. 2019) supports fancy neural net training; it is similar to hyperopt AFAICT, except that it supports Covariance Matrix Adaptation, whatever that is? (See Hansen (2016).)

Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
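In practice the define-by-run style looks like this (a sketch; the objective is a stand-in for a real train-and-validate run):

```python
import optuna

def objective(trial):
    # The search space is declared inline, per trial (define-by-run).
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_layers = trial.suggest_int("n_layers", 1, 4)
    # Stand-in for a real validation loss.
    return (lr - 0.01) ** 2 + 0.1 * n_layers

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```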

hyperopt.py

hyperopt (J. Bergstra, Yamins, and Cox 2013)

is a Python library for optimizing over awkward search spaces with real-valued, discrete, and conditional dimensions.

Currently two algorithms are implemented in hyperopt:

  • Random Search
  • Tree of Parzen Estimators (TPE)

Hyperopt has been designed to accommodate Bayesian optimization algorithms based on Gaussian processes and regression trees, but these are not currently implemented.

All algorithms can be run either serially, or in parallel by communicating via MongoDB or Apache Spark.
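Basic usage looks roughly like this (a sketch with a toy objective; note that hp.loguniform takes its bounds in log space):

```python
from hyperopt import fmin, tpe, hp, Trials

def objective(params):
    # Stand-in for a real validation loss.
    return (params["lr"] - 0.01) ** 2 + 0.1 * params["dropout"]

space = {
    "lr": hp.loguniform("lr", -10, 0),          # i.e. lr between exp(-10) and exp(0)
    "dropout": hp.uniform("dropout", 0.0, 0.5),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100, trials=trials)
print(best)
```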

Hyperopt.jl

auto-sklearn

auto-sklearn has recently been upgraded. Details TBD (Feurer et al. 2020).

skopt

skopt (aka scikit-optimize)

[…] is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization.
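Typical usage is a single call to a minimiser over declared dimensions (a sketch with a toy objective standing in for an expensive training run):

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

def objective(params):
    lr, n_layers = params
    # Stand-in for a real train-and-validate run.
    return (lr - 0.01) ** 2 + 0.1 * n_layers

result = gp_minimize(
    objective,
    dimensions=[
        Real(1e-5, 1e-1, prior="log-uniform", name="lr"),
        Integer(1, 4, name="n_layers"),
    ],
    n_calls=30,
    random_state=0,
)
print(result.x, result.fun)
```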

spearmint

spearmint/spearmint2:

Spearmint is a package to perform Bayesian optimization according to the algorithms outlined in the paper (Snoek, Larochelle, and Adams 2012).

The code consists of several parts. It is designed to be modular to allow swapping out various ‘driver’ and ‘chooser’ modules. The ‘chooser’ modules are implementations of acquisition functions such as expected improvement, UCB or random. The drivers determine how experiments are distributed and run on the system. As the code is designed to run experiments in parallel (spawning a new experiment as soon as a result comes in), this requires some engineering.

Spearmint2 is similar, but more recently updated and fancier; however, it has a restrictive license prohibiting wide redistribution without the payment of fees. You may or may not wish to trust the implied level of development and support of 4 Harvard professors, depending on your application.

Both of the Spearmint options (especially the latter) make opinionated choices of technology stack for their optimizations, which means they can do more work for you than a simple little thing like skopt, but also require more setup. Depending on your computing environment this might be an overall plus or a minus.

SMAC

SMAC (AGPLv3)

(sequential model-based algorithm configuration) is a versatile tool for optimizing algorithm parameters (or the parameters of some other process we can run automatically, or a function we can evaluate, such as a simulation).

SMAC has helped us speed up both local search and tree search algorithms by orders of magnitude on certain instance distributions. Recently, we have also found it to be very effective for the hyperparameter optimization of machine learning algorithms, scaling better to high dimensions and discrete input dimensions than other algorithms. Finally, the predictive models SMAC is based on can also capture and exploit important information about the model domain, such as which input variables are most important.

We hope you find SMAC similarly useful. Ultimately, we hope that it helps algorithm designers focus on tasks that are more scientifically valuable than parameter tuning.

Python interface through pysmac.

AutoML

automl

Won the land-grab for the name automl but is now unmaintained.

A quick overview of buzzwords, this project automates:

  • Analytics (pass in data, and auto_ml will tell you the relationship of each variable to what it is you’re trying to predict).
  • Feature Engineering (particularly around dates, and soon, NLP).
  • Robust Scaling (turning all values into their scaled versions between the range of 0 and 1, in a way that is robust to outliers, and works with sparse matrices).
  • Feature Selection (picking only the features that actually prove useful).
  • Data formatting (turning a list of dictionaries into a sparse matrix, one-hot encoding categorical variables, taking the natural log of y for regression problems).
  • Model Selection (which model works best for your problem).
  • Hyperparameter Optimization (what hyperparameters work best for that model).
  • Ensembling Subpredictors (automatically training up models to predict smaller problems within the meta problem).
  • Ensembling Weak Estimators (automatically training up weak models on the larger problem itself, to inform the meta-estimator’s decision).

References

Abdel-Gawad, Ahmed, and Simon Ratner. 2007. “Adaptive Optimization of Hyperparameters in L2-Regularised Logistic Regression.”
Akiba, Takuya, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. “Optuna: A Next-Generation Hyperparameter Optimization Framework.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Bengio, Yoshua. 2000. “Gradient-Based Optimization of Hyperparameters.” Neural Computation 12 (8): 1889–1900.
Bergstra, James S., Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. “Algorithms for Hyper-Parameter Optimization.” In Advances in Neural Information Processing Systems, 2546–54. Curran Associates, Inc.
Bergstra, James, and Yoshua Bengio. 2012. “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research 13: 281–305.
Bergstra, J, D Yamins, and D D Cox. 2013. “Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures.” In ICML, 9.
Domke, Justin. 2012. “Generic Methods for Optimization-Based Modeling.” In International Conference on Artificial Intelligence and Statistics, 318–26.
Eggensperger, Katharina, Matthias Feurer, Frank Hutter, James Bergstra, Jasper Snoek, Holger H. Hoos, and Kevin Leyton-Brown. n.d. “Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters.”
Eigenmann, R., and J. A. Nossek. 1999. “Gradient Based Adaptive Regularization.” In Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468), 87–94.
Feurer, Matthias, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. 2020. “Auto-Sklearn 2.0: The Next Generation.” arXiv:2007.04074 [Cs, Stat], July.
Feurer, Matthias, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. 2015. “Efficient and Robust Automated Machine Learning.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2962–70. Curran Associates, Inc.
Foo, Chuan-sheng, Chuong B. Do, and Andrew Y. Ng. 2008. “Efficient Multiple Hyperparameter Learning for Log-Linear Models.” In Advances in Neural Information Processing Systems 20, edited by J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, 377–84. Curran Associates, Inc.
Franceschi, Luca, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. 2017a. “On Hyperparameter Optimization in Learning Systems.” In.
———. 2017b. “Forward and Reverse Gradient-Based Hyperparameter Optimization.” In International Conference on Machine Learning, 1165–73. PMLR.
Fu, Jie, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, and Tat-Seng Chua. 2016. “DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks.” In Proceedings of IJCAI 2016.
Gelbart, Michael A., Jasper Snoek, and Ryan P. Adams. 2014. “Bayesian Optimization with Unknown Constraints.” In Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence, 250–59. UAI’14. Arlington, Virginia, United States: AUAI Press.
Grünewälder, Steffen, Jean-Yves Audibert, Manfred Opper, and John Shawe-Taylor. 2010. “Regret Bounds for Gaussian Process Bandit Problems.” In, 9:273–80.
Hansen, Nikolaus. 2016. “The CMA Evolution Strategy: A Tutorial.” arXiv:1604.00772 [Cs, Stat], April.
Hutter, Frank, Holger H. Hoos, and Kevin Leyton-Brown. 2011. “Sequential Model-Based Optimization for General Algorithm Configuration.” In Learning and Intelligent Optimization, 6683:507–23. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer, Berlin, Heidelberg.
Hutter, Frank, Holger Hoos, and Kevin Leyton-Brown. 2013. “An Evaluation of Sequential Model-Based Optimization for Expensive Blackbox Functions.” In Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation, 1209–16. GECCO ’13 Companion. New York, NY, USA: ACM.
Jamieson, Kevin, and Ameet Talwalkar. 2015. “Non-Stochastic Best Arm Identification and Hyperparameter Optimization.” arXiv:1502.07943 [Cs, Stat], February.
Kandasamy, Kirthevasan, Akshay Krishnamurthy, Jeff Schneider, and Barnabas Poczos. 2018. “Parallelised Bayesian Optimisation via Thompson Sampling.” In International Conference on Artificial Intelligence and Statistics, 133–42. PMLR.
Li, Liam, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. 2020. “A System for Massively Parallel Hyperparameter Tuning.” arXiv:1810.05934 [Cs, Stat], March.
Li, Lisha, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. 2016. “Efficient Hyperparameter Optimization and Infinitely Many Armed Bandits.” arXiv:1603.06560 [Cs, Stat], March.
———. 2017. “Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization.” The Journal of Machine Learning Research 18 (1): 6765–6816.
Liu, Hanxiao, Karen Simonyan, and Yiming Yang. 2019. “DARTS: Differentiable Architecture Search.” arXiv:1806.09055 [Cs, Stat], April.
Lorraine, Jonathan, and David Duvenaud. 2018. “Stochastic Hyperparameter Optimization Through Hypernetworks.” arXiv:1802.09419 [Cs], February.
Lorraine, Jonathan, Paul Vicol, and David Duvenaud. 2020. “Optimizing Millions of Hyperparameters by Implicit Differentiation.” In International Conference on Artificial Intelligence and Statistics, 1540–52. PMLR.
MacKay, David JC. 1999. “Comparison of Approximate Methods for Handling Hyperparameters.” Neural Computation 11 (5): 1035–68.
Maclaurin, Dougal, David Duvenaud, and Ryan Adams. 2015. “Gradient-Based Hyperparameter Optimization Through Reversible Learning.” In Proceedings of the 32nd International Conference on Machine Learning, 2113–22. PMLR.
Močkus, J. 1975. “On Bayesian Methods for Seeking the Extremum.” In Optimization Techniques IFIP Technical Conference: Novosibirsk, July 1–7, 1974, edited by G. I. Marchuk, 400–404. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer.
O’Hagan, A. 1978. “Curve Fitting and Optimal Design for Prediction.” Journal of the Royal Statistical Society: Series B (Methodological) 40 (1): 1–24.
Platt, John C., and Alan H. Barr. 1987. “Constrained Differential Optimization.” In Proceedings of the 1987 International Conference on Neural Information Processing Systems, 612–21. NIPS’87. Cambridge, MA, USA: MIT Press.
Real, Esteban, Chen Liang, David R. So, and Quoc V. Le. 2020. “AutoML-Zero: Evolving Machine Learning Algorithms From Scratch,” March.
Salimans, Tim, Diederik Kingma, and Max Welling. 2015. “Markov Chain Monte Carlo and Variational Inference: Bridging the Gap.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 1218–26. ICML’15. Lille, France: JMLR.org.
Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. 2012. “Practical Bayesian Optimization of Machine Learning Algorithms.” In Advances in Neural Information Processing Systems, 2951–59. Curran Associates, Inc.
Snoek, Jasper, Kevin Swersky, Rich Zemel, and Ryan Adams. 2014. “Input Warping for Bayesian Optimization of Non-Stationary Functions.” In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 1674–82.
Srinivas, Niranjan, Andreas Krause, Sham M. Kakade, and Matthias Seeger. 2012. “Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design.” IEEE Transactions on Information Theory 58 (5): 3250–65.
Swersky, Kevin, Jasper Snoek, and Ryan P Adams. 2013. “Multi-Task Bayesian Optimization.” In Advances in Neural Information Processing Systems 26, edited by C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, 2004–12. Curran Associates, Inc.
Thornton, Chris, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. 2013. “Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 847–55. KDD ’13. New York, NY, USA: ACM.
Turner, Ryan, David Eriksson, Michael McCourt, Juha Kiili, Eero Laaksonen, Zhen Xu, and Isabelle Guyon. 2021. “Bayesian Optimization Is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020.” arXiv:2104.10201 [Cs, Stat], April.
Wang, Ruochen, Minhao Cheng, Xiangning Chen, Xiaocheng Tang, and Cho-Jui Hsieh. 2020. “Rethinking Architecture Selection in Differentiable NAS.” In.
