Split off from autoML.
The art of choosing the best hyperparameters for your ML model’s algorithms, of which there may be many.
Should you bother getting fancy about this? Ben Recht argues no, that random search is competitive with highly tuned Bayesian methods in hyperparameter tuning. Kevin Jamieson argues you can be cleverer than that though. Let’s inhale some hype.
Bayesian/surrogate optimisation
Loosely, we think of interpolating between observations of a loss surface and guessing where the optimal point is. See Bayesian optimisation. This is generic. It is not as popular in practice as I might have assumed, because it turns out to be fairly greedy with data and does not exploit problem-specific tricks such as early stopping, which saves time and is in any case a useful type of neural net regularisation.
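To make the surrogate idea concrete, here is a minimal sketch of Gaussian-process Bayesian optimisation with an expected-improvement acquisition function. The one-dimensional `loss` function and all the constants are hypothetical stand-ins for an expensive training run; a real setup would use a proper library rather than this hand-rolled loop.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical expensive "loss surface" over one hyperparameter x.
def loss(x):
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))   # a handful of initial observations
y = loss(X).ravel()

candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
gp = GaussianProcessRegressor(normalize_y=True)

for _ in range(20):
    gp.fit(X, y)                                   # interpolate the observed loss surface
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement: how much we expect each candidate to beat the incumbent.
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (best - mu) / sigma
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    ei = np.nan_to_num(ei)
    x_next = candidates[np.argmax(ei)]             # evaluate where improvement looks most likely
    X = np.vstack([X, [x_next]])
    y = np.append(y, loss(x_next))

print("best x:", X[np.argmin(y)].item(), "loss:", y.min())
```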
HT Cheng Soon Ong for pointing out Why machine learning algorithms are hard to tune (and the fix). His summary:
Machine learning hyperparameters are hard to tune. One way to think about why it is hard is that it is a Pareto front of multiple objectives. One way to solve that problem is to look at Lagrange multipliers, as proposed by a paper in 1988.
Differentiable hyperparameter optimisation
Random search
Just what you would think.
Adaptive random search
Random search now comes in an adaptive flavour that leverages the incremental nature of SGD fitting to kill off unpromising runs early, e.g. Hyperband (Lisha Li et al. 2017) and ASHA (Liam Li et al. 2020).
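A minimal sketch of the successive-halving idea at the heart of Hyperband/ASHA, assuming a hypothetical `train_for(config, epochs)` that would, in reality, fit a model for that budget and return a validation loss:

```python
import math
import random

# Hypothetical stand-in for "train this config for a given budget, report validation loss".
def train_for(config, epochs):
    return (config["lr"] - 0.01) ** 2 / math.log(epochs + 1)

def successive_halving(n_configs=27, min_epochs=1, eta=3):
    configs = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(n_configs)]
    epochs = min_epochs
    while len(configs) > 1:
        # evaluate every surviving config at the current (small) budget
        scores = [(train_for(c, epochs), c) for c in configs]
        scores.sort(key=lambda s: s[0])
        # keep the best 1/eta of them, and give the survivors eta times more budget
        configs = [c for _, c in scores[: max(1, len(configs) // eta)]]
        epochs *= eta
    return configs[0]

print(successive_halving())
```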
Implementations
Most of the implementations here use, internally, a surrogate model for parameter tuning, but wrap it with tools to control and launch experiments in parallel, handle early termination, etc.
Arranged so that the top few are hyped and popular, and after that come less renowned hipster options.
Not yet filed:
- Keras Tuner
- Ray Tune: scalable hyperparameter tuning
- NNI (Neural Network Intelligence): an open source AutoML toolkit for neural architecture search, model compression and hyperparameter tuning
- AutoGluon: an AutoML toolkit for deep learning
Determined
determined includes hyperparameter tuning which is not in fact based on a surrogate surface, but on early-stopping pruning of underperforming models in a random search, i.e. fancy random search.
Ray
Tune is a Python library for experiment execution and hyperparameter tuning at any scale. Core features:
- Launch a multi-node distributed hyperparameter sweep in less than 10 lines of code.
- Supports any machine learning framework, including PyTorch, XGBoost, MXNet, and Keras.
- Automatically manages checkpoints and logging to TensorBoard.
- Choose among state of the art algorithms such as Population Based Training (PBT), BayesOptSearch, HyperBand/ASHA. (Liam Li et al. 2020)
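A minimal sketch using the classic pre-2.0 `tune.run` API with an ASHA scheduler; the objective here is a hypothetical toy stand-in for a real training loop, and recent Ray versions have shifted to a different reporting interface, so treat this as indicative only.

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

# Hypothetical objective; a real one would train a model and report its validation score.
def trainable(config):
    for step in range(100):
        loss = (config["lr"] - 0.01) ** 2 / (step + 1)
        tune.report(loss=loss)   # reporting per step lets ASHA stop unpromising trials early

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=20,
    scheduler=ASHAScheduler(metric="loss", mode="min"),
)
print(analysis.get_best_config(metric="loss", mode="min"))
```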
Optuna
optuna (Akiba et al. 2019) supports fancy neural net training; similar to hyperopt AFAICT, except that it supports Covariance Matrix Adaptation, whatever that is? (see Hansen (2016)).
Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
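A minimal sketch of the define-by-run style: the search space is constructed imperatively as the trial runs, so it can branch. The objective and its hyperparameters are hypothetical toy stand-ins for a real model.

```python
import optuna

def objective(trial):
    # Hyperparameters are declared on the fly, so the space can depend on earlier choices.
    n_layers = trial.suggest_int("n_layers", 1, 3)
    penalty = 0.0
    for i in range(n_layers):
        width = trial.suggest_int(f"width_{i}", 4, 128, log=True)  # one width per layer
        penalty += (width - 32) ** 2
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    return penalty + (lr - 1e-3) ** 2   # stand-in for a validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```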
hyperopt
hyperopt (J. Bergstra, Yamins, and Cox 2013) is a Python library for optimizing over awkward search spaces with real-valued, discrete, and conditional dimensions.
Currently two algorithms are implemented in hyperopt:
- Random Search
- Tree of Parzen Estimators (TPE)
Hyperopt has been designed to accommodate Bayesian optimization algorithms based on Gaussian processes and regression trees, but these are not currently implemented.
All algorithms can be run either serially, or in parallel by communicating via MongoDB or Apache Spark.
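A minimal sketch of the `fmin` interface with TPE; the toy objective is a hypothetical stand-in for a real cross-validation score.

```python
from hyperopt import fmin, tpe, hp

# Hypothetical toy objective standing in for a real cross-validation loss.
def objective(x):
    return (x - 2) ** 2

best = fmin(
    fn=objective,
    space=hp.uniform("x", -10, 10),
    algo=tpe.suggest,   # Tree of Parzen Estimators; hyperopt also ships plain random search
    max_evals=100,
)
print(best)   # dict of the best hyperparameter values found
```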
auto-sklearn
auto-sklearn has recently been upgraded (Feurer et al. 2020); details TBD.
skopt
skopt (a.k.a. scikit-optimize)
[…] is a simple and efficient library to minimize (very) expensive and noisy black-box functions. It implements several methods for sequential model-based optimization.
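A minimal sketch of its `gp_minimize` entry point; the objective is a hypothetical toy stand-in for an expensive black-box function.

```python
from skopt import gp_minimize

# Hypothetical noisy, expensive black-box objective over one hyperparameter.
def objective(params):
    (x,) = params
    return (x - 2.0) ** 2

result = gp_minimize(
    objective,
    dimensions=[(-10.0, 10.0)],   # one real-valued search dimension
    n_calls=30,                   # budget of expensive evaluations
)
print(result.x, result.fun)
```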
spearmint
Spearmint is a package to perform Bayesian optimization according to the algorithms outlined in the paper (Snoek, Larochelle, and Adams 2012).
The code consists of several parts. It is designed to be modular to allow swapping out various ‘driver’ and ‘chooser’ modules. The ‘chooser’ modules are implementations of acquisition functions such as expected improvement, UCB or random. The drivers determine how experiments are distributed and run on the system. As the code is designed to run experiments in parallel (spawning a new experiment as soon as a result comes in), this requires some engineering.
Spearmint2 is similar, but more recently updated and fancier; however, it has a restrictive license prohibiting wide redistribution without the payment of fees. You may or may not wish to trust the implied level of development and support of 4 Harvard Professors, depending on your application.
Both of the Spearmint options (especially the latter) have opinionated choices of technology stack in order to do their optimizations, which means they can do more work for you, but require more setup, than a simple little thing like skopt. Depending on your computing environment this might be an overall plus or a minus.
SMAC
SMAC (AGPLv3)
(sequential model-based algorithm configuration) is a versatile tool for optimizing algorithm parameters (or the parameters of some other process we can run automatically, or a function we can evaluate, such as a simulation).
SMAC has helped us speed up both local search and tree search algorithms by orders of magnitude on certain instance distributions. Recently, we have also found it to be very effective for the hyperparameter optimization of machine learning algorithms, scaling better to high dimensions and discrete input dimensions than other algorithms. Finally, the predictive models SMAC is based on can also capture and exploit important information about the model domain, such as which input variables are most important.
We hope you find SMAC similarly useful. Ultimately, we hope that it helps algorithm designers focus on tasks that are more scientifically valuable than parameter tuning.
Python interface through pysmac.
AutoML
Won the land-grab for the name automl, but is now unmaintained.
A quick overview of the buzzwords: this project automates
- Analytics (pass in data, and auto_ml will tell you the relationship of each variable to what it is you’re trying to predict).
- Feature Engineering (particularly around dates, and soon, NLP).
- Robust Scaling (turning all values into their scaled versions between the range of 0 and 1, in a way that is robust to outliers, and works with sparse matrices).
- Feature Selection (picking only the features that actually prove useful).
- Data formatting (turning a list of dictionaries into a sparse matrix, one-hot encoding categorical variables, taking the natural log of y for regression problems).
- Model Selection (which model works best for your problem).
- Hyperparameter Optimization (what hyperparameters work best for that model).
- Ensembling Subpredictors (automatically training up models to predict smaller problems within the meta problem).
- Ensembling Weak Estimators (automatically training up weak models on the larger problem itself, to inform the meta-estimator’s decision).