Optimising an objective defined as a weighted sum of multiple objectives with unknown weights can be difficult. This comes up in multi-task learning, for example, or when weighting regularisation terms in regression, including in neural networks.
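A minimal sketch of the difficulty, using two made-up scalar objectives (the functions `f`, `g` and the weights are illustrative, not from the linked post): every choice of weight lands at a different point on the Pareto front, so there is no single "right" weight to tune towards.

```python
# Toy problem: two competing objectives in one scalar parameter x.
# f pulls x towards 0, g pulls x towards 1; no single x minimises both.
def f(x):
    return x ** 2

def g(x):
    return (x - 1) ** 2

def minimise_weighted(w, lr=0.05, steps=2000):
    """Plain gradient descent on the weighted sum w*f + (1-w)*g."""
    x = 0.0
    for _ in range(steps):
        x -= lr * (w * 2 * x + (1 - w) * 2 * (x - 1))
    return x

# Each weight gives a different trade-off between f and g:
solutions = {w: minimise_weighted(w) for w in (0.1, 0.5, 0.9)}
```

For this toy, the weighted-sum minimiser is x = 1 - w, so sweeping the weight simply slides the solution along the Pareto front rather than converging to one preferred answer.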
HT Cheng Soon Ong for pointing out Jonas Degrave and Ira Korshunova’s illustrated explanation of a tricky thing, Why machine learning algorithms are hard to tune (and the fix). His summary:
Machine learning hyperparameters are hard to tune. One way to see why is that tuning them traces out a Pareto front over multiple objectives. One way to solve the problem is to use Lagrange multipliers, as proposed back in 1988 (Platt and Barr 1988).
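The fix, roughly: treat the secondary objective as a constraint and optimise the Lagrangian by gradient descent on the parameters and gradient ascent on the multiplier. Here is a toy sketch in that spirit (the objectives, the constraint level `eps`, and the learning rate are illustrative assumptions, not the algorithm from the paper verbatim):

```python
# Two toy objectives; impose g(x) <= eps as a constraint instead of a weight.
def f(x):
    return x ** 2

def g(x):
    return (x - 1) ** 2

eps = 0.25      # assumed constraint level: require g(x) <= 0.25
lr = 0.05
x, lam = 0.0, 0.0
for _ in range(5000):
    # Lagrangian: L(x, lam) = f(x) + lam * (g(x) - eps)
    grad_x = 2 * x + lam * 2 * (x - 1)
    x -= lr * grad_x            # gradient descent on the parameter
    lam += lr * (g(x) - eps)    # gradient ascent on the multiplier
    lam = max(lam, 0.0)         # multiplier stays non-negative
```

Instead of hand-tuning a weight, you specify the constraint level you actually care about, and the multiplier adjusts itself: here it settles where the constraint is active (x near 0.5, lam near 1).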
A follow-up post describes how to make machine learning algorithms tunable.