Optimisation



Crawling through alien landscapes in the fog, looking for mountain peaks.

I’m mostly interested in continuous optimisation, but, you know, combinatorial optimisation is a whole thing.

A vast topic, with many sub-topics. I have neither the time nor the expertise to construct a detailed map of these. As Moritz Hardt observes (and this is just in the convex context),

It’s easy to spend a semester of convex optimization on various guises of gradient descent alone. Simply pick one of the following variants and work through the specifics of the analysis: conjugate, accelerated, projected, conditional, mirrored, stochastic, coordinate, online. This is to name a few. You may also choose various pairs of attributes such as “accelerated coordinate” descent. Many triples are also valid such as “online stochastic mirror” descent. An expert unlike me would know exactly which triples are admissible. You get extra credit when you use “subgradient” instead of “gradient”. This is really only the beginning of optimization and it might already seem confusing.

When I was even younger and yet more foolish I decided the divide was between online optimisation and offline optimisation, which in hindsight is neither a clear nor a useful taxonomy for the problems facing me. Now there are more tightly topical pages, such as gradient descent, 2nd-order methods, surrogate optimisation and constrained optimisation, and I shall create additional ones as circumstances demand.

TODO: insert brief taxonomy here.

🏗 Diagram.

See Zeyuan Allen-Zhu and Elad Hazan on their teaching strategy, which also gives a split into 16 different configurations:

The following dilemma is encountered by many of my friends when teaching basic optimization: which variant/proof of gradient descent should one start with? Of course, one needs to decide on which depth of convex analysis one should dive into, and decide on issues such as “should I define strong-convexity?”, “discuss smoothness?”, “Nesterov acceleration?”, etc.

[…] If one wishes to go into more depth, usually in convex optimization courses, one covers the full spectrum of different smoothness/ strong-convexity/ acceleration/ stochasticity regimes, each with a separate analysis (a total of 16 possible configurations!)

This year I’ve tried something different in COS511 @ Princeton, which turns out also to have research significance. We’ve covered basic GD for well-conditioned functions, i.e. smooth and strongly-convex functions, and then extended these results by reduction to all other cases! A (simplified) outline of this teaching strategy is given in chapter 2 of Introduction to Online Convex Optimization.

Classical Strong-Convexity and Smoothness Reductions:

Given any optimization algorithm A for the well-conditioned case (i.e., strongly convex and smooth case), we can derive an algorithm for smooth but not strongly convex functions as follows.

Given a non-strongly convex but smooth objective \(f\), define a new objective by \(f_1(x)=f(x)+\epsilon\|x\|^2\).

It is straightforward to see that \(f_1\) differs from \(f\) by at most ϵ times a distance factor, and in addition it is ϵ-strongly convex. Thus, one can apply A to minimize \(f_1\) and get a solution which is not too far from the optimal solution for \(f\) itself. This simplistic reduction yields an almost optimal rate, up to logarithmic factors.
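
To make the reduction concrete, here is a minimal numerical sketch (my own toy problem, not theirs): a rank-deficient least-squares objective is smooth but not strongly convex, so we add \(\epsilon\|x\|^2\) and hand the result to plain gradient descent, which the well-conditioned theory covers.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))   # wide matrix: A'A is rank-deficient, so f is not strongly convex
b = rng.normal(size=20)
eps = 1e-3                      # the epsilon of the reduction

def grad_f(x):                  # gradient of f(x) = 0.5 * ||Ax - b||^2
    return A.T @ (A @ x - b)

def grad_f1(x):                 # gradient of f_1(x) = f(x) + eps * ||x||^2
    return grad_f(x) + 2 * eps * x

# f_1 is L-smooth with L = ||A||_2^2 + 2*eps and 2*eps-strongly convex,
# so plain gradient descent with step size 1/L applies.
L = np.linalg.norm(A, 2) ** 2 + 2 * eps
x = np.zeros(50)
for _ in range(5000):
    x = x - grad_f1(x) / L

print("residual on the original objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```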

Keywords: Complementary slackness theorem, High or very high dimensional methods, approximate method, Lagrange multipliers, primal and dual problems, fixed point methods, gradient, subgradient, proximal gradient, optimal control problems, convexity, sparsity, ways to avoid wrecking finding the extrema of perfectly simple little 10000-parameter functions before everyone observes that I am a fool in the guise of a mathematician but everyone is not there because I wandered off the optimal path hours ago, and now I am alone and lost in a valley of lower-case Greek letters.

See also geometry of fitness landscapes, expectation maximisation, matrix factorisations, discrete optimisation, nature-inspired “meta-heuristic” optimisation.

History

Grötschel (2012)

Brief intro material

Textbooks

Whole free textbooks online. Mostly convex.

Alternating Direction Method of Multipliers

Dunno. It’s everywhere, though. (S. Boyd 2010)

In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn’s method of partial inverses, Dykstra’s alternating projections, Bregman iterative algorithms for \(\ell_1\) problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
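
For concreteness, a minimal ADMM sketch for the lasso, in the spirit of the generic splitting scheme surveyed there; the problem data and the penalty parameter \(\rho\) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 30))
b = rng.normal(size=100)
lam, rho = 0.5, 1.0                                # regularisation weight and ADMM penalty

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

# Split the lasso as f(x) = 0.5*||Ax - b||^2, g(z) = lam*||z||_1, subject to x = z.
x, z, u = np.zeros(30), np.zeros(30), np.zeros(30)
AtA, Atb = A.T @ A, A.T @ b
x_solve = np.linalg.inv(AtA + rho * np.eye(30))    # cache the x-update system

for _ in range(200):
    x = x_solve @ (Atb + rho * (z - u))            # x-update: a ridge-like solve
    z = soft_threshold(x + u, lam / rho)           # z-update: proximal step on the l1 term
    u = u + x - z                                  # scaled dual update

print("nonzeros in the lasso solution:", np.count_nonzero(np.abs(z) > 1e-6))
```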

Optimisation on manifolds

See Nicolas Boumal’s introductory blog post.

Optimization on manifolds is about solving problems of the form

\[\mathrm{minimize}_{x\in\mathcal{M}} f(x),\]

where \(\mathcal{M}\) is a nice, known manifold. By “nice”, I mean a smooth, finite-dimensional Riemannian manifold.

Practical examples include the following (and all possible products of these):

  • Euclidean spaces
  • The sphere (set of vectors or matrices with unit Euclidean norm)
  • The Stiefel manifold (set of orthonormal matrices)
  • The Grassmann manifold (set of linear subspaces of a given dimension; this is a quotient space)
  • The rotation group (set of orthogonal matrices with determinant +1)
  • The manifold of fixed-rank matrices
  • The same, further restricted to positive semidefinite matrices
  • The cone of (strictly) positive definite matrices

Conceptually, the key point is to think of optimization on manifolds as unconstrained optimization: we do not think of \(\mathcal{M}\) as being embedded in a Euclidean space. Rather, we think of \(\mathcal{M}\) as being “the only thing that exists,” and we strive for intrinsic methods. Besides making for elegant theory, it also makes it clear how to handle abstract spaces numerically (such as the Grassmann manifold for example); and it gives algorithms the “right” invariances (computations do not depend on an arbitrarily chosen representation of the manifold).

There are at least two reasons why this class of problems is getting much attention lately. First, it is because optimization problems over the aforementioned sets (mostly matrix sets) come up pervasively in applications, and at some point it became clear that the intrinsic viewpoint leads to better algorithms, as compared to general-purpose constrained optimization methods (where \(\mathcal{M}\) is considered as being inside a Euclidean space \(\mathcal{E}\), and algorithms move in \(\mathcal{E}\), while penalizing distance to \(\mathcal{M}\)). The second is that, as I will argue momentarily, Riemannian manifolds are “the right setting” to talk about unconstrained optimization. And indeed, there is a beautiful book by [Absil, Sepulchre, Mahony], called Optimization algorithms on matrix manifolds (freely available), that shows how the classical methods for unconstrained optimization (gradient descent, Newton, trust-regions, conjugate gradients…) carry over seamlessly to the more general Riemannian framework.
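
As a bare-hands illustration of those ingredients, here is Riemannian gradient descent on the unit sphere, minimising the Rayleigh quotient \(x^\top M x\) (whose minimiser is an eigenvector of the smallest eigenvalue). Toolboxes such as Manopt (Boumal et al. 2014) or Pymanopt (Townsend, Koep, and Weichwald 2016) automate exactly these steps, the tangent-space projection and the retraction; the fixed step size here is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(10, 10))
M = M + M.T                          # a symmetric matrix, invented for illustration

x = rng.normal(size=10)
x /= np.linalg.norm(x)               # start on the sphere
step = 0.01

for _ in range(2000):
    egrad = 2 * M @ x                # Euclidean gradient of x'Mx
    rgrad = egrad - (x @ egrad) * x  # project onto the tangent space at x
    x = x - step * rgrad             # step in the tangent direction...
    x /= np.linalg.norm(x)           # ...and retract back onto the sphere

print("Riemannian estimate: ", x @ M @ x)
print("smallest eigenvalue: ", np.linalg.eigvalsh(M).min())
```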

Gradient-free optimization

Not all the methods described here use gradient information, but it’s frequently assumed to be something you can access easily. It’s worth considering which objectives you can optimise easily.

But not all objectives are easily differentiable, even when the parameters are continuous. For example, if you are getting your measurements not from a mathematical model but from a physical experiment, you can’t differentiate them, since reality itself is usually not analytically differentiable. In this latter case you are getting close to a question of online experiment design, as in ANOVA, with the further constraint that your function evaluations are possibly stupendously expensive. See Bayesian optimisation for one approach to this in the context of experiment design.

In situations like this we use gradient-free methods, such as simulated annealing, or fall back on numerical gradient approximations.
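
A minimal sketch of the derivative-free route via scipy, with the “experiment” stubbed out by a toy quadratic: Nelder-Mead needs only function evaluations, so the objective can wrap a black-box simulator or measurement.

```python
import numpy as np
from scipy.optimize import minimize

def expensive_black_box(theta):
    # stand-in for a physical experiment or simulator: no gradients available
    return float(np.sum((theta - 3.0) ** 2))

result = minimize(expensive_black_box, x0=np.zeros(4), method="Nelder-Mead")
print(result.x)   # should land near [3, 3, 3, 3]
```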

“Meta-heuristic” methods

Biologically-inspired or arbitrary. Evolutionary algorithms, particle swarm optimisation, ant colony optimisation, harmony search. A lot of the tricks from these are adopted into mainstream stochastic methods. Some not.

See biomimetic algorithms for the care and husbandry of such.

Annealing and Monte Carlo optimisation methods

Simulated annealing: constructing a process to yield maximally likely estimates for the parameters. This has a statistical-mechanics justification that makes it attractive to physicists, but it’s generally useful. You don’t necessarily need a gradient here, just the ability to evaluate something interpretable as a “likelihood ratio”. Long story. I don’t yet cover this at Monte Carlo methods but I should.
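
A minimal simulated-annealing sketch on an invented multimodal objective; the proposal scale and cooling schedule are arbitrary choices, and all the algorithm needs is the ability to evaluate the objective (or an energy / negative log-likelihood).

```python
import numpy as np

def energy(x):                                    # toy multimodal objective
    return np.sum(x ** 2) + 3 * np.sum(np.cos(3 * x))

rng = np.random.default_rng(0)
x = rng.normal(size=2)
best = x.copy()

for t in range(1, 20001):
    temperature = 1.0 / np.log(1 + t)             # slow logarithmic cooling
    proposal = x + 0.3 * rng.normal(size=2)       # random local move
    delta = energy(proposal) - energy(x)
    # accept downhill moves always; uphill moves with Metropolis probability
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = proposal
        if energy(x) < energy(best):
            best = x.copy()

print("best point found:", best, "with energy", energy(best))
```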

Elad Hazan, The two cultures of optimization:

The standard curriculum in high school math includes elementary functional analysis, and methods for finding the minima, maxima and saddle points of a single dimensional function. When moving to high dimensions, this becomes beyond the reach of your typical high-school student: mathematical optimization theory spans a multitude of involved techniques in virtually all areas of computer science and mathematics.

Iterative methods, in particular, are the most successful algorithms for large-scale optimization and the most widely used in machine learning. Of these, most popular are first-order gradient-based methods due to their very low per-iteration complexity.

However, way before these became prominent, physicists needed to solve large scale optimization problems, since the time of the Manhattan project at Los Alamos. The problems that they faced looked very different, essentially simulation of physical experiments, as were the solutions they developed. The Metropolis algorithm is the basis for randomized optimization methods and Markov Chain Monte Carlo algorithms. […]

In our recent paper (Abernethy and Hazan 2016), we show that for convex optimization, the heat path and central path for IPM for a particular barrier function (called the entropic barrier, following the terminology of the recent excellent work of Bubeck and Eldan) are identical! Thus, in some precise sense, the two cultures of optimization have been studying the same object in disguise and using different techniques.

Expectation maximization

See expectation maximisation.

Parallel

Classic, basic SGD walks through the data set example-wise or feature-wise, but this doesn’t work in parallel, so you tend to go for mini-batch gradient descent so that you can at least vectorize. Apparently you can make SGD work in “true” parallel across communication-constrained cores, but I don’t yet understand how.
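
To fix ideas, a minimal mini-batch SGD sketch for least squares; batch size and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10_000, 20))                        # synthetic data
w_true = rng.normal(size=20)
y = A @ w_true + 0.1 * rng.normal(size=10_000)

w = np.zeros(20)
batch_size, lr = 64, 0.01

for step in range(2000):
    idx = rng.integers(0, A.shape[0], size=batch_size)   # sample a mini-batch of rows
    Ab, yb = A[idx], y[idx]
    grad = Ab.T @ (Ab @ w - yb) / batch_size             # stochastic gradient estimate
    w -= lr * grad                                       # the batch matmuls vectorise nicely

print("parameter error:", np.linalg.norm(w - w_true))
```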

Implementations

Specialised optimisation software.

See also statistical software, and gradient descent

  • GENO (Soeren Laue, Mitterreiter, and Giesen 2019; Sören Laue, Blacher, and Giesen 2022)

    GENO provides optimization solvers for everyone. You can enter your optimization problem in an easy-to-read modeling language in the code editor below. Python code is then generated automatically that can solve this class of optimization problems on the CPU or on the GPU. The automatically generated solvers are often as fast as handwritten, specialized solvers…

    The GENO solver combines an Augmented Lagrangian approach with a limited memory quasi-Newton method (L-BFGS-B) that can handle also bound constraints on the variables. Quasi-Newton methods are very efficient for problems involving thousands of optimization variables. The GENO solver is then instantiated by the automatically generated methods for computing function values and gradients that are provided by this website to solve the specified class of optimization problems. This approach is very well suited for optimization problems originating from classical machine learning problems.

    Looks useful for an interesting class of semidefinite programming problems.

  • ensmallen (Bhardwaj et al. 2021)

    We present ensmallen, a fast and flexible C++ library for mathematical optimization of arbitrary user-supplied functions, which can be applied to many machine learning problems. Several types of optimizations are supported, including differentiable, separable, constrained, and categorical objective functions. The library provides many pre-built optimizers (including numerous variants of SGD and Quasi-Newton optimizers) as well as a flexible framework for implementing new optimizers and objective functions. Implementation of a new optimizer requires only one method and a new objective function requires typically one or two C++ functions. This can aid in the quick implementation and prototyping of new machine learning algorithms. Due to the use of C++ template metaprogramming, ensmallen is able to support compiler optimizations that provide fast runtimes. Empirical comparisons show that ensmallen is able to outperform other optimization frameworks (like Julia and SciPy), sometimes by large margins. The library is distributed under the BSD license and is ready for use in production environments.

  • SPORCO a Python package for solving optimisation problems with sparsity-inducing regularisation. These consist primarily of sparse coding and dictionary learning problems, including convolutional sparse coding and dictionary learning, but there is also support for other problems such as Total Variation regularisation and Robust PCA. In the current version, all of the optimisation algorithms are based on the Alternating Direction Method of Multipliers (ADMM).

  • scipy.optimize.minimize: The Python default. Includes many different algorithms that can do whatever you want. Failure modes are opaque, the methods are batch-only and they don’t support warm restarts, which is a thing for me, but it’s a good starting point unless you have reason to prefer others (i.e. if all your data does not fit in RAM, don’t bother). A minimal usage sketch appears after this list.

  • spams

    SPAMS (SPArse Modeling Software) is an optimization toolbox for solving various sparse estimation problems: dictionary learning and matrix factorization (NMF, sparse PCA, …); solving sparse decomposition problems with LARS, coordinate descent, OMP, SOMP, proximal methods; solving structured sparse decomposition problems (\(\ell_1/\ell_2\), \(\ell_1/\ell_\infty\), sparse group lasso, tree-structured regularization, structured sparsity with overlapping groups, …). It is developed by Julien Mairal, with the collaboration of Francis Bach, Jean Ponce, Guillermo Sapiro, Rodolphe Jenatton and Guillaume Obozinski. It is coded in C++ with a Matlab interface. Recently, interfaces for R and Python have been developed by Jean-Paul Chieze (INRIA), and archetypal analysis was written by Yuansi Chen (UC Berkeley).

  • picos

    …is a user friendly interface to several conic and integer programming solvers, very much like YALMIP or CVX under MATLAB.

    The main motivation for PICOS is to have the possibility to enter an optimization problem as a high level model, and to be able to solve it with several different solvers. Multidimensional and matrix variables are handled in a natural fashion, which makes it painless to formulate a SDP or a SOCP. This is very useful for educational purposes, and to quickly implement some models and test their validity on simple examples.

    also maintains a list of other solvers.

  • Manifold optimisation implementations (for e.g. learning on manifolds)

  • cvxopt

    … is a free software package for convex optimization based on the Python programming language. It can be used with the interactive Python interpreter, on the command line by executing Python scripts, or integrated in other software via Python extension modules. Its main purpose is to make the development of software for convex optimization applications straightforward by building on Python’s extensive standard library and on the strengths of Python as a high-level programming language. […]

    • efficient Python classes for dense and sparse matrices (real and complex), with Python indexing and slicing and overloaded operations for matrix arithmetic

    • an interface to most of the double-precision real and complex BLAS

    • an interface to LAPACK routines for solving linear equations and least-squares problems, matrix factorisations (LU, Cholesky, LDLT and QR), symmetric eigenvalue and singular value decomposition, and Schur factorization

    • an interface to the fast Fourier transform routines from FFTW

    • interfaces to the sparse LU and Cholesky solvers from UMFPACK and CHOLMOD

    • routines for linear, second-order cone, and semidefinite programming problems

    • routines for nonlinear convex optimization

    • interfaces to the linear programming solver in GLPK, the semidefinite programming solver in DSDP5, and the linear, quadratic and second-order cone programming solvers in MOSEK

    • a modeling tool for specifying convex piecewise-linear optimization problems.

    seems to reinvent half of numpy and scipy. Also seems to be used by all the other Python packages.

  • pyomo

    Pyomo is a Python-based open-source software package that supports a diverse set of optimization capabilities for formulating, solving, and analyzing optimization models.

    A core capability of Pyomo is modeling structured optimization applications. Pyomo can be used to define general symbolic problems, create specific problem instances, and solve these instances using commercial and open-source solvers. Pyomo’s modeling objects are embedded within a full-featured high-level programming language providing a rich set of supporting libraries, which distinguishes Pyomo from other algebraic modeling languages like AMPL, AIMMS and GAMS.…

    Pyomo was formerly released as the Coopr software library.

  • cvxpy

    …is a Python-embedded modeling language for convex optimization problems. It allows you to express your problem in a natural way that follows the math, rather than in the restrictive standard form required by solvers.

    So it’s a DSL for convex constraint programming. Can be extended heuristically to nonconvex constraints by…

  • ncvx

    … is a package for modeling and solving problems with convex objectives and decision variables from a nonconvex set. This package provides heuristics such as NC-ADMM (a variation of the alternating direction method of multipliers for nonconvex problems) and relax-round-polish, which can be viewed as a majorization-minimization algorithm. The solver methods provided and the syntax for constructing problems are discussed in our associated paper.

  • NLopt

    … is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms. Its features include:

    • Callable from C, C++, Fortran, Matlab or GNU Octave, Python, GNU Guile, Julia, GNU R, Lua, and OCaml.

    • A common interface for many different algorithms—try a different algorithm just by changing one parameter.

    • Support for large-scale optimization (some algorithms scalable to millions of parameters and thousands of constraints)…

    • Algorithms using function values only (derivative-free) and also algorithms exploiting user-supplied gradients.

  • TFOCS:

    …(pronounced tee-fox) provides a set of Matlab templates, or building blocks, that can be used to construct efficient, customized solvers for a variety of convex models, including in particular those employed in sparse recovery applications. It was conceived and written by Stephen Becker, Emmanuel J. Candès and Michael Grant.

  • stan is famous for Monte Carlo sampling, but it also does deterministic optimisation using automatic differentiation. This is a luxurious “full service” option, although with limited scope for customisation. I am curious how it performs in very high dimensions, as L-BFGS does not scale forever.

    Optimization algorithms:

    • Limited-memory BFGS (Stan’s default optimization algorithm)

    • BFGS

    • Laplace’s method for classical standard error estimates and approximate Bayesian posteriors

  • Optim.jl is a generic optimizer for julia

  • JuMP.jl is a domain-specific modeling language for mathematical optimization embedded in Julia. It currently supports a number of open-source and commercial solvers (Bonmin, Cbc, Clp, Couenne, CPLEX, ECOS, FICO Xpress, GLPK, Gurobi, Ipopt, KNITRO, MOSEK, NLopt, SCS, BARON) for a variety of problem classes, including linear programming, (mixed) integer programming, second-order conic programming, semidefinite programming, and nonlinear programming.

  • NLsolve.jl solves systems of nonlinear equations. […]

    The package is also able to solve mixed complementarity problems, which are similar to systems of nonlinear equations, except that the equality to zero is allowed to become an inequality if some boundary condition is satisfied. See further below for a formal definition and the related commands.

    Since there is some overlap between optimizers and nonlinear solvers, this package borrows some ideas from the Optim package, and depends on it for linesearch algorithms.

Many of these solvers optionally use commercial backends such as Mosek.
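
As promised in the scipy.optimize.minimize entry above, a minimal usage sketch: L-BFGS-B with a user-supplied gradient and box constraints, on an invented quadratic test problem.

```python
import numpy as np
from scipy.optimize import minimize

Q = np.diag([1.0, 10.0, 100.0])        # ill-conditioned quadratic, for illustration
c = np.array([1.0, -2.0, 3.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

def gradient(x):
    return Q @ x + c

result = minimize(objective, x0=np.zeros(3), jac=gradient,
                  method="L-BFGS-B", bounds=[(-1.0, 1.0)] * 3)
print(result.x, result.fun)
```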

To file

Miscellaneous optimisation techniques suggested on LinkedIn

The whole world of exotic specialized optimisers. See, e.g., Nuit Blanche name-dropping Bregman iteration, alternating methods, augmented Lagrangians…

Primal/dual problems

🏗

Majorization-minorization

🏗

Difference-of-Convex-objectives

When your objective function is not convex but you can represent it as a difference of convex functions, somehow or other, you can use DC optimisation (Gasso, Rakotomamonjy, and Canu 2009). (I don’t think this guarantees you a global optimum, but rather faster convergence to a local one.)
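
A minimal DCA (difference-of-convex algorithm) sketch on an invented toy problem, not the method of the cited paper: write \(f(x)=g(x)-h(x)\) with both parts convex, then repeatedly linearise \(h\) at the current iterate and minimise the resulting convex surrogate. Note that it converges to a stationary point, not necessarily a global optimum.

```python
import numpy as np

# Toy nonconvex objective f(x) = sum(x_i^4 - x_i^2), decomposed as
# g(x) = sum(x_i^4) minus h(x) = sum(x_i^2), both convex.
x = np.array([1.5, -0.2, 0.7])                 # arbitrary start

for _ in range(100):
    grad_h = 2 * x                             # gradient of the subtracted convex part
    # convex surrogate: argmin_x sum(x_i^4) - <grad_h, x>, solved coordinate-wise
    x = np.cbrt(grad_h / 4)

print(x)   # each coordinate lands on a stationary point, here +/- 1/sqrt(2)
```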

References

Abadi, Martín, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, and Matthieu Devin. 2016. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.” arXiv Preprint arXiv:1603.04467.
Abernethy, Jacob, and Elad Hazan. 2016. Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier.” In International Conference on Machine Learning, 2520–28. PMLR.
Achlioptas, Dimitris, Assaf Naor, and Yuval Peres. 2005. Rigorous Location of Phase Transitions in Hard Optimization Problems.” Nature 435 (7043): 759–64.
Agarwal, Alekh, Olivier Chapelle, Miroslav Dudík, and John Langford. 2014. A Reliable Effective Terascale Linear Learning System.” Journal of Machine Learning Research 15 (1): 1111–33.
Agarwal, Naman, Brian Bullins, and Elad Hazan. 2016. Second Order Stochastic Optimization in Linear Time.” arXiv:1602.03943 [Cs, Stat], February.
Allen-Zhu, Zeyuan, and Elad Hazan. 2016. Optimal Black-Box Reductions Between Optimization Objectives.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 1606–14. Curran Associates, Inc.
Arnold, Sébastien M. R., and Chunming Wang. 2017. Accelerating SGD for Distributed Deep-Learning Using Approximated Hessian Matrix.” In arXiv:1709.05069 [Cs].
Ba, Jimmy, Roger Grosse, and James Martens. 2016. Distributed Second-Order Optimization Using Kronecker-Factored Approximations,” November.
Bach, Francis. 2013. Convex Relaxations of Structured Matrix Factorizations.” arXiv:1309.3117 [Cs, Math], September.
Bach, Francis R., and Eric Moulines. 2013. Non-Strongly-Convex Smooth Stochastic Approximation with Convergence Rate O(1/n).” In arXiv:1306.2119 [Cs, Math, Stat], 773–81.
Bach, Francis, Rodolphe Jenatton, and Julien Mairal. 2011. Optimization With Sparsity-Inducing Penalties. Foundations and Trends® in Machine Learning. Now Publishers Inc.
Bach, Francis, and Eric Moulines. 2011. Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning.” In Advances in Neural Information Processing Systems (NIPS), –. Spain.
Battiti, Roberto. 1992. First-and Second-Order Methods for Learning: Between Steepest Descent and Newton’s Method.” Neural Computation 4 (2): 141–66.
Beck, Amir, and Marc Teboulle. 2009. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems.” SIAM Journal on Imaging Sciences 2 (1): 183–202.
Bertsimas, D., and I. Popescu. 2005. Optimal Inequalities in Probability Theory: A Convex Optimization Approach.” SIAM Journal on Optimization 15 (3): 780–804.
Bhardwaj, Shikhar, Ryan R. Curtin, Marcus Edel, Yannis Mentekidis, and Conrad Sanderson. 2021. “Ensmallen: A Flexible C++ Library for Efficient Function Optimization.”
Bian, Wei, Xiaojun Chen, and Yinyu Ye. 2014. Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization.” Mathematical Programming 149 (1-2): 301–27.
Bordes, Antoine, Léon Bottou, and Patrick Gallinari. 2009. SGD-QN: Careful Quasi-Newton Stochastic Gradient Descent.” Journal of Machine Learning Research 10 (December): 1737–54.
Botev, Aleksandar, Guy Lever, and David Barber. 2016. Nesterov’s Accelerated Gradient and Momentum as Approximations to Regularised Update Descent.” arXiv:1607.01981 [Cs, Stat], July.
Botev, Zdravko I., and Chris J. Lloyd. 2015. Importance Accelerated Robbins-Monro Recursion with Applications to Parametric Confidence Limits.” Electronic Journal of Statistics 9 (2): 2058–75.
Bottou, Léon. 1991. Stochastic Gradient Learning in Neural Networks.” In Proceedings of Neuro-Nîmes 91. Nimes, France: EC2.
———. 1998. Online Algorithms and Stochastic Approximations.” In Online Learning and Neural Networks, edited by David Saad, 17:142. Cambridge, UK: Cambridge University Press.
———. 2010. Large-Scale Machine Learning with Stochastic Gradient Descent.” In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010), 177–86. Paris, France: Springer.
———. 2012. Stochastic Gradient Descent Tricks.” In Neural Networks: Tricks of the Trade, 421–36. Lecture Notes in Computer Science. Springer, Berlin, Heidelberg.
Bottou, Léon, and Olivier Bousquet. 2008. The Tradeoffs of Large Scale Learning.” In Advances in Neural Information Processing Systems, edited by J.C. Platt, D. Koller, Y. Singer, and S. Roweis, 20:161–68. NIPS Foundation (http://books.nips.cc).
Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. 2016. Optimization Methods for Large-Scale Machine Learning.” arXiv:1606.04838 [Cs, Math, Stat], June.
Bottou, Léon, and Yann LeCun. 2004. Large Scale Online Learning.” In Advances in Neural Information Processing Systems 16, edited by Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf. Cambridge, MA: MIT Press.
Boumal, Nicolas, Bamdev Mishra, P.-A. Absil, and Rodolphe Sepulchre. 2014. Manopt, a Matlab Toolbox for Optimization on Manifolds.” Journal of Machine Learning Research 15: 1455–59.
Boyd, Stephen. 2010. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Vol. 3. Now Publishers Inc.
Boyd, Stephen P., and Lieven Vandenberghe. 2004. Convex Optimization. Cambridge, UK ; New York: Cambridge University Press.
Bubeck, Sébastien. 2015. Convex Optimization: Algorithms and Complexity. Vol. 8. Foundations and Trends in Machine Learning. Now Publishers.
Bubeck, Sébastien, and Ronen Eldan. 2014. The Entropic Barrier: A Simple and Optimal Universal Self-Concordant Barrier.” arXiv:1412.1587 [Cs, Math], December.
Cevher, Volkan, Stephen Becker, and Mark Schmidt. 2014. Convex Optimization for Big Data.” IEEE Signal Processing Magazine 31 (5): 32–43.
Chartrand, R., and Wotao Yin. 2008. Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72.
Chen, Xiaojun. 2012. Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99.
Chen, Xiaojun, Dongdong Ge, Zizhuo Wang, and Yinyu Ye. 2012. Complexity of Unconstrained L_2-L_p.” Mathematical Programming 143 (1-2): 371–83.
Cho, Minhyung, Chandra Shekhar Dhir, and Jaehyung Lee. 2015. Hessian-Free Optimization for Learning Deep Multidimensional Recurrent Neural Networks.” In Advances In Neural Information Processing Systems.
Choromanska, Anna, MIkael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. 2015. The Loss Surfaces of Multilayer Networks.” In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, 192–204.
Chretien, Stephane. 2008. An Alternating L1 Approach to the Compressed Sensing Problem.” arXiv:0809.0660 [Stat], September.
Combettes, Patrick L., and Jean-Christophe Pesquet. 2008. A Proximal Decomposition Method for Solving Convex Variational.” Inverse Problems 24 (6): 065014.
Dalalyan, Arnak S. 2017. Further and Stronger Analogy Between Sampling and Optimization: Langevin Monte Carlo and Gradient Descent.” arXiv:1704.04752 [Math, Stat], April.
Dauphin, Yann, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. 2014. Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization.” In Advances in Neural Information Processing Systems 27, 2933–41. Curran Associates, Inc.
Defazio, Aaron, Francis Bach, and Simon Lacoste-Julien. 2014. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives.” In Advances in Neural Information Processing Systems 27.
Dennis, J. E., and Robert B. Schnabel. 1989. Chapter I A View of Unconstrained Optimization.” In Handbooks in Operations Research and Management Science, 1:1–72. Optimization. Elsevier.
Dennis, J., and R. Schnabel. 1996. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Vol. 16. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics.
DeVore, Ronald A. 1998. Nonlinear Approximation.” Acta Numerica 7 (January): 51–150.
Ding, Lijun, and Madeleine Udell. 2018. Frank-Wolfe Style Algorithms for Large Scale Optimization,” August.
Duchi, John, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” Journal of Machine Learning Research 12 (Jul): 2121–59.
Fletcher, R., and M. J. D. Powell. 1963. A Rapidly Convergent Descent Method for Minimization.” The Computer Journal 6 (2): 163–68.
Forrester, Alexander I. J., and Andy J. Keane. 2009. Recent Advances in Surrogate-Based Optimization.” Progress in Aerospace Sciences 45 (1–3): 50–79.
Friedlander, Michael P., and Mark Schmidt. 2012. Hybrid Deterministic-Stochastic Methods for Data Fitting.” SIAM Journal on Scientific Computing 34 (3): A1380–1405.
Friedman, Jerome H. 2002. Stochastic Gradient Boosting.” Computational Statistics & Data Analysis, Nonlinear Methods and Data Mining, 38 (4): 367–78.
Friedman, Jerome, Trevor Hastie, Holger Höfling, and Robert Tibshirani. 2007. Pathwise Coordinate Optimization.” The Annals of Applied Statistics 1 (2): 302–32.
Friedman, Jerome, Trevor Hastie, and Rob Tibshirani. 2010. Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software 33 (1): 1–22.
Gallier, Jean, and Jocelyn Quaintance. 2022. Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning.
Gasso, G., A. Rakotomamonjy, and S. Canu. 2009. Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming.” IEEE Transactions on Signal Processing 57 (12): 4686–98.
Ghadimi, Saeed, and Guanghui Lan. 2013a. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming.” SIAM Journal on Optimization 23 (4): 2341–68.
———. 2013b. Accelerated Gradient Methods for Nonconvex Nonlinear and Stochastic Programming.” arXiv:1310.3787 [Math], October.
Giles, Mike B. 2008. Collected Matrix Derivative Results for Forward and Reverse Mode Algorithmic Differentiation.” In Advances in Automatic Differentiation, edited by Christian H. Bischof, H. Martin Bücker, Paul Hovland, Uwe Naumann, and Jean Utke, 64:35–44. Berlin, Heidelberg: Springer Berlin Heidelberg.
Goldstein, A. 1965. On Steepest Descent.” Journal of the Society for Industrial and Applied Mathematics Series A Control 3 (1): 147–51.
Goldstein, Tom, Christoph Studer, and Richard Baraniuk. 2015. FASTA: A Generalized Implementation of Forward-Backward Splitting.” arXiv:1501.04979 [Cs, Math], January.
Grötschel, Martin, ed. 2012. Optimization Stories: 21st International Symposium on Mathematical Programming, Berlin, August 19 - 24, 2012. Documenta Mathematica, 2012 : Extra vol. Bielefeld.
Han, Zhong-Hua, and Ke-Shi Zhang. 2012. Surrogate-Based Optimization.”
Harchaoui, Zaid, Anatoli Juditsky, and Arkadi Nemirovski. 2015. Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization.” Mathematical Programming 152 (1-2): 75–112.
Hazan, Elad, Kfir Levy, and Shai Shalev-Shwartz. 2015. Beyond Convexity: Stochastic Quasi-Convex Optimization.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 1594–1602. Curran Associates, Inc.
Heyde, C. C. 1974. On Martingale Limit Theory and Strong Convergence Results for Stochastic Approximation Procedures.” Stochastic Processes and Their Applications 2 (4): 359–70.
Hinton, Geoffrey, Nitish Srivastava, and Kevin Swersky. n.d. “Neural Networks for Machine Learning.”
Hosseini, Reshad, and Suvrit Sra. 2015. Manifold Optimization for Gaussian Mixture Models.” arXiv Preprint arXiv:1506.07677.
Hu, Chonghai, Weike Pan, and James T. Kwok. 2009. Accelerated Gradient Methods for Stochastic Optimization and Online Learning.” In Advances in Neural Information Processing Systems, 781–89. Curran Associates, Inc.
Jaggi, Martin. 2013. Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization.” In Journal of Machine Learning Research, 427–35.
Jakovetic, D., J.M. Freitas Xavier, and J.M.F. Moura. 2014. Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks.” IEEE Transactions on Signal Processing 62 (4): 868–82.
Kim, Daeun, and Justin P. Haldar. 2016. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery.” Signal Processing 125 (August): 274–89.
Kingma, Diederik, and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization.” Proceeding of ICLR.
Klein, Dan. 2004. Lagrange Multipliers Without Permanent Scarring.” University of California at Berkeley, Computer Science Division.
Lai, Tze Leung. 2003. Stochastic Approximation.” The Annals of Statistics 31 (2): 391–406.
Langford, John, Lihong Li, and Tong Zhang. 2009. Sparse Online Learning via Truncated Gradient.” In Advances in Neural Information Processing Systems 21, edited by D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, 905–12. Curran Associates, Inc.
Laue, Soeren, Matthias Mitterreiter, and Joachim Giesen. 2019. GENO – GENeric Optimization for Classical Machine Learning.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Laue, Sören, Mark Blacher, and Joachim Giesen. 2022. Optimization for Classical Machine Learning Problems on the GPU.” In Proceedings of the AAAI Conference on Artificial Intelligence, 36:7300–7308.
Levy, Kfir Y. 2016. The Power of Normalization: Faster Evasion of Saddle Points.” arXiv:1611.04831 [Cs, Math, Stat], November.
Li, Yuanzhi, Yingyu Liang, and Andrej Risteski. 2016. Recovery Guarantee of Non-Negative Matrix Factorization via Alternating Updates.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 4988–96. Curran Associates, Inc.
Lin, Hongzhou, Julien Mairal, and Zaid Harchaoui. 2016. QuickeNing: A Generic Quasi-Newton Algorithm for Faster Gradient-Based Optimization *.” In arXiv:1610.00960 [Math, Stat].
Lucchi, Aurelien, Brian McWilliams, and Thomas Hofmann. 2015. A Variance Reduced Stochastic Newton Method.” arXiv:1503.08316 [Cs], March.
Ma, Chenxin, Jakub Konečnỳ, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, and Martin Takáč. 2015. Distributed Optimization with Arbitrary Local Solvers.” arXiv Preprint arXiv:1512.04039.
Ma, Siyuan, and Mikhail Belkin. 2017. Diving into the Shallows: A Computational Perspective on Large-Scale Shallow Learning.” arXiv:1703.10622 [Cs, Stat], March.
Madsen, K, H.B. Nielsen, and O. Tingleff. 2004. Methods for Non-Linear Least Squares Problems.”
Mairal, J. 2015. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning.” SIAM Journal on Optimization 25 (2): 829–55.
Mairal, Julien. 2013a. Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization.” In Advances in Neural Information Processing Systems, 2283–91.
———. 2013b. Optimization with First-Order Surrogate Functions.” In International Conference on Machine Learning, 783–91.
Mairal, Julien, Francis Bach, and Jean Ponce. 2014. Sparse Modeling for Image and Vision Processing. Vol. 8.
Martens, James. 2010. Deep Learning via Hessian-Free Optimization.” In Proceedings of the 27th International Conference on International Conference on Machine Learning, 735–42. ICML’10. USA: Omnipress.
Martens, James, and Ilya Sutskever. 2011. Learning Recurrent Neural Networks with Hessian-Free Optimization.” In Proceedings of the 28th International Conference on International Conference on Machine Learning, 1033–40. ICML’11. USA: Omnipress.
———. 2012. Training Deep and Recurrent Networks with Hessian-Free Optimization.” In Neural Networks: Tricks of the Trade, 479–535. Lecture Notes in Computer Science. Springer.
Mattingley, J., and S. Boyd. 2010. Real-Time Convex Optimization in Signal Processing.” IEEE Signal Processing Magazine 27 (3): 50–61.
Mcleod, Doug, Garry Emmerson, Robert Kohn, and Geoff Kingston. 2008. “Finding the Invisible Hand: An Objective Model of Financial Markets.”
McMahan, H. Brendan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, et al. 2013. Ad Click Prediction: A View from the Trenches.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1222–30. KDD ’13. New York, NY, USA: ACM.
Mei, Song, Yu Bai, and Andrea Montanari. 2016. The Landscape of Empirical Risk for Non-Convex Losses.” arXiv:1607.06534 [Stat], July.
Metz, Luke, Niru Maheswaranathan, C. Daniel Freeman, Ben Poole, and Jascha Sohl-Dickstein. 2020. Tasks, Stability, Architecture, and Compute: Training More Effective Learned Optimizers, and Using Them to Train Themselves.” arXiv:2009.11243 [Cs, Stat], September.
Mitliagkas, Ioannis, Ce Zhang, Stefan Hadjis, and Christopher Ré. 2016. Asynchrony Begets Momentum, with an Application to Deep Learning.” arXiv:1605.09774 [Cs, Math, Stat], May.
Molnar, Christoph. 2021. Shapley Values.” In Interpretable Machine Learning.
Nesterov, Y. 2012. Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems.” SIAM Journal on Optimization 22 (2): 341–62.
Nesterov, Yu. 2007. Accelerating the Cubic Regularization of Newton’s Method on Convex Problems.” Mathematical Programming 112 (1): 159–81.
———. 2012. Gradient Methods for Minimizing Composite Functions.” Mathematical Programming 140 (1): 125–61.
Nesterov, Yurii. 2004. Introductory Lectures on Convex Optimization. Vol. 87. Applied Optimization. Boston, MA: Springer US.
Nguyen, Lam M., Jie Liu, Katya Scheinberg, and Martin Takáč. 2017. Stochastic Recursive Gradient Algorithm for Nonconvex Optimization.” arXiv:1705.07261 [Cs, Math, Stat], May.
Nocedal, Jorge, and S. Wright. 2006. Numerical Optimization. 2nd ed. Springer Series in Operations Research and Financial Engineering. New York: Springer-Verlag.
Parikh, Neal, and Stephen Boyd. 2014. Proximal Algorithms. Vol. 1.
Patel, Vivak. 2017. On SGD’s Failure in Practice: Characterizing and Overcoming Stalling.” arXiv:1702.00317 [Cs, Math, Stat], February.
Pilanci, Mert, and Martin J. Wainwright. 2016. Iterative Hessian Sketch: Fast and Accurate Solution Approximation for Constrained Least-Squares.” Journal of Machine Learning Research 17 (53): 1–38.
Polyak, B. T., and A. B. Juditsky. 1992. Acceleration of Stochastic Approximation by Averaging.” SIAM Journal on Control and Optimization 30 (4): 838–55.
Portnoy, Stephen, and Roger Koenker. 1997. The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error Versus Absolute-Error Estimators.” Statistical Science 12 (4): 279–300.
Queipo, Nestor V., Raphael T. Haftka, Wei Shyy, Tushar Goel, Rajkumar Vaidyanathan, and P. Kevin Tucker. 2005. Surrogate-Based Analysis and Optimization.” Progress in Aerospace Sciences 41 (1): 1–28.
Reddi, Sashank J., Ahmed Hefny, Suvrit Sra, Barnabas Poczos, and Alex Smola. 2016. Stochastic Variance Reduction for Nonconvex Optimization.” In PMLR, 1603:314–23.
Robbins, Herbert, and Sutton Monro. 1951. A Stochastic Approximation Method.” The Annals of Mathematical Statistics 22 (3): 400–407.
Robbins, H., and D. Siegmund. 1971. A Convergence Theorem for Non Negative Almost Supermartingales and Some Applications.” In Optimizing Methods in Statistics, edited by Jagdish S. Rustagi, 233–57. Academic Press.
Rosset, Saharon, and Ji Zhu. 2007. Piecewise Linear Regularized Solution Paths.” The Annals of Statistics 35 (3): 1012–30.
Ruder, Sebastian. 2016. An Overview of Gradient Descent Optimization Algorithms.” arXiv:1609.04747 [Cs], September.
Ruppert, David. 1985. A Newton-Raphson Version of the Multivariate Robbins-Monro Procedure.” The Annals of Statistics 13 (1): 236–45.
Sagun, Levent, V. Ugur Guney, Gerard Ben Arous, and Yann LeCun. 2014. Explorations on High Dimensional Landscapes.” arXiv:1412.6615 [Cs, Stat], December.
Salimans, Tim, and Diederik P Kingma. 2016. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 901–1. Curran Associates, Inc.
Reddi, Sashank J., Suvrit Sra, Barnabás Póczós, and Alex Smola. 2016. Stochastic Frank-Wolfe Methods for Nonconvex Optimization.”
Schmidt, Mark, Glenn Fung, and Romer Rosales. 2009. “Optimization Methods for L1-Regularization.” University of British Columbia, Technical Report TR-2009 19.
Schraudolph, Nicol N. 2002. Fast Curvature Matrix-Vector Products for Second-Order Gradient Descent.” Neural Computation 14 (7): 1723–38.
Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. Stochastic Methods for L1-Regularized Loss Minimization.” Journal of Machine Learning Research 12 (July): 1865–92.
Shi, Hao-Jun Michael, Shenyinying Tu, Yangyang Xu, and Wotao Yin. 2016. A Primer on Coordinate Descent Algorithms.” arXiv:1610.00040 [Math, Stat], September.
Simon, Noah, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” Journal of Statistical Software 39 (5).
Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework.” arXiv:1512.04011 [Cs], December.
Spall, J. C. 2000. Adaptive Stochastic Approximation by the Simultaneous Perturbation Method.” IEEE Transactions on Automatic Control 45 (10): 1839–53.
Straszak, Damian, and Nisheeth K. Vishnoi. 2016. IRLS and Slime Mold: Equivalence and Convergence.” arXiv:1601.02712 [Cs, Math, Stat], January.
Sun, Shiliang, Zehui Cao, Han Zhu, and Jing Zhao. 2019. A Survey of Optimization Methods from a Machine Learning Perspective.” arXiv:1906.06821 [Cs, Math, Stat], October.
Thisted, Ronald A. 1997. [The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error Versus Absolute-Error Estimators]: Comment.” Statistical Science 12 (4): 296–98.
Townsend, James, Niklas Koep, and Sebastian Weichwald. 2016. Pymanopt: A Python Toolbox for Optimization on Manifolds Using Automatic Differentiation.” Journal of Machine Learning Research 17 (137): 1–5.
Vishwanathan, S.V. N., Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. “Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods.” In Proceedings of the 23rd International Conference on Machine Learning.
Wainwright, Martin J. 2014. Structured Regularizers for High-Dimensional Problems: Statistical and Computational Issues.” Annual Review of Statistics and Its Application 1 (1): 233–53.
Wibisono, Andre, and Ashia C. Wilson. 2015. On Accelerated Methods in Optimization.” arXiv:1509.03616 [Math], September.
Wibisono, Andre, Ashia C. Wilson, and Michael I. Jordan. 2016. A Variational Perspective on Accelerated Methods in Optimization.” Proceedings of the National Academy of Sciences 113 (47): E7351–58.
Wipf, David, and Srikantan Nagarajan. 2016. Iterative Reweighted L1 and L2 Methods for Finding Sparse Solution.” Microsoft Research, July.
Wright, S. J., R. D. Nowak, and M. A. T. Figueiredo. 2009. Sparse Reconstruction by Separable Approximation.” IEEE Transactions on Signal Processing 57 (7): 2479–93.
Wright, Stephen J., and Benjamin Recht. 2021. Optimization for Data Analysis. New York: Cambridge University Press.
Wu, Tong Tong, and Kenneth Lange. 2008. Coordinate Descent Algorithms for Lasso Penalized Regression.” The Annals of Applied Statistics 2 (1): 224–44.
Xu, Wei. 2011. Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent.” arXiv:1107.2490 [Cs], July.
Yang, Jiyan, Xiangrui Meng, and Michael W. Mahoney. 2015. Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments.” arXiv:1502.03032 [Cs, Math, Stat], February.
Yun, Sangwoon, and Kim-Chuan Toh. 2009. A Coordinate Gradient Descent Method for ℓ1-Regularized Convex Minimization.” Computational Optimization and Applications 48 (2): 273–307.
Zhang, Lijun, Tianbao Yang, Rong Jin, and Zhi-Hua Zhou. 2015. Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach.” arXiv:1511.03766 [Cs], November.
Zhang, Xiao, Lingxiao Wang, and Quanquan Gu. 2017. Stochastic Variance-Reduced Gradient Descent for Low-Rank Matrix Recovery from Linear Measurements.” arXiv:1701.00481 [Stat], January.
Zinkevich, Martin. 2003. Online Convex Programming and Generalized Infinitesimal Gradient Ascent.” In Proceedings of the Twentieth International Conference on International Conference on Machine Learning, 928–35. ICML’03. Washington, DC, USA: AAAI Press.
Zinkevich, Martin, Markus Weimer, Lihong Li, and Alex J. Smola. 2010. Parallelized Stochastic Gradient Descent.” In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 2595–2603. Curran Associates, Inc.
