A unifying framework for various networks, including neural ODEs, in which layers are not simple forward operations but ones whose evaluation is defined as the solution to some optimisation problem.

NB: This is different to the implicit *representation* method.
Since implicit layers and implicit representation layers also occur in the same problems (such as ML for PDEs), this avoidable terminological confusion will haunt us.

To learn: the connection to fixed-point theory (Granas and Dugundji 2003).

## The implicit function theorem in learning

A beautiful explanation of what is special about differentiating systems at equilibrium is Blondel et al. (2021).

For further tutorial-form background, see the NeurIPS 2020 tutorial *Deep Implicit Layers: Neural ODEs, Deep Equilibrium Models, and Beyond* by Zico Kolter, David Duvenaud, and Matt Johnson, or ADCME: Automatic Differentiation for Implicit Operators.
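To make the mechanism concrete, here is a minimal toy sketch of my own (not taken from any of the cited papers): differentiating a scalar fixed point \(x^\star = \tanh(\theta + x^\star/2)\) via the implicit function theorem, checked against finite differences pushed through the whole solver.

```python
import numpy as np

def solve_fixed_point(theta, n_iter=100):
    """Iterate x <- f(x, theta) = tanh(theta + 0.5*x) to convergence.
    The map is a contraction (|df/dx| <= 0.5), so naive iteration works."""
    x = 0.0
    for _ in range(n_iter):
        x = np.tanh(theta + 0.5 * x)
    return x

def implicit_grad(theta):
    """dx*/dtheta via the implicit function theorem:
    dx*/dtheta = (df/dtheta) / (1 - df/dx), evaluated at equilibrium.
    No backprop through the solver loop is needed."""
    x = solve_fixed_point(theta)
    sech2 = 1.0 - np.tanh(theta + 0.5 * x) ** 2  # derivative of tanh
    df_dx = 0.5 * sech2
    df_dtheta = sech2
    return df_dtheta / (1.0 - df_dx)

theta = 0.3
# Sanity check against finite differences through the *whole* solver.
eps = 1e-6
fd = (solve_fixed_point(theta + eps) - solve_fixed_point(theta - eps)) / (2 * eps)
print(implicit_grad(theta), fd)
```

The point of Blondel et al. (2021) is that this pattern generalises: the backward pass only needs the equilibrium point and local Jacobians, not the trajectory of the solver.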

## Optimization layers

Differentiable Convex Optimization Layers introduces cvxpylayers:

Optimization layers add domain-specific knowledge or learnable hard constraints to machine learning models. Many of these layers solve convex and constrained optimization problems of the form

\[ \begin{array}{rl} x^{\star}(\theta)=\operatorname{arg\,min}_{x} & f(x ; \theta) \\ \text { subject to } g(x ; \theta) & \leq 0 \\ h(x ; \theta) & =0 \end{array} \]

with parameters θ, objective f, and constraint functions g, h, and do end-to-end learning through them with respect to θ.

In this tutorial we introduce our new library cvxpylayers for easily creating new differentiable convex optimization layers. This lets you express your layer with the CVXPY domain-specific language as usual and then export the CVXPY object to an efficient batched and differentiable layer with a single line of code. This project turns every convex optimization problem expressed in CVXPY into a differentiable layer.
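cvxpylayers handles the general constrained case; as a hand-rolled illustration of the underlying principle (my own sketch, not the library's API), here is implicit differentiation of a smooth unconstrained argmin, a ridge-regression "layer", in plain NumPy. The stationarity condition replaces the solver in the backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 4))
b = rng.normal(size=10)

def argmin_layer(theta):
    """x*(theta) = argmin_x 0.5||Ax - b||^2 + 0.5*theta*||x||^2."""
    return np.linalg.solve(A.T @ A + theta * np.eye(4), A.T @ b)

def implicit_jacobian(theta):
    """dx*/dtheta without unrolling any solver.
    Stationarity: (A^T A + theta*I) x* = A^T b.
    Differentiating implicitly: dx*/dtheta = -(A^T A + theta*I)^{-1} x*."""
    x = argmin_layer(theta)
    return -np.linalg.solve(A.T @ A + theta * np.eye(4), x)

theta = 0.5
# Compare against finite differences of the argmin itself.
eps = 1e-6
fd = (argmin_layer(theta + eps) - argmin_layer(theta - eps)) / (2 * eps)
print(np.max(np.abs(implicit_jacobian(theta) - fd)))
```

The constrained case in the display above works the same way, except that the stationarity condition becomes the KKT system, which is what cvxpylayers differentiates for you.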

## Unrolling algorithms

The classic one is Gregor and LeCun (2010), and a number of others related to this idea intermittently appear (Adler and Öktem 2018; Borgerding and Schniter 2016; Gregor and LeCun 2010; Sulam et al. 2020).

- Jonas Adler, Learning to reconstruct
- Jonas Adler, Accelerated Forward-Backward Optimization using Deep Learning
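A minimal sketch of the unrolling idea, assuming the standard lasso/ISTA setup of Gregor and LeCun (2010): each ISTA iteration is treated as one "layer" of a fixed-depth network. LISTA then makes the matrices `W_e` and `S`, which classical ISTA fixes analytically, into learned parameters.

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(y, A, lam=0.01, depth=200):
    """ISTA for min_x 0.5||Ax - y||^2 + lam*||x||_1, unrolled to a
    fixed number of layers.  LISTA replaces the analytically-fixed
    W_e and S below with learned parameters."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    W_e = A.T / L
    S = np.eye(A.shape[1]) - A.T @ A / L
    x = np.zeros(A.shape[1])
    for _ in range(depth):                    # one iteration = one "layer"
        x = soft_threshold(W_e @ y + S @ x, lam / L)
    return x

# Toy sparse-recovery problem: recover a 2-sparse x from noiseless data.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10)) / np.sqrt(20)
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -1.5]
x_hat = unrolled_ista(A @ x_true, A)
print(np.round(x_hat, 2))
```

Because the whole thing is a finite composition of differentiable-almost-everywhere operations, it trains end-to-end like any other network.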

## Deep declarative networks

A different terminology, although AFAICT closely related technology, is used by Stephen Gould in Gould, Hartley, and Campbell (2019), under the banner of Deep Declarative Networks. Fun applications he highlights: robust losses in pooling layers, projection onto shapes, convex programming and warping, matching problems, (relaxed) graph alignment, noisy point-cloud surface reconstruction… (I am sitting in his seminar as I write this.) They implemented a ddn library (pytorch).

To follow up from that presentation: learning basis decompositions, hyperparameter optimisation… Stephen relates these to deep declarative nets by discussing both problems as “bi-level optimisation problems”. He also relates some minimax-like optimisations to “Stackelberg games”, which are optimisation problems embedded in game theory.
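A toy bi-level example of my own construction (not from the seminar): the hypergradient of a validation loss with respect to a ridge penalty, where the inner argmin is differentiated implicitly rather than by unrolling an inner solver.

```python
import numpy as np

rng = np.random.default_rng(2)
X_tr, y_tr = rng.normal(size=(30, 5)), rng.normal(size=30)
X_va, y_va = rng.normal(size=(15, 5)), rng.normal(size=15)

def inner(lam):
    """Inner problem: ridge-regression weights for penalty lam."""
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), X_tr.T @ y_tr)

def outer(lam):
    """Outer problem: validation loss at the inner argmin."""
    r = X_va @ inner(lam) - y_va
    return 0.5 * r @ r

def hypergradient(lam):
    """dL_val/dlam through the inner argmin, via the implicit function
    theorem: dw*/dlam = -H^{-1} w* with H = X_tr^T X_tr + lam*I."""
    w = inner(lam)
    H = X_tr.T @ X_tr + lam * np.eye(5)
    dw = -np.linalg.solve(H, w)
    return (X_va @ w - y_va) @ (X_va @ dw)

lam = 1.0
# Check against finite differences of the whole bi-level pipeline.
eps = 1e-5
fd = (outer(lam + eps) - outer(lam - eps)) / (2 * eps)
print(hypergradient(lam), fd)
```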

## Deep equilibrium networks

Related: deep equilibrium networks (Bai, Kolter, and Koltun 2019; Bai, Koltun, and Kolter 2020). In these we assume that the network has a single layer which is iterated, and then solve for a fixed point of that iterated layer; this turns out to be memory-efficient and in fact powerful (you need to scale the width of that magic layer up to match the effective depth of a non-iterative layer stack, but not by very much).

Example code: locuslab/deq.
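A scaled-down NumPy sketch of the trick (the real locuslab/deq uses Broyden-type solvers and a large learned cell; here the iterated layer is just a small contraction `z = tanh(Wz + Ux)`, with `W`, `U`, and the loss cotangent `g` as stand-in parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 6, 4
W = 0.1 * rng.normal(size=(n, n))   # small scale keeps the map a contraction
U = rng.normal(size=(n, d))

def deq_forward(x, n_iter=200):
    """Solve z = tanh(Wz + Ux) by naive fixed-point iteration
    (DEQ proper uses quasi-Newton root-finders instead)."""
    z = np.zeros(n)
    for _ in range(n_iter):
        z = np.tanh(W @ z + U @ x)
    return z

def deq_vjp(x, g):
    """Backward pass via the implicit function theorem:
    solve (I - J)^T v = g with J = df/dz at the equilibrium,
    then grad_x = (df/dx)^T v.  Memory cost is independent of n_iter."""
    z = deq_forward(x)
    D = np.diag(1.0 - np.tanh(W @ z + U @ x) ** 2)  # tanh' at equilibrium
    J = D @ W
    v = np.linalg.solve((np.eye(n) - J).T, g)
    return U.T @ (D @ v)

g = rng.normal(size=n)   # incoming cotangent, e.g. dLoss/dz
x = rng.normal(size=d)
# Check against finite differences of the scalar g @ z*(x).
eps = 1e-6
fd = np.array([(g @ deq_forward(x + eps * e) - g @ deq_forward(x - eps * e)) / (2 * eps)
               for e in np.eye(d)])
print(np.max(np.abs(deq_vjp(x, g) - fd)))
```

The memory saving comes from the backward pass touching only the equilibrium point, never the iterates that produced it.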

## Deep Ritz method

Does it fit here? (E, Han, and Jentzen 2017; E and Yu 2018; Müller and Zeinhofer 2020) Or is it more of an NN-for-PDEs thing?
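For orientation, the Ritz idea in miniature (a two-parameter polynomial ansatz standing in for the neural network, and finite differences standing in for autodiff): minimise the energy E(u) = ∫ ½|u′|² − fu over functions satisfying the boundary conditions, here for −u″ = 2 on [0, 1] with u(0) = u(1) = 0, whose exact solution u = x(1−x) lies inside the ansatz family.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 401)
fx = 2.0 * np.ones_like(xs)   # source term f in -u'' = f, u(0) = u(1) = 0

def trapezoid(y, x):
    """Trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def u(params, x):
    """Ansatz x(1-x)(a + b*x): boundary conditions are hard-coded.
    Deep Ritz proper puts a neural network here."""
    a, b = params
    return (a + b * x) * x * (1.0 - x)

def energy(params):
    """Ritz energy E(u) = integral of 0.5*u'^2 - f*u."""
    uv = u(params, xs)
    du = np.gradient(uv, xs)
    return trapezoid(0.5 * du**2 - fx * uv, xs)

# Plain gradient descent on (a, b); parameter gradients by central
# differences for brevity (a real implementation would use autodiff).
params = np.zeros(2)
for _ in range(500):
    g = np.array([(energy(params + 1e-5 * e) - energy(params - 1e-5 * e)) / 2e-5
                  for e in np.eye(2)])
    params -= 1.0 * g
print(params)   # exact solution u = x(1-x) corresponds to a = 1, b = 0
```

The "deep" part is only the choice of ansatz; the variational training objective is the same.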

## In practice

In general we are using autodiff to find the gradients of our systems. Writing custom gradients to exploit the efficiencies of implicit gradients: how do we do that in practice?

Overriding autodiff is surprisingly easy in JAX: Custom derivative rules for JAX-transformable Python functions, including implicit functions. Blondel et al. (2021) add some extra conveniences in the form of google/jaxopt: hardware-accelerated, batchable and differentiable optimizers in JAX.
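For example, a scalar fixed-point solve whose backward pass is overridden with the implicit-function-theorem gradient via `jax.custom_vjp` (a toy of my own; jaxopt packages this pattern up properly):

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def fixed_point(theta):
    """Solve x = tanh(theta + 0.5*x) by naive iteration (a contraction)."""
    x = 0.0
    for _ in range(100):
        x = jnp.tanh(theta + 0.5 * x)
    return x

def fixed_point_fwd(theta):
    x = fixed_point(theta)
    return x, (theta, x)   # residuals saved for the backward pass

def fixed_point_bwd(res, g):
    theta, x = res
    # Implicit function theorem instead of backprop through the loop:
    # dx/dtheta = f_theta / (1 - f_x) at the equilibrium.
    sech2 = 1.0 - jnp.tanh(theta + 0.5 * x) ** 2
    return (g * sech2 / (1.0 - 0.5 * sech2),)

fixed_point.defvjp(fixed_point_fwd, fixed_point_bwd)

print(jax.grad(fixed_point)(0.3))
```

`jax.grad` now uses our hand-written rule: the solver loop is never differentiated through, which is the whole point.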

Julia autodiff also allows convenient overrides, and in fact the community discourse around them is full of helpful tips.

## References

*IEEE Transactions on Medical Imaging* 37 (6): 1322–32.

*Advances in Neural Information Processing Systems*.

*Mathematical Programming Computation* 11 (1): 1–36.

*Proceedings of The 28th Conference on Learning Theory*, 40:113–49. Paris, France: PMLR.

*Advances in Neural Information Processing Systems*, 32:12.

*Advances in Neural Information Processing Systems*. Vol. 33.

*arXiv:2106.14342 [Cs, Stat]*, June.

*arXiv:2105.05210 [Math]*, May.

*arXiv:2105.15183 [Cs, Math, Stat]*, October.

*arXiv:1612.01183 [Cs, Math]*, December.

*Proceedings of the 31st International Conference on Neural Information Processing Systems*, 1014–24. NIPS’17. Red Hook, NY, USA: Curran Associates Inc.

*International Conference on Artificial Intelligence and Statistics*, 318–26.

*Communications in Mathematics and Statistics* 5 (4): 349–80.

*Communications in Mathematics and Statistics* 6 (1): 1–12.

*The Implicit Function Theorem*. Springer.

*Fixed Point Theory*. Springer Monographs in Mathematics. New York, NY: Springer New York.

*Proceedings of the 27th International Conference on Machine Learning (ICML-10)*, 399–406.

*arXiv:1105.5307 [Cs]*, May.

*Inverse Problems* 34 (1): 014004.

*Advances in Neural Information Processing Systems*. Vol. 33.

*IEEE Transactions on Pattern Analysis and Machine Intelligence* 42 (8): 1968–80.
