# Neural nets with implicit layers

Also, declarative networks, bi-level optimization and other ingenious uses of the implicit function theorem

December 8, 2020 — June 28, 2023

Yonina Eldar on Model-Based Deep Learning:

> In our lab, we are working on model-based deep learning, where the design of learning-based algorithms is based on prior domain knowledge. This approach allows us to integrate models and other knowledge about the problem into both the architecture and the training process of deep networks. This leads to efficient, high-performance, and yet interpretable neural networks which can be employed in a variety of tasks in signal and image processing. Model-based networks require far fewer parameters than their black-box counterparts, generalize better, and can be trained from much less data. In some cases, our networks are trained on a single image, or only on the input itself, so that effectively they are unsupervised.

## 1 Unrolling algorithms

Unrolling turns the iterations of an optimization algorithm into the layers of a network: truncate the algorithm to a fixed number of steps and let each step's parameters be learned. Run the iteration to convergence instead and the output is a fixed point of the update map, which is the connection to implicit neural networks: gradients can then be taken through the fixed point itself via the implicit function theorem, as in the sketch below.
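To make that connection concrete, here is a minimal JAX sketch of a fixed-point layer differentiated via the implicit function theorem, following the standard `jax.custom_vjp` pattern. The names (`fixed_point_layer`, `f`, the iteration counts) are illustrative assumptions, not anything from a particular library.

```python
import jax
import jax.numpy as jnp
from functools import partial

@partial(jax.custom_vjp, nondiff_argnums=(0,))
def fixed_point_layer(f, params, x):
    # Forward pass: iterate z <- f(params, x, z) until (approximately) converged.
    # Assumes the fixed point z has the same shape as the input x.
    z = jnp.zeros_like(x)
    for _ in range(50):
        z = f(params, x, z)
    return z

def fixed_point_fwd(f, params, x):
    z_star = fixed_point_layer(f, params, x)
    return z_star, (params, x, z_star)

def fixed_point_bwd(f, res, z_bar):
    # Implicit function theorem: at the fixed point z* = f(params, x, z*),
    # the adjoint u solves u = z_bar + (df/dz)^T u, which we also find by
    # fixed-point iteration, then pull u back through params and x.
    params, x, z_star = res
    _, vjp_z = jax.vjp(lambda z: f(params, x, z), z_star)
    u = z_bar
    for _ in range(50):
        u = z_bar + vjp_z(u)[0]
    _, vjp_params_x = jax.vjp(lambda p, x_: f(p, x_, z_star), params, x)
    return vjp_params_x(u)

fixed_point_layer.defvjp(fixed_point_fwd, fixed_point_bwd)

# A DEQ-style layer: z* solves z = tanh(W z + x). W is scaled to be a
# contraction so that plain iteration converges.
f = lambda W, x, z: jnp.tanh(W @ z + x)
W = 0.5 * jnp.eye(3)
x = jnp.ones(3)
grad_x = jax.grad(lambda x: fixed_point_layer(f, W, x).sum())(x)
```

Note that the backward pass never stores the forward iterates; memory cost is independent of how many iterations the solver takes, which is a chief selling point of implicit layers over naive unrolling.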

The classic reference is Gregor and LeCun (2010), who unroll ISTA into the learned sparse coder LISTA; related work reappears intermittently (Adler and Öktem 2018; Borgerding and Schniter 2016; Sulam et al. 2020).
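For comparison with the implicit version above, here is a hedged sketch of LISTA-style unrolling in the spirit of Gregor and LeCun (2010): each soft-thresholding iteration of ISTA becomes a layer whose matrices and thresholds are trainable. The initialization from the dictionary `A` follows the ISTA rearrangement in the paper; the function names and the choice of one threshold per layer are my own illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def soft_threshold(v, theta):
    # Proximal operator of theta * ||.||_1: shrink each coordinate toward zero.
    return jnp.sign(v) * jnp.maximum(jnp.abs(v) - theta, 0.0)

def init_lista_params(A, n_layers, lam=0.1):
    # ISTA for min_z 0.5 ||y - A z||^2 + lam ||z||_1 iterates
    #   z <- soft_threshold(z + (1/L) A^T (y - A z), lam / L),
    # which rearranges to z <- soft_threshold(W_e y + S z, theta). LISTA
    # initializes the layer weights at these values and then trains them.
    L = jnp.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    n = A.shape[1]
    return {
        "W_e": A.T / L,                          # input-injection matrix
        "S": jnp.eye(n) - (A.T @ A) / L,         # recurrent/mixing matrix
        "theta": jnp.full((n_layers,), lam / L), # per-layer thresholds
    }

def lista_forward(params, y, n_layers):
    # One learned ISTA iteration per layer.
    b = params["W_e"] @ y
    z = soft_threshold(b, params["theta"][0])
    for k in range(1, n_layers):
        z = soft_threshold(b + params["S"] @ z, params["theta"][k])
    return z

# Supervised training: regress onto known sparse codes z_true with jax.grad.
def loss(params, y, z_true, n_layers=8):
    return jnp.sum((lista_forward(params, y, n_layers) - z_true) ** 2)
```

Untrained, this computes exactly `n_layers` steps of ISTA; the point of the paper is that after training, a handful of such layers can match the accuracy of many hundreds of plain iterations.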

## 2 Incoming

- Jonas Adler, *Learning to reconstruct*
- Jonas Adler, *Accelerated Forward-Backward Optimization using Deep Learning*

## 3 References

Adler, Jonas, and Ozan Öktem. 2018. “Learned Primal-Dual Reconstruction.” *IEEE Transactions on Medical Imaging*.

Banert, Sebastian, Jevgenija Rudzusika, Ozan Öktem, and Jonas Adler. 2021. “Accelerated Forward-Backward Optimization Using Deep Learning.” *arXiv:2105.05210 [math]*.

Borgerding, Mark, and Philip Schniter. 2016. “AMP-Inspired Deep Networks for Sparse Linear Inverse Problems.” *arXiv:1612.01183 [cs, math]*.

Gregor, Karol, and Yann LeCun. 2010. “Learning Fast Approximations of Sparse Coding.” In *Proceedings of the 27th International Conference on Machine Learning (ICML-10)*.

*arXiv:1105.5307 [cs]*.

*IEEE Signal Processing Magazine*.

*2021 IEEE Data Science and Learning Workshop (DSLW)*.

Sulam, Jeremias, Aviad Aberdam, Amir Beck, and Michael Elad. 2020. “On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks.” *IEEE Transactions on Pattern Analysis and Machine Intelligence*.