Neural nets with implicit layers

Also, declarative networks, bi-level optimization and other ingenious uses of the implicit function theorem



Yonina Eldar on Model-Based Deep Learning

In our lab, we are working on model-based deep learning, where the design of learning-based algorithms is based on prior domain knowledge. This approach allows us to integrate models and other knowledge about the problem into both the architecture and the training process of deep networks. This leads to efficient, high-performance, and yet interpretable neural networks which can be employed in a variety of tasks in signal and image processing. Model-based networks require far fewer parameters than their black-box counterparts, generalize better, and can be trained from much less data. In some cases, our networks are trained on a single image, or only on the input itself, so that they are effectively unsupervised.

Unrolling algorithms

Algorithm unrolling turns the iterations of an iterative algorithm (say, proximal gradient descent) into the layers of a neural network, whose parameters are then trained end-to-end. This connects to implicit NNs: an implicit layer behaves like an unrolled algorithm run all the way to convergence, whereas an unrolled network truncates after a fixed number of iterations.

The classic reference is Gregor and LeCun (2010); related work on this idea appears intermittently (Adler and Öktem 2018; Borgerding and Schniter 2016; Gregor and LeCun 2010; Sulam et al. 2020).
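To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the LISTA recipe from Gregor and LeCun (2010): the ISTA iteration for sparse coding is unrolled into a fixed number of layers, with its matrices and threshold initialized from a dictionary `D` as in classical ISTA. In the learned variant, `W_e`, `S`, and `theta` would become per-layer trainable parameters; here they are fixed for illustration, and all names are my own.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrink towards zero by theta.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_forward(y, W_e, S, theta, n_layers=5):
    """Unrolled ISTA forward pass: each 'layer' is one ISTA iteration.

    In LISTA proper, W_e, S, and theta are learned end-to-end;
    this sketch just runs the classical initialization.
    """
    x = soft_threshold(W_e @ y, theta)
    for _ in range(n_layers - 1):
        x = soft_threshold(W_e @ y + S @ x, theta)
    return x

# Classical ISTA initialization from a random dictionary D (illustrative):
rng = np.random.default_rng(0)
m, n = 8, 16
D = rng.normal(size=(m, n)) / np.sqrt(m)
L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data-fit gradient
W_e = D.T / L                            # learned "encoder" matrix in LISTA
S = np.eye(n) - (D.T @ D) / L            # learned "mutual inhibition" matrix in LISTA
theta = 0.1 / L                          # shrinkage threshold (lambda / L)

# Synthesize a measurement from a sparse code and reconstruct it.
x_true = np.where(rng.random(n) < 0.2, rng.normal(size=n), 0.0)
y = D @ x_true
x_hat = lista_forward(y, W_e, S, theta, n_layers=10)
```

The point of the learned version is that a handful of trained layers can match the accuracy of many hundreds of plain ISTA iterations on the data distribution the network was trained on.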

References

Adler, Jonas, and Ozan Öktem. 2018. “Learned Primal-Dual Reconstruction.” IEEE Transactions on Medical Imaging 37 (6): 1322–32.
Banert, Sebastian, Jevgenija Rudzusika, Ozan Öktem, and Jonas Adler. 2021. “Accelerated Forward-Backward Optimization Using Deep Learning.” arXiv:2105.05210 [math], May.
Borgerding, Mark, and Philip Schniter. 2016. “Onsager-Corrected Deep Networks for Sparse Linear Inverse Problems.” arXiv:1612.01183 [cs, math], December.
Gregor, Karol, and Yann LeCun. 2010. “Learning Fast Approximations of Sparse Coding.” In Proceedings of the 27th International Conference on Machine Learning (ICML-10), 399–406.
———. 2011. “Efficient Learning of Sparse Invariant Representations.” arXiv:1105.5307 [cs], May.
Monga, Vishal, Yuelong Li, and Yonina C. Eldar. 2021. “Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing.” IEEE Signal Processing Magazine 38 (2): 18–44.
Satorras, Victor Garcia, and Max Welling. 2021. “Neural Enhanced Belief Propagation on Factor Graphs.” arXiv.
Shlezinger, Nir, Jay Whang, Yonina C. Eldar, and Alexandros G. Dimakis. 2021. “Model-Based Deep Learning: Key Approaches and Design Guidelines.” In 2021 IEEE Data Science and Learning Workshop (DSLW), 1–6.
———. 2022. “Model-Based Deep Learning.” arXiv.
Sulam, Jeremias, Aviad Aberdam, Amir Beck, and Michael Elad. 2020. “On Multi-Layer Basis Pursuit, Efficient Algorithms and Convolutional Neural Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 42 (8): 1968–80.
