Sparse coding with learnable dictionaries
November 18, 2014 — March 2, 2023
Adaptive dictionaries for sparse coding. How is this different from matrix factorisation, you ask? It is not. AFAICT they are different emphases of the same thing.
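Concretely, one common formulation of the dictionary-learning objective is just a penalised matrix factorisation:

$$
\min_{D, A} \tfrac{1}{2}\lVert X - DA \rVert_F^2 + \lambda \lVert A \rVert_1
$$

where the columns of $X$ are data vectors, $D$ is the dictionary of atoms (typically constrained to unit norm) and $A$ holds the sparse codes. Swap the penalty or the constraints and you get NMF, sparse PCA and friends.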
(Bruno A. Olshausen and Field 1996) kicked this area off by arguing that sparse coding tricks reveal something about what the brain does.
For a walkthrough of one version of this, see Theano example of dictionary learning by Daniel LaCombe, who bases his version on (Ngiam et al. 2011; Hyvärinen, Hurri, and Hoyer 2009; Hahn et al. 2015).
See (Mairal, Bach, and Ponce 2014) for a summary of basis-learning methods up to 2009.
Question: how do you do adaptive sparse coding in a big-data / online setting?
Transform Learning: Sparse Representations at Scale:

> We have proposed several methods for batch learning of square or overcomplete sparsifying transforms from data. We have also investigated specific structures for these transforms such as double sparsity, union-of-transforms, and filter bank structures, which enable their efficient learning or usage. Apart from batch transform learning, our group has investigated methods for online learning of sparsifying transforms, which are particularly useful for big data or real-time applications.
Huh.
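Staying in more familiar territory, here is a minimal sketch of online dictionary learning, i.e. the minibatch algorithm of Mairal et al. as implemented in scikit-learn (not the transform-learning method quoted above); the data shapes and parameter values are made up for illustration:

```python
# Sketch: online/minibatch dictionary learning via scikit-learn.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))  # fake data: 10k signals of dimension 64

dico = MiniBatchDictionaryLearning(
    n_components=128,   # overcomplete: more atoms than signal dimensions
    alpha=1.0,          # l1 penalty weight on the codes
    batch_size=256,
    random_state=0,
)
dico.fit(X)                     # one pass over the data in minibatches
codes = dico.transform(X[:10])  # sparse codes for a few signals
D = dico.components_            # learned dictionary, shape (128, 64)
```

For genuinely streaming data, `partial_fit` accepts one minibatch at a time instead of a full `fit`.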
0.1 Codings with desired invariances
I would like to find bases robust against certain transformations, especially phase/shift-robust codings. Doing this naively is computationally expensive outside of certain convenient bases. 🏗
One method is shift-invariant sparse coding (Blumensath and Davies 2004), and there are various extensions and approximations out there (Grosse et al. 2007, etc.). One approach is to include multiple shifted copies of each atom in the dictionary (sketched below); another is to actually shift the atoms in a separate optimisation stage. Both get awkward in the time domain for various reasons. (Lattner, Dörfler, and Arzt 2019) present an adaptive sparse coding method that preserves desired invariances.
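A crude version of the shifted-copies trick, as a sketch only: the toy atoms are made up, and a serious implementation would exploit convolutional structure rather than materialising every shift as below.

```python
# Sketch: shift-"invariant" coding by brute force, materialising every
# circular shift of each base atom and sparse-coding against the lot.
import numpy as np
from sklearn.decomposition import SparseCoder

n = 64                                       # signal length
t = np.arange(n)
base_atoms = np.stack([
    np.sin(2 * np.pi * 4 * t / n),           # toy atoms; real ones would be learned
    np.exp(-0.5 * ((t - n / 2) / 4) ** 2),
])

# All circular shifts of all atoms -> a big (n_atoms * n, n) dictionary.
D = np.concatenate(
    [np.stack([np.roll(a, s) for s in range(n)]) for a in base_atoms]
)
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms

coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=3)
x = np.roll(base_atoms[0], 17)               # a shifted atom should code sparsely
code = coder.transform(x[np.newaxis, :])
```

The dictionary blows up by a factor of $n$, which is exactly the expense complained about above; convolutional sparse coding is the less naive route.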
spams implements a huge variety of off-the-shelf sparse codings, although none of them are especially flexible. Nonetheless, it does what it does fast.
> SPAMS (SPArse Modeling Software) is an optimization toolbox for solving various sparse estimation problems:
>
> - Dictionary learning and matrix factorization (NMF, sparse PCA, …)
> - Solving sparse decomposition problems with LARS, coordinate descent, OMP, SOMP, proximal methods
> - Solving structured sparse decomposition problems (l1/l2, l1/linf, sparse group lasso, tree-structured regularization, structured sparsity with overlapping groups, …)
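For what it's worth, a minimal dictionary-learning call with the Python spams bindings looks something like this; the parameter values are placeholders, not recommendations:

```python
# Sketch: batch dictionary learning with python-spams.
import numpy as np
import spams

rng = np.random.default_rng(0)
# spams wants Fortran-ordered float64 arrays with one signal per *column*.
X = np.asfortranarray(rng.standard_normal((64, 5000)))

D = spams.trainDL(X, K=128, lambda1=0.15, iter=200)  # dictionary, one atom per column
alpha = spams.lasso(X, D=D, lambda1=0.15)            # sparse codes (scipy sparse matrix)
```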