Sparse coding with learnable dictionaries



Adaptive dictionaries for sparse coding. How is this different from matrix factorisation, you ask? It is not; AFAICT they are different emphases of the same thing.
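Concretely, the usual dictionary-learning objective is exactly a penalised matrix factorisation. Writing the data matrix as $X$, the dictionary as $D$ and the codes as $A$ (with columns $\alpha_i$), we solve

$$
\min_{D \in \mathcal{C},\, A} \; \tfrac{1}{2}\lVert X - D A \rVert_F^2 + \lambda \sum_i \lVert \alpha_i \rVert_1,
$$

where $\mathcal{C}$ constrains the atoms, typically to the unit ball, to remove the obvious scale ambiguity between $D$ and $A$.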

(Bruno A. Olshausen and Field 1996) kicked this area off by arguing that sparse coding tricks are revealing of what the brain does.

For a walk-through of one version of this, see the Theano example of dictionary learning by Daniel LaCombe, who bases his version on (Ngiam et al. 2011; HyvΓ€rinen, Hurri, and Hoyer 2009; Hahn et al. 2015).

See (Mairal, Bach, and Ponce 2014) for a summary of methods in basis learning up to around 2009.

Question: how do you do adaptive sparse coding in a big-data / online setting?

One answer is the transform learning line of work (TRANSFORM LEARNING: Sparse Representations at Scale):

We have proposed several methods for batch learning of square or overcomplete sparsifying transforms from data. We have also investigated specific structures for these transforms such as double sparsity, union-of-transforms, and filter bank structures, which enable their efficient learning or usage. Apart from batch transform learning, our group has investigated methods for online learning of sparsifying transforms, which are particularly useful for big data or real-time applications.

Huh.
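Another standard answer is the online dictionary learning of Mairal et al. (2009, 2010), which updates the dictionary one minibatch at a time, so the full dataset never has to fit in memory. Here is a minimal sketch using scikit-learn's MiniBatchDictionaryLearning, which implements a variant of that algorithm (the data is random noise, purely to show the API, and this assumes a recent scikit-learn):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Fake data: 10,000 signals of dimension 64, standing in for a stream
# too big to hold in memory at once.
rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))

learner = MiniBatchDictionaryLearning(
    n_components=128,            # overcomplete: more atoms than dimensions
    alpha=0.5,                   # l1 penalty weight on the codes
    transform_algorithm="omp",   # how codes are computed at transform time
    transform_n_nonzero_coefs=5,
    random_state=0,
)

# Feed the data one minibatch at a time; each call refines the dictionary,
# so batches could just as well arrive from disk or over the network.
for batch in np.array_split(X, 40):
    learner.partial_fit(batch)

codes = learner.transform(X[:10])  # sparse codes for a few signals
print(codes.shape)                 # (10, 128), mostly zeros per row
```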

Codings with desired invariances

I would like to find bases that are robust against certain transformations of the signal, especially phase and time shifts. Doing this naively, by explicitly representing every transformed version of every atom, is computationally expensive outside of certain convenient bases (e.g. Fourier-type bases, where a time shift is just a phase rotation).

One method is shift-invariant sparse coding (Blumensath and Davies 2004), and there are various extensions and approximations out there (Grosse et al. 2007, etc.). One approach is to include multiple shifted copies of each atom in the dictionary; another is to actually shift the atoms in a separate optimisation stage. Both get annoying in the time domain: the former blows up the dictionary size, the latter makes the inference problem harder and non-convex. (Lattner, DΓΆrfler, and Arzt 2019) present an adaptive sparse coding method that preserves desired invariances.
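To make the first option concrete, here is a minimal sketch (plain NumPy plus scikit-learn's OMP solver; the atoms and test signal are invented for illustration) that builds a dictionary containing every circular shift of a couple of base atoms, then sparse-codes a signal against it:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def shift_dictionary(atoms, n):
    """Stack every circular shift of each atom as a column of D."""
    cols = []
    for atom in atoms:
        padded = np.zeros(n)
        padded[: len(atom)] = atom
        cols.extend(np.roll(padded, s) for s in range(n))
    D = np.stack(cols, axis=1)
    return D / np.linalg.norm(D, axis=0)  # OMP wants unit-norm atoms

n = 256
t = np.arange(32)
atoms = [np.sin(2 * np.pi * t / 8) * np.hanning(32),  # windowed sinusoid
         np.hanning(32)]                               # smooth bump
D = shift_dictionary(atoms, n)                         # shape (256, 512)

# A test signal: two shifted copies of the first atom, plus noise.
rng = np.random.default_rng(1)
base = np.pad(atoms[0], (0, n - 32))
y = np.roll(base, 40) + 0.5 * np.roll(base, 170) + 0.01 * rng.standard_normal(n)

w = orthogonal_mp(D, y, n_nonzero_coefs=4)  # sparse code over all shifts
print(np.nonzero(w)[0])  # indices encode (atom, shift) pairs
```

Note the dictionary has n columns per base atom, which is exactly the blow-up complained about above; convolutional sparse coding formulations avoid materialising it by working with convolutions directly.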

spams implements a huge variety of off-the-shelf sparse codings, although none of them is especially flexible. Nonetheless it does some neat things fast.

SPAMS (SPArse Modeling Software) is an optimization toolbox for solving various sparse estimation problems.

  • Dictionary learning and matrix factorization (NMF, sparse PCA,…)
  • Solving sparse decomposition problems with LARS, coordinate descent, OMP, SOMP, proximal methods
  • Solving structured sparse decomposition problems (l1/l2, l1/linf, sparse group lasso, tree-structured regularization, structured sparsity with overlapping groups,…).

References

Blumensath, Thomas, and Mike Davies. 2004. β€œOn Shift-Invariant Sparse Coding.” In Independent Component Analysis and Blind Signal Separation, edited by Carlos G. Puntonet and Alberto Prieto, 3195:1205–12. Berlin, Heidelberg: Springer Berlin Heidelberg.
Charles, Adam, Aurele Balavoine, and Christopher Rozell. 2016. β€œDynamic Filtering of Time-Varying Sparse Signals via L1 Minimization.” IEEE Transactions on Signal Processing 64 (21): 5644–56.
Garg, Sahil, Irina Rish, Guillermo Cecchi, and Aurelie Lozano. 2017. β€œNeurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World.” In arXiv:1701.06106 [Cs, Stat].
Gehler, Peter Vincent, and Sebastian Nowozin. n.d. β€œLet the Kernel Figure It Out; Principled Learning of Pre-Processing for Kernel Classifiers.”
Gregor, Karol, and Yann LeCun. 2010. β€œLearning Fast Approximations of Sparse Coding.” In Proceedings of the 27th International Conference on Machine Learning (ICML-10), 399–406.
β€”β€”β€”. 2011. β€œEfficient Learning of Sparse Invariant Representations.” arXiv:1105.5307 [Cs], May.
Grosse, Roger, Rajat Raina, Helen Kwong, and Andrew Y. Ng. 2007. β€œShift-Invariant Sparse Coding for Audio Classification.” In The Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI2007), 9:8.
Hahn, William Edward, Stephanie Lewkowitz, Daniel C. Lacombe, and Elan Barenholtz. 2015. β€œDeep Learning Human Actions from Video via Sparse Filtering and Locally Competitive Algorithms.” Multimedia Tools and Applications 74 (22): 10097–110.
Henaff, Mikael, Kevin Jarrett, Koray Kavukcuoglu, and Yann LeCun. 2011. β€œUnsupervised Learning of Sparse Features for Scalable Audio Classification.” In ISMIR.
HyvΓ€rinen, Aapo, and Patrik Hoyer. 2000. β€œEmergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces.” Neural Computation 12 (7): 1705–20.
HyvΓ€rinen, Aapo, Jarmo Hurri, and Patrick O. Hoyer. 2009. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. Vol. 39. Springer Science & Business Media.
Knudson, Karin C, Jacob Yates, Alexander Huk, and Jonathan W Pillow. 2014. β€œInferring Sparse Representations of Continuous Signals with Continuous Orthogonal Matching Pursuit.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 27:1215–23. Curran Associates, Inc.
Kreutz-Delgado, Kenneth, Joseph F. Murray, Bhaskar D. Rao, Kjersti Engan, Te-Won Lee, and Terrence J. Sejnowski. 2003. β€œDictionary Learning Algorithms for Sparse Representation.” Neural Computation 15 (2): 349–96.
Lattner, Stefan, Monika DΓΆrfler, and Andreas Arzt. 2019. β€œLearning Complex Basis Functions for Invariant Representations of Audio.” In Proceedings of the 20th Conference of the International Society for Music Information Retrieval, 8.
Lewicki, M S, and T J Sejnowski. 1999. β€œCoding Time-Varying Signals Using Sparse, Shift-Invariant Representations.” In NIPS, 11:730–36. Denver, CO: MIT Press.
Lewicki, Michael S., and Terrence J. Sejnowski. 2000. β€œLearning Overcomplete Representations.” Neural Computation 12 (2): 337–65.
Mairal, Julien, Francis Bach, and Jean Ponce. 2014. Sparse Modeling for Image and Vision Processing. Vol. 8.
Mairal, Julien, Francis Bach, Jean Ponce, and Guillermo Sapiro. 2009. β€œOnline Dictionary Learning for Sparse Coding.” In Proceedings of the 26th Annual International Conference on Machine Learning, 689–96. ICML ’09. New York, NY, USA: ACM.
β€”β€”β€”. 2010. β€œOnline Learning for Matrix Factorization and Sparse Coding.” The Journal of Machine Learning Research 11: 19–60.
Meinshausen, Nicolai, and Bin Yu. 2009. β€œLasso-Type Recovery of Sparse Representations for High-Dimensional Data.” The Annals of Statistics 37 (1): 246–70.
Ngiam, Jiquan, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, and Andrew Y. Ng. 2011. β€œSparse Filtering.” In Advances in Neural Information Processing Systems 24, edited by J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, 1125–33. Curran Associates, Inc.
Olshausen, B. A., and D. J. Field. 1996. β€œNatural Image Statistics and Efficient Coding.” Network (Bristol, England) 7 (2): 333–39.
Olshausen, Bruno A., and David J. Field. 1996. β€œEmergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images.” Nature 381 (6583): 607–9.
Olshausen, Bruno A, and David J Field. 2004. β€œSparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87.
Qian, Wei, Bin Hong, Deng Cai, Xiaofei He, and Xuelong Li. 2016. β€œNon-Negative Matrix Factorization with Sinkhorn Distance.” In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, 1960–66. IJCAI’16. New York, New York, USA: AAAI Press.
Rubinstein, Ron, A.M. Bruckstein, and Michael Elad. 2010. β€œDictionaries for Sparse Representation Modeling.” Proceedings of the IEEE 98 (6): 1045–57.
Scetbon, Meyer, Marco Cuturi, and Gabriel PeyrΓ©. 2021. β€œLow-Rank Sinkhorn Factorization.” In Proceedings of the 38th International Conference on Machine Learning, 9344–54. PMLR.
Schmitz, Morgan A., Matthieu Heitz, Nicolas Bonneel, Fred NgolΓ¨, David Coeurjolly, Marco Cuturi, Gabriel PeyrΓ©, and Jean-Luc Starck. 2018. β€œWasserstein Dictionary Learning: Optimal Transport-Based Unsupervised Nonlinear Dictionary Learning.” SIAM Journal on Imaging Sciences 11 (1): 643–78.
Shen, Chunhua, and Hanxi Li. 2010. β€œOn the Dual Formulation of Boosting Algorithms.” IEEE Transactions on Pattern Analysis and Machine Intelligence 32 (12): 2216–31.
Simoncelli, Eero P, and Bruno A Olshausen. 2001. β€œNatural Image Statistics and Neural Representation.” Annual Review of Neuroscience 24 (1): 1193–1216.
Soh, Yong Sheng, and Venkat Chandrasekaran. 2017. β€œA Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers.” arXiv:1701.01207 [Cs, Math, Stat], January.
Yaghoobi, M., Sangnam Nam, R. Gribonval, and M.E. Davies. 2013. β€œConstrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling.” IEEE Transactions on Signal Processing 61 (9): 2341–55.
Zhang, Stephen Y. 2021. β€œA Unified Framework for Non-Negative Matrix and Tensor Factorisations with a Smoothed Wasserstein Loss.” arXiv.
