Compressed sensing and sampling

A fancy way of counting zeros



Higgledy-piggledy notes on the theme of exploiting sparsity to recover signals from few non-local measurements, given that we know they are nearly sparse, in a sense that will be made clear soon.

See also matrix factorisations, restricted isometry properties, Riesz bases…

Basic Compressed Sensing

I’ll follow the intro of (E. J. Candès et al. 2011), which tries to unify many variants.

We attempt to recover a signal \(x\in \mathbb{R}^d\) from \(m\ll d\) measurements \(y_k\) of the form

\[y_k =\langle a_k, x\rangle + z_k,\, 1\leq k \leq m,\]

or, as a matrix equation,

\[ y = Ax + z \]

where \(A\) is the \(m \times d\) matrix of stacked measurement vectors \(a_k\), and \(z\) collects the i.i.d. measurement noise terms.
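Here is a minimal toy instance of that measurement model in numpy; the dimensions, sparsity level and noise scale are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 400, 100, 10   # ambient dimension, measurements, sparsity (arbitrary toy values)

# An s-sparse ground truth: s nonzero coefficients at random positions.
x = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
x[support] = rng.normal(size=s)

# Random Gaussian measurement matrix; its rows play the role of the a_k.
A = rng.normal(size=(m, d)) / np.sqrt(m)

# Noisy, non-adaptive measurements y = A x + z.
sigma = 0.01
z = sigma * rng.normal(size=m)
y = A @ x + z
```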

Now, if \(x\) is a sparse vector and \(A\) satisfies a restricted isometry property (defined below), then we can construct an estimate \(\hat{x}\) with small error by minimising

\[ \hat{x}=\argmin_{\dot{x}} \|\dot{x}\|_1 \text{ subject to } \|A\dot{x}-y\|_2 \leq \varepsilon, \]

where \(\varepsilon\) is an upper bound on the noise level, \(\|z\|_2\leq \varepsilon.\)
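A sketch of that constrained \(\ell_1\) recovery, continuing the toy setup above and assuming the cvxpy package is available (any convex solver handling second-order cone constraints would do):

```python
# Continues the toy setup above; a sketch assuming cvxpy is installed.
import cvxpy as cp
import numpy as np

eps = 1.1 * np.linalg.norm(z)   # slack just above the realised noise norm

x_var = cp.Variable(d)
problem = cp.Problem(
    cp.Minimize(cp.norm1(x_var)),
    [cp.norm2(A @ x_var - y) <= eps],
)
problem.solve()

x_hat = x_var.value
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```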

In the lecture notes on restricted isometry properties, Candès and Tao talk not about vectors \(x\in \mathbb{R}^d\) but about functions \(f:G \to \mathbb{C}\) on Abelian groups like \(G=\mathbb{Z}/d\mathbb{Z},\) which is convenient for some phrasing: saying the signal is \(s\)-sparse then means that its support \(S=\operatorname{supp} \tilde{f}\subset G\) has \(|S|=s\).

In the finite-dimensional vector framing, we can talk about best sparse approximations \(x_s\) to non-sparse vectors, \(x\).

\[ x_s = \argmin_{\|\dot{x}\|_0\leq s} \|x-\dot{x}\|_2 \]

where all the coefficients apart from the \(s\) largest are zeroed.
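That projection is a one-liner; a small numpy helper, purely for illustration:

```python
import numpy as np

def best_s_sparse(x, s):
    """Return the best s-sparse approximation of x in the l2 sense:
    keep the s largest-magnitude coefficients and zero the rest."""
    x_s = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    x_s[keep] = x[keep]
    return x_s
```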

The basic results find, nested inside a family of nastier combinatorial problems, attractive convex problems that succeed with high probability. There are also greedy optimisation versions, formulated as above but no longer solved as convex optimisations; instead we talk about Orthogonal Matching Pursuit, Iterative Thresholding and some other methods whose details I do not yet know, which I think pop up in wavelets and sparse coding.
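For flavour, here is a bare-bones Orthogonal Matching Pursuit sketch in numpy; it is a toy version for illustration, and a maintained implementation (e.g. scikit-learn's OrthogonalMatchingPursuit) would be the sensible choice in practice:

```python
import numpy as np

def omp(A, y, s):
    """Bare-bones Orthogonal Matching Pursuit: at each step pick the column most
    correlated with the current residual, then re-fit by least squares on the
    support selected so far."""
    d = A.shape[1]
    support = []
    residual = y.copy()
    coef = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(d)
    x_hat[support] = coef
    return x_hat
```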

For all of these the results tend to be something like

with data \(y,\) the difference between my estimate \(\hat{x}\) and the oracle estimate \(\hat{x}_\text{oracle}\) is bounded by something-or-other, where the oracle estimate is the one you could compute if you knew ahead of time the support set \(S=\operatorname{supp}(x)\).

Candès gives an example result

\[ \|\hat{x}-x\|_2 \leq C_0\frac{\|x-x_s\|_1}{\sqrt{s}} + C_1\varepsilon \]

conditional upon

\[ \delta_{2s}(A) < \sqrt{2} -1 \]

where this \(\delta_s(\cdot)\) gives the restricted isometry constant of a matrix, defined as the smallest constant such that \((1-\delta_s(A))\|x\|_2^2\leq \|Ax\|_2^2\leq (1+\delta_s(A))\|x\|_2^2\) for all \(s\)-sparse \(x\). That is, the measurement matrix does not change the norm of sparse signals “much”, and in particular, does not null them when \(\delta_s < 1.\)
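To make that definition concrete, here is a brute-force computation of \(\delta_s\) straight from the definition; it enumerates every size-\(s\) support, so it is only feasible for toy matrices:

```python
from itertools import combinations

import numpy as np

def restricted_isometry_constant(A, s):
    """Brute-force delta_s(A): the largest deviation of the squared singular values
    of any m-by-s column submatrix from 1. Enumerates all supports, toy sizes only."""
    d = A.shape[1]
    delta = 0.0
    for support in combinations(range(d), s):
        svals = np.linalg.svd(A[:, list(support)], compute_uv=False)
        delta = max(delta, svals.max() ** 2 - 1.0, 1.0 - svals.min() ** 2)
    return delta

# e.g. delta_2 of a small random matrix:
rng = np.random.default_rng(2)
A_small = rng.normal(size=(8, 20)) / np.sqrt(8)
print(restricted_isometry_constant(A_small, 2))
```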

This is apparently not the strongest bound out there, but for any bound of that form, those constants look frustrating.

Measuring the restricted isometry constant of a given measurement matrix is presumably hard, although I haven’t tried yet. But generating random matrices that have a certain RIC with high probability is easy; that’s a neat trick in this area.
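A sketch of the standard random construction, with a cheap Monte-Carlo spot check (evidence, not a certificate) that it roughly preserves the norms of sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, s = 100, 400, 10
A = rng.normal(size=(m, d)) / np.sqrt(m)   # i.i.d. N(0, 1/m) entries

# Monte-Carlo spot check: how much does A distort random s-sparse unit vectors?
ratios = []
for _ in range(2000):
    x = np.zeros(d)
    idx = rng.choice(d, size=s, replace=False)
    x[idx] = rng.normal(size=s)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(A @ x) ** 2)

print(min(ratios), max(ratios))   # clusters near 1 when m is comfortably above s*log(d/s)
```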

Redundant compressed sensing

πŸ— For now see Frame theory.

Introductory texts

…Using random projections

Classic. Notes under low dimensional projections

…Using deterministic projections

Surely this is close to quasi-Monte Carlo?

  • Dustin G. Mixon, Achieving the Welch bound with difference sets:

    I blogged about constructing harmonic frames using difference sets. We proved that such harmonic frames are equiangular tight frames, thereby having minimal coherence between columns. I concluded the entry by conjecturing that incoherent harmonic frames are as good for compressed sensing as harmonic frames whose rows were randomly drawn from the discrete Fourier transform (DFT) matrix.

  • A variant on the compressed sensing of Yves Meyer

    recent work of Yves Meyer might be relevant:

    • A variant on the compressed sensing of Emmanuel Candes, Basarab Matei and Yves Meyer

    • Simple quasicrystals are sets of stable sampling, Basarab Matei and Yves Meyer

    These papers are interesting because their approach to compressed sensing is very different. Specifically, their sparse vectors are actually functions of compact support with sufficiently small Lebesgue measure. As such, concepts like conditioning are replaced with that of stable sampling, and the results must be interpreted in the context of functional analysis. The papers demonstrate that sampling frequencies according to a (deterministic) simple quasicrystal will uniquely determine sufficiently sparse functions, and furthermore, the sparsest function in the preimage can be recovered by L1-minimization provided it’s nonnegative.

Bayesian

Sparse Bayes can be tricky. See, perhaps, Bayesian Compressive Sensing.

Phase transitions

How well can you recover a matrix from a certain number of measurements? In the obvious metrics, there is a sudden jump in how well you do as the number of measurements increases, for a given rank. This looks a lot like a physical phase transition, which is a known phenomenon in ML. Hmm.

Weird things to be classified

csgm (Bora et al. 2017), compressed sensing using generative models, tries to find a model which is sparse with respect to… some manifold of the latent variables of… a generative model? Or something?

Sparse FFT.

References

Achlioptas, Dimitris. 2003. “Database-Friendly Random Projections: Johnson-Lindenstrauss with Binary Coins.” Journal of Computer and System Sciences, Special Issue on PODS 2001, 66 (4): 671–87.
Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. “Extreme Compressive Sampling for Covariance Estimation.” arXiv:1506.00898 [Cs, Math, Stat], June.
Bach, Francis, Rodolphe Jenatton, and Julien Mairal. 2011. Optimization With Sparsity-Inducing Penalties. Foundations and Trends(r) in Machine Learning 1.0. Now Publishers Inc.
Baraniuk, Richard G. 2007. “Compressive Sensing.” IEEE Signal Processing Magazine 24 (4).
———. 2008. “Single-Pixel Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 83–91.
Baraniuk, Richard G., Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. 2010. “Model-Based Compressive Sensing.” IEEE Transactions on Information Theory 56 (4): 1982–2001.
Baraniuk, Richard, Mark Davenport, Ronald DeVore, and Michael Wakin. 2008. “A Simple Proof of the Restricted Isometry Property for Random Matrices.” Constructive Approximation 28 (3): 253–63.
Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. “Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing 58 (1): 269–80.
Bayati, Mohsen, and Andrea Montanari. 2011. “The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing.” IEEE Transactions on Information Theory 57 (2): 764–85.
Berger, Bonnie, Noah M. Daniels, and Y. William Yu. 2016. “Computational Biology in the 21st Century: Scaling with Compressive Algorithms.” Communications of the ACM 59 (8): 72–80.
Bian, W., and X. Chen. 2013. “Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization.” SIAM Journal on Optimization 23 (3): 1718–41.
Bingham, Ella, and Heikki Mannila. 2001. “Random Projection in Dimensionality Reduction: Applications to Image and Text Data.” In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 245–50. KDD ’01. New York, NY, USA: ACM.
Blanchard, Jeffrey D. 2013. “Toward Deterministic Compressed Sensing.” Proceedings of the National Academy of Sciences 110 (4): 1146–47.
Bora, Ashish, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. 2017. “Compressed Sensing Using Generative Models.” In International Conference on Machine Learning, 537–46.
Borgerding, Mark, and Philip Schniter. 2016. “Onsager-Corrected Deep Networks for Sparse Linear Inverse Problems.” arXiv:1612.01183 [Cs, Math], December.
Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008a. “Sparse Non-Negative Solution of a Linear System of Equations Is Unique.” In 3rd International Symposium on Communications, Control and Signal Processing, 2008. ISCCSP 2008, 762–67.
———. 2008b. “On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations.” IEEE Transactions on Information Theory 54 (11): 4813–20.
Cai, T. Tony, Guangwu Xu, and Jun Zhang. 2008. “On Recovery of Sparse Signals via ℓ1 Minimization.” arXiv:0805.0149 [Cs], May.
Cai, T. Tony, and Anru Zhang. 2015. “ROP: Matrix Recovery via Rank-One Projections.” The Annals of Statistics 43 (1): 102–38.
Candès, Emmanuel J. 2014. “Mathematics of Sparsity (and Few Other Things).” ICM 2014 Proceedings, to Appear.
Candès, Emmanuel J., and Mark A. Davenport. 2011. “How Well Can We Estimate a Sparse Vector?” arXiv:1104.5246 [Cs, Math, Stat], April.
Candès, Emmanuel J., Yonina C. Eldar, Deanna Needell, and Paige Randall. 2011. “Compressed Sensing with Coherent and Redundant Dictionaries.” Applied and Computational Harmonic Analysis 31 (1): 59–73.
Candès, Emmanuel J., and Benjamin Recht. 2009. “Exact Matrix Completion via Convex Optimization.” Foundations of Computational Mathematics 9 (6): 717–72.
Candès, Emmanuel J., J. Romberg, and T. Tao. 2006a. “Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information.” IEEE Transactions on Information Theory 52 (2): 489–509.
Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006b. “Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics 59 (8): 1207–23.
Candès, Emmanuel J., and Terence Tao. 2006. “Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?” IEEE Transactions on Information Theory 52 (12): 5406–25.
———. 2008. “The Uniform Uncertainty Principle and Compressed Sensing.”
Candès, Emmanuel J., and M.B. Wakin. 2008. “An Introduction To Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 21–30.
Candès, Emmanuel, and Terence Tao. 2005. “Decoding by Linear Programming.” IEEE Transactions on Information Theory 51 (12): 4203–15.
Carmi, Avishy Y. 2013. “Compressive System Identification: Sequential Methods and Entropy Bounds.” Digital Signal Processing 23 (3): 751–70.
———. 2014. “Compressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg.
Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc.
Chartrand, R., and Wotao Yin. 2008. “Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008, 3869–72.
Chen, Xiaojun. 2012. “Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99.
Chen, Xiaojun, and Weijun Zhou. 2013. “Convergence of the Reweighted ℓ.” Computational Optimization and Applications 59 (1-2): 47–61.
Chretien, Stephane. 2008. “An Alternating L1 Approach to the Compressed Sensing Problem.” arXiv:0809.0660 [Stat], September.
Dasgupta, Sanjoy. 2000. “Experiments with Random Projection.” In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 143–51. UAI’00. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Dasgupta, Sanjoy, and Anupam Gupta. 2003. “An Elementary Proof of a Theorem of Johnson and Lindenstrauss.” Random Structures & Algorithms 22 (1): 60–65.
Dasgupta, Sanjoy, Daniel Hsu, and Nakul Verma. 2006. “A Concentration Theorem for Projections.” In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, 114–21. UAI’06. Arlington, Virginia, USA: AUAI Press.
Daubechies, I., M. Defrise, and C. De Mol. 2004. “An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint.” Communications on Pure and Applied Mathematics 57 (11): 1413–57.
Daubechies, Ingrid, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk. 2010. “Iteratively Reweighted Least Squares Minimization for Sparse Recovery.” Communications on Pure and Applied Mathematics 63 (1): 1–38.
DeVore, Ronald A. 1998. “Nonlinear Approximation.” Acta Numerica 7 (January): 51–150.
Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815.
Donoho, D. L., M. Elad, and V. N. Temlyakov. 2006. “Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise.” IEEE Transactions on Information Theory 52 (1): 6–18.
Donoho, David L. 2006. “Compressed Sensing.” IEEE Transactions on Information Theory 52 (4): 1289–1306.
Donoho, David L., and Michael Elad. 2003. “Optimally Sparse Representation in General (Nonorthogonal) Dictionaries via ℓ1 Minimization.” Proceedings of the National Academy of Sciences 100 (5): 2197–2202.
Donoho, David L., A. Maleki, and A. Montanari. 2010. “Message Passing Algorithms for Compressed Sensing: I. Motivation and Construction.” In 2010 IEEE Information Theory Workshop (ITW), 1–5.
Donoho, David L., Arian Maleki, and Andrea Montanari. 2009a. “Message-Passing Algorithms for Compressed Sensing.” Proceedings of the National Academy of Sciences 106 (45): 18914–19.
———. 2009b. “Message Passing Algorithms for Compressed Sensing: II. Analysis and Validation.” In 2010 IEEE Information Theory Workshop (ITW), 1–5.
Duarte, Marco F., and Richard G. Baraniuk. 2013. “Spectral Compressive Sensing.” Applied and Computational Harmonic Analysis 35 (1): 111–29.
Flammia, Steven T., David Gross, Yi-Kai Liu, and Jens Eisert. 2012. “Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators.” New Journal of Physics 14 (9): 095022.
Foygel, Rina, and Nathan Srebro. 2011. “Fast-Rate and Optimistic-Rate Error Bounds for L1-Regularized Regression.” arXiv:1108.0373 [Math, Stat], August.
Freund, Yoav, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. 2007. “Learning the Structure of Manifolds Using Random Projections.” In Advances in Neural Information Processing Systems, 473–80.
Giryes, R., G. Sapiro, and A. M. Bronstein. 2016. “Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?” IEEE Transactions on Signal Processing 64 (13): 3444–57.
Graff, Christian G., and Emil Y. Sidky. 2015. “Compressive Sensing in Medical Imaging.” Applied Optics 54 (8): C23–44.
Hall, Peter, and Ker-Chau Li. 1993. “On Almost Linearity of Low Dimensional Projections from High Dimensional Data.” The Annals of Statistics 21 (2): 867–89.
Harchaoui, Zaid, Anatoli Juditsky, and Arkadi Nemirovski. 2015. “Conditional Gradient Algorithms for Norm-Regularized Smooth Convex Optimization.” Mathematical Programming 152 (1-2): 75–112.
Hassanieh, Haitham, Piotr Indyk, Dina Katabi, and Eric Price. 2012. “Nearly Optimal Sparse Fourier Transform.” In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, 563–78. STOC ’12. New York, NY, USA: ACM.
Hassanieh, H., P. Indyk, D. Katabi, and E. Price. 2012. “Simple and Practical Algorithm for Sparse Fourier Transform.” In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 1183–94. Proceedings. Kyoto, Japan: Society for Industrial and Applied Mathematics.
Hegde, Chinmay, and Richard G. Baraniuk. 2012. “Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory 58 (12): 7204–14.
Hormati, A., O. Roy, Y.M. Lu, and M. Vetterli. 2010. “Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications.” IEEE Transactions on Signal Processing 58 (3): 1095–1109.
Hoyer, Patrik O. n.d. “Non-Negative Matrix Factorization with Sparseness Constraints.” Journal of Machine Learning Research 5 (9): 1457–69.
Jaggi, Martin. 2013. “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization.” In Journal of Machine Learning Research, 427–35.
Jung, Alexander, Reinhard Heckel, Helmut Bölcskei, and Franz Hlawatsch. 2013. “Compressive Nonparametric Graphical Model Selection For Time Series.” arXiv:1311.3257 [Stat], November.
Kabán, Ata. 2014. “New Bounds on Compressive Linear Least Squares Regression.” In Journal of Machine Learning Research, 448–56.
Kim, Daeun, and Justin P. Haldar. 2016. “Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery.” Signal Processing 125 (August): 274–89.
Lahiri, Subhaneil, Peiran Gao, and Surya Ganguli. 2016. “Random Projections of Random Manifolds.” arXiv:1607.04331 [Cs, q-Bio, Stat], July.
Launay, Julien, Iacopo Poli, François Boniface, and Florent Krzakala. 2020. “Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures.” In Advances in Neural Information Processing Systems, 33:15.
Li, Ping, Trevor J. Hastie, and Kenneth W. Church. 2006. “Very Sparse Random Projections.” In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 287–96. KDD ’06. New York, NY, USA: ACM.
Li, Yingying, and Stanley Osher. 2009. “Coordinate Descent Optimization for ℓ1 Minimization with Application to Compressed Sensing; a Greedy Algorithm.” Inverse Problems and Imaging 3 (3): 487–503.
Matei, Basarab, and Yves Meyer. 2010. “Simple Quasicrystals Are Sets of Stable Sampling.” Complex Variables and Elliptic Equations 55 (8-10): 947–64.
———. n.d. “A Variant on the Compressed Sensing of Emmanuel Candes.”
Mishali, Moshe, and Yonina C. Eldar. 2010. “From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals.” IEEE Journal of Selected Topics in Signal Processing 4 (2): 375–91.
Montanari, Andrea. 2012. “Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications, 394–438.
Mousavi, Ali, and Richard G. Baraniuk. 2017. “Learning to Invert: Signal Recovery via Deep Convolutional Networks.” In ICASSP.
Needell, D., and J. A. Tropp. 2008. “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples.” arXiv:0803.2392 [Cs, Math], March.
Oka, A, and L. Lampe. 2008. “Compressed Sensing of Gauss-Markov Random Field with Wireless Sensor Networks.” In 5th IEEE Sensor Array and Multichannel Signal Processing Workshop, 2008. SAM 2008, 257–60.
Olshausen, B. A., and D. J. Field. 1996. “Natural image statistics and efficient coding.” Network (Bristol, England) 7 (2): 333–39.
Olshausen, Bruno A, and David J Field. 2004. “Sparse Coding of Sensory Inputs.” Current Opinion in Neurobiology 14 (4): 481–87.
Oxvig, Christian Schou, Thomas Arildsen, and Torben Larsen. 2017. “Generalized Approximate Message Passing: Relations and Derivations.” Aalborg University.
Pawar, Sameer, and Kannan Ramchandran. 2015. “A Robust Sub-Linear Time R-FFAST Algorithm for Computing a Sparse DFT.” arXiv:1501.00320 [Cs, Math], January.
Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2010. “Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery.” IEEE Transactions on Signal Processing 60 (5): 2286–2303.
Qiuyun Zou, Haochuan Zhang, Chao-Kai Wen, Shi Jin, and Rong Yu. 2018. “Concise Derivation for Generalized Approximate Message Passing Using Expectation Propagation.” IEEE Signal Processing Letters 25 (12): 1835–39.
Rangan, Sundeep. 2011. “Generalized Approximate Message Passing for Estimation with Random Linear Mixing.” In 2011 IEEE International Symposium on Information Theory Proceedings, 2168–72. St. Petersburg, Russia: IEEE.
Ravishankar, Saiprasad, and Yoram Bresler. 2015. “Efficient Blind Compressed Sensing Using Sparsifying Transforms with Convergence Guarantees and Application to MRI.” arXiv:1501.02923 [Cs, Stat], January.
Ravishankar, S., and Y. Bresler. 2015. “Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees.” IEEE Transactions on Signal Processing 63 (9): 2389–2404.
Rish, Irina, and Genady Grabarnik. 2014. “Sparse Signal Recovery with Exponential-Family Noise.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg.
Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group.
Romberg, J. 2008. “Imaging via Compressive Sampling.” IEEE Signal Processing Magazine 25 (2): 14–20.
Rosset, Saharon, and Ji Zhu. 2007. “Piecewise Linear Regularized Solution Paths.” The Annals of Statistics 35 (3): 1012–30.
Rubinstein, Ron, T. Peleg, and Michael Elad. 2013. “Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model.” IEEE Transactions on Signal Processing 61 (3): 661–77.
Sarvotham, Shriram, Dror Baron, and Richard G. Baraniuk. 2006. “Measurements Vs. Bits: Compressed Sensing Meets Information Theory.” In Proc. Allerton Conf. On Comm., Control, and Computing.
Schniter, P., and S. Rangan. 2012. “Compressive Phase Retrieval via Generalized Approximate Message Passing.” In 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 815–22.
Shalev-Shwartz, Shai, and Ambuj Tewari. 2011. “Stochastic Methods for L1-Regularized Loss Minimization.” Journal of Machine Learning Research 12 (July): 1865–92.
Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. “L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework.” arXiv:1512.04011 [Cs], December.
Song, Ruiyang, Yao Xie, and Sebastian Pokutta. 2015. “Sequential Information Guided Sensing.” arXiv:1509.00130 [Cs, Math, Stat], August.
Tropp, J. A., and S. J. Wright. 2010. “Computational Methods for Sparse Solution of Linear Inverse Problems.” Proceedings of the IEEE 98 (6): 948–58.
Tropp, J.A. 2006. “Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise.” IEEE Transactions on Information Theory 52 (3): 1030–51.
Vetterli, Martin. 1999. “Wavelets: Approximation and Compression–a Review.” In AeroSense’99, 3723:28–31. International Society for Optics and Photonics.
Weidmann, Claudio, and Martin Vetterli. 2012. “Rate Distortion Behavior of Sparse Sources.” IEEE Transactions on Information Theory 58 (8): 4969–92.
Wipf, David, and Srikantan Nagarajan. 2016. “Iterative Reweighted L1 and L2 Methods for Finding Sparse Solution.” Microsoft Research, July.
Wu, R., W. Huang, and D. R. Chen. 2013. “The Exact Support Recovery of Sparse Signals With Noise via Orthogonal Matching Pursuit.” IEEE Signal Processing Letters 20 (4): 403–6.
Wu, Yan, Mihaela Rosca, and Timothy Lillicrap. 2019. “Deep Compressed Sensing.” In International Conference on Machine Learning, 6850–60.
Yaghoobi, M., Sangnam Nam, R. Gribonval, and M.E. Davies. 2012. “Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5409–12.
Yang, Wenzhuo, and Huan Xu. 2015. “Streaming Sparse Principal Component Analysis.” In Journal of Machine Learning Research, 494–503.
Zhang, Kai, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, and Jieping Ye. 2017. “Randomization or Condensation?: Linear-Cost Matrix Sketching Via Cascaded Compression Sampling.” In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 615–23. KDD ’17. New York, NY, USA: ACM.
