Abramovich, Felix, Yoav Benjamini, David L. Donoho, and Iain M. Johnstone. 2006. “Adapting to Unknown Sparsity by Controlling the False Discovery Rate.” The Annals of Statistics 34 (2): 584–653.
Aghasi, Alireza, Nam Nguyen, and Justin Romberg. 2016. “Net-Trim: A Layer-Wise Convex Pruning of Deep Neural Networks.” arXiv:1611.05162 [Cs, Stat]
Aragam, Bryon, Arash A. Amini, and Qing Zhou. 2015. “Learning Directed Acyclic Graphs with Penalized Neighbourhood Regression.” arXiv:1511.08963 [Cs, Math, Stat]
Azadkia, Mona, and Sourav Chatterjee. 2019. “A Simple Measure of Conditional Dependence.” arXiv:1910.12327 [Cs, Math, Stat]
Azizyan, Martin, Akshay Krishnamurthy, and Aarti Singh. 2015. “Extreme Compressive Sampling for Covariance Estimation.” arXiv:1506.00898 [Cs, Math, Stat]
Bach, Francis, Rodolphe Jenatton, and Julien Mairal. 2011. Optimization With Sparsity-Inducing Penalties. Foundations and Trends® in Machine Learning. Now Publishers Inc.
Banerjee, Arindam, Sheng Chen, Farideh Fazayeli, and Vidyashankar Sivakumar. 2014. “Estimation with Norm Regularization.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 1556–64. Curran Associates, Inc.
Banerjee, Onureena, Laurent El Ghaoui, and Alexandre d’Aspremont. 2008. “Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data.” Journal of Machine Learning Research 9 (Mar): 485–516.
Barber, Rina Foygel, and Emmanuel J. Candès. 2015. “Controlling the False Discovery Rate via Knockoffs.” The Annals of Statistics 43 (5): 2055–85.
Baron, Dror, Shriram Sarvotham, and Richard G. Baraniuk. 2010. “Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing 58 (1): 269–80.
Barron, Andrew R., Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore. 2008. “Approximation and Learning by Greedy Algorithms.” The Annals of Statistics 36 (1): 64–94.
Barron, Andrew R., Cong Huang, Jonathan Q. Li, and Xi Luo. 2008. “MDL, Penalized Likelihood, and Statistical Risk.” In Information Theory Workshop, 2008. ITW’08. IEEE, 247–57. IEEE.
Bayati, M., and A. Montanari. 2012. “The LASSO Risk for Gaussian Matrices.” IEEE Transactions on Information Theory 58 (4): 1997–2017.
Bellec, Pierre C., and Alexandre B. Tsybakov. 2016. “Bounds on the Prediction Error of Penalized Least Squares Estimators with Convex Penalty.” arXiv:1609.06675 [Math, Stat]
Belloni, Alexandre, Victor Chernozhukov, and Lie Wang. 2011. “Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming.” Biometrika 98 (4): 791–806.
Berk, Richard, Lawrence Brown, Andreas Buja, Kai Zhang, and Linda Zhao. 2013. “Valid Post-Selection Inference.” The Annals of Statistics 41 (2): 802–37.
Bertin, K., E. Le Pennec, and V. Rivoirard. 2011. “Adaptive Dantzig Density Estimation.” Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 47 (1): 43–74.
Bien, Jacob, Irina Gaynanova, Johannes Lederer, and Christian L. Müller. 2018. “Non-Convex Global Minimization and False Discovery Rate Control for the TREX.” Journal of Computational and Graphical Statistics 27 (1): 23–33.
Bloniarz, Adam, Hanzhong Liu, Cun-Hui Zhang, Jasjeet Sekhon, and Bin Yu. 2015. “Lasso Adjustments of Treatment Effect Estimates in Randomized Experiments.” arXiv:1507.03652 [Math, Stat]
Bondell, Howard D., Arun Krishna, and Sujit K. Ghosh. 2010. “Joint Variable Selection for Fixed and Random Effects in Linear Mixed-Effects Models.” Biometrics 66 (4): 1069–77.
Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. 2016. “Optimization Methods for Large-Scale Machine Learning.” arXiv:1606.04838 [Cs, Math, Stat]
Bruckstein, A. M., Michael Elad, and M. Zibulevsky. 2008. “On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations.” IEEE Transactions on Information Theory 54 (11): 4813–20.
Brunton, Steven L., Joshua L. Proctor, and J. Nathan Kutz. 2016. “Discovering Governing Equations from Data by Sparse Identification of Nonlinear Dynamical Systems.” Proceedings of the National Academy of Sciences 113 (15): 3932–37.
Bu, Yunqi, and Johannes Lederer. 2017. “Integrating Additional Knowledge Into Estimation of Graphical Models.” arXiv:1704.02739 [Stat]
Bühlmann, Peter, and Sara van de Geer. 2011. “Additive Models and Many Smooth Univariate Functions.” In Statistics for High-Dimensional Data, 77–97. Springer Series in Statistics. Springer Berlin Heidelberg.
Bunea, Florentina, Alexandre B. Tsybakov, and Marten H. Wegkamp. 2007a. “Sparse Density Estimation with ℓ1 Penalties.” In Learning Theory, edited by Nader H. Bshouty and Claudio Gentile, 530–43. Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Bunea, Florentina, Alexandre Tsybakov, and Marten Wegkamp. 2007b. “Sparsity Oracle Inequalities for the Lasso.” Electronic Journal of Statistics
Candès, Emmanuel J., and Mark A. Davenport. 2011. “How Well Can We Estimate a Sparse Vector?” arXiv:1104.5246 [Cs, Math, Stat]
Candès, Emmanuel J., Yingying Fan, Lucas Janson, and Jinchi Lv. 2016. “Panning for Gold: Model-Free Knockoffs for High-Dimensional Controlled Variable Selection.” arXiv Preprint arXiv:1610.02351
Candès, Emmanuel J., and Carlos Fernandez-Granda. 2013. “Super-Resolution from Noisy Data.” Journal of Fourier Analysis and Applications 19 (6): 1229–54.
Candès, Emmanuel J., and Yaniv Plan. 2010. “Matrix Completion With Noise.” Proceedings of the IEEE 98 (6): 925–36.
Candès, Emmanuel J., Justin K. Romberg, and Terence Tao. 2006. “Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics 59 (8): 1207–23.
Candès, Emmanuel J., Michael B. Wakin, and Stephen P. Boyd. 2008. “Enhancing Sparsity by Reweighted ℓ1 Minimization.” Journal of Fourier Analysis and Applications 14 (5–6): 877–905.
———. 2014. “Compressive System Identification.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 281–324. Signals and Communication Technology. Springer Berlin Heidelberg.
Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc.
Chartrand, R., and Wotao Yin. 2008. “Iteratively Reweighted Algorithms for Compressive Sensing.” In IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008.
Chatterjee, Sourav. 2020. “A New Coefficient of Correlation.” arXiv:1909.10140 [Math, Stat]
Chen, Minhua, J. Silva, J. Paisley, Chunping Wang, D. Dunson, and L. Carin. 2010. “Compressive Sensing on Manifolds Using a Nonparametric Mixture of Factor Analyzers: Algorithm and Performance Bounds.” IEEE Transactions on Signal Processing 58 (12): 6140–55.
Chen, Xiaojun. 2012. “Smoothing Methods for Nonsmooth, Nonconvex Minimization.” Mathematical Programming 134 (1): 71–99.
Chen, Y., and A. O. Hero. 2012. “Recursive ℓ1,∞ Group Lasso.” IEEE Transactions on Signal Processing 60 (8): 3978–87.
Chernozhukov, Victor, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. 2016. “Double/Debiased Machine Learning for Treatment and Causal Parameters.” arXiv:1608.00060 [Econ, Stat]
Chernozhukov, Victor, Christian Hansen, Yuan Liao, and Yinchu Zhu. 2018. “Inference For Heterogeneous Effects Using Low-Rank Estimations.” arXiv:1812.08089 [Math, Stat]
Chernozhukov, Victor, Whitney K. Newey, and Rahul Singh. 2018. “Learning L2 Continuous Regression Functionals via Regularized Riesz Representers.” arXiv:1809.05224 [Econ, Math, Stat]
Chetverikov, Denis, Zhipeng Liao, and Victor Chernozhukov. 2016. “On Cross-Validated Lasso.” arXiv:1605.02214 [Math, Stat]
Chichignoud, Michaël, Johannes Lederer, and Martin Wainwright. 2014. “A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees.” arXiv:1410.0247 [Math, Stat]
Dai, Ran, and Rina Foygel Barber. 2016. “The Knockoff Filter for FDR Control in Group-Sparse and Multitask Regression.” arXiv Preprint arXiv:1602.03589
Diaconis, Persi, and David Freedman. 1984. “Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics 12 (3): 793–815.
Dossal, Charles, Maher Kachour, Jalal M. Fadili, Gabriel Peyré, and Christophe Chesneau. 2011. “The Degrees of Freedom of the Lasso for General Design Matrix.” arXiv:1111.1162 [Cs, Math, Stat]
Efron, Bradley, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. 2004. “Least Angle Regression.” The Annals of Statistics 32 (2): 407–99.
El Karoui, Noureddine. 2008. “Operator Norm Consistent Estimation of Large Dimensional Sparse Covariance Matrices.” The Annals of Statistics 36 (6): 2717–56.
Elhamifar, E., and R. Vidal. 2013. “Sparse Subspace Clustering: Algorithm, Theory, and Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (11): 2765–81.
Engebretsen, Solveig, and Jon Bohlin. 2019. “Statistical Predictions with Glmnet.” Clinical Epigenetics 11 (1): 123.
Ewald, Karl, and Ulrike Schneider. 2015. “Confidence Sets Based on the Lasso Estimator.” arXiv:1507.05315 [Math, Stat]
Fan, Jianqing, and Runze Li. 2001. “Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association 96 (456): 1348–60.
Fan, Rong-En, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. “LIBLINEAR: A Library for Large Linear Classification.” Journal of Machine Learning Research 9: 1871–74.
Flynn, Cheryl J., Clifford M. Hurvich, and Jeffrey S. Simonoff. 2013. “Efficiency for Regularization Parameter Selection in Penalized Likelihood Estimation of Misspecified Models.” arXiv:1302.2068 [Stat]
Foygel, Rina, and Nathan Srebro. 2011. “Fast-Rate and Optimistic-Rate Error Bounds for L1-Regularized Regression.” arXiv:1108.0373 [Math, Stat]
Friedman, Jerome, Trevor Hastie, Holger Höfling, and Robert Tibshirani. 2007. “Pathwise Coordinate Optimization.” The Annals of Applied Statistics 1 (2): 302–32.
Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9 (3): 432–41.
Gasso, G., A. Rakotomamonjy, and S. Canu. 2009. “Recovering Sparse Signals With a Certain Family of Nonconvex Penalties and DC Programming.” IEEE Transactions on Signal Processing 57 (12): 4686–98.
Geer, Sara A. van de. 2008. “High-Dimensional Generalized Linear Models and the Lasso.” The Annals of Statistics 36 (2): 614–45.
Geer, Sara A. van de, Peter Bühlmann, and Shuheng Zhou. 2011. “The Adaptive and the Thresholded Lasso for Potentially Misspecified Models (and a Lower Bound for the Lasso).” Electronic Journal of Statistics
Geer, Sara van de. 2007. “The Deterministic Lasso.”
———. 2014b. “Worst Possible Sub-Directions in High-Dimensional Models.” arXiv:1403.7023 [Math, Stat]
———. 2014c. “Statistical Theory for High-Dimensional Models.” arXiv:1409.8557 [Math, Stat]
———. 2016. Estimation and Testing Under Sparsity. Vol. 2159. Lecture Notes in Mathematics. Cham: Springer International Publishing.
Geer, Sara van de, Peter Bühlmann, Ya’acov Ritov, and Ruben Dezeure. 2014. “On Asymptotically Optimal Confidence Regions and Tests for High-Dimensional Models.” The Annals of Statistics 42 (3): 1166–1202.
Ghadimi, Saeed, and Guanghui Lan. 2013. “Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming.” SIAM Journal on Optimization 23 (4): 2341–68.
Giryes, Raja, Guillermo Sapiro, and Alex M. Bronstein. 2014. “On the Stability of Deep Networks.” arXiv:1412.5896 [Cs, Math, Stat]
Greenhill, Catherine, Mikhail Isaev, Matthew Kwan, and Brendan D. McKay. 2016. “The Average Number of Spanning Trees in Sparse Graphs with Given Degrees.” arXiv:1606.01586 [Math]
Gupta, Pawan, and Marianna Pensky. 2016. “Solution of Linear Ill-Posed Problems Using Random Dictionaries.” arXiv:1605.07913 [Math, Stat]
Hall, Peter, Jiashun Jin, and Hugh Miller. 2014. “Feature Selection When There Are Many Influential Features.” Bernoulli 20 (3): 1647–71.
Hall, Peter, and Jing-Hao Xue. 2014. “On Selecting Interacting Features from High-Dimensional Data.” Computational Statistics & Data Analysis 71 (March): 694–708.
Hallac, David, Jure Leskovec, and Stephen Boyd. 2015. “Network Lasso: Clustering and Optimization in Large Graphs.” arXiv:1507.00280 [Cs, Math, Stat]
Hansen, Niels Richard, Patricia Reynaud-Bouret, and Vincent Rivoirard. 2015. “Lasso and Probabilistic Inequalities for Multivariate Point Processes.” Bernoulli 21 (1): 83–143.
Hastie, Trevor J., Robert Tibshirani, and Martin J. Wainwright. 2015. Statistical Learning with Sparsity: The Lasso and Generalizations. Boca Raton: Chapman and Hall/CRC.
Hawe, S., M. Kleinsteuber, and K. Diepold. 2013. “Analysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing 22 (6): 2138–50.
He, Dan, Irina Rish, and Laxmi Parida. 2014. “Transductive HSIC Lasso.” In Proceedings of the 2014 SIAM International Conference on Data Mining, edited by Mohammed Zaki, Zoran Obradovic, Pang Ning Tan, Arindam Banerjee, Chandrika Kamath, and Srinivasan Parthasarathy, 154–62. Philadelphia, PA: Society for Industrial and Applied Mathematics.
Hebiri, Mohamed, and Sara A. van de Geer. 2011. “The Smooth-Lasso and Other ℓ1+ℓ2-Penalized Methods.” Electronic Journal of Statistics
Hegde, Chinmay, and Richard G. Baraniuk. 2012. “Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory 58 (12): 7204–14.
Hegde, Chinmay, Piotr Indyk, and Ludwig Schmidt. 2015. “A Nearly-Linear Time Framework for Graph-Structured Sparsity.” In Proceedings of the 32nd International Conference on Machine Learning (ICML-15).
Hesterberg, Tim, Nam Hee Choi, Lukas Meier, and Chris Fraley. 2008. “Least Angle and ℓ1 Penalized Regression: A Review.” Statistics Surveys
Hormati, A., O. Roy, Y. M. Lu, and M. Vetterli. 2010. “Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications.” IEEE Transactions on Signal Processing 58 (3): 1095–1109.
Hsieh, Cho-Jui, Mátyás A. Sustik, Inderjit S. Dhillon, and Pradeep D. Ravikumar. 2014. “QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation.” Journal of Machine Learning Research 15 (1): 2911–47.
Hu, Tao, Cengiz Pehlevan, and Dmitri B. Chklovskii. 2014. “A Hebbian/Anti-Hebbian Network for Online Sparse Dictionary Learning Derived from Symmetric Matrix Factorization.” In 2014 48th Asilomar Conference on Signals, Systems and Computers.
Ishwaran, Hemant, and J. Sunil Rao. 2005. “Spike and Slab Variable Selection: Frequentist and Bayesian Strategies.” The Annals of Statistics 33 (2): 730–73.
Janková, Jana, and Sara van de Geer. 2016. “Confidence Regions for High-Dimensional Generalized Linear Models Under Sparsity.” arXiv:1610.01353 [Math, Stat]
Janson, Lucas, William Fithian, and Trevor J. Hastie. 2015. “Effective Degrees of Freedom: A Flawed Metaphor.” Biometrika 102 (2): 479–85.
Javanmard, Adel, and Andrea Montanari. 2014. “Confidence Intervals and Hypothesis Testing for High-Dimensional Regression.” Journal of Machine Learning Research 15 (1): 2869–909.
Jung, Alexander. 2013. “An RKHS Approach to Estimation with Sparsity Constraints.” In Advances in Neural Information Processing Systems 29.
Kabán, Ata. 2014. “New Bounds on Compressive Linear Least Squares Regression.” In Journal of Machine Learning Research.
Kato, Kengo. 2009. “On the Degrees of Freedom in Shrinkage Estimation.” Journal of Multivariate Analysis 100 (7): 1338–52.
Kim, Yongdai, Sunghoon Kwon, and Hosik Choi. 2012. “Consistent Model Selection Criteria on High Dimensions.” Journal of Machine Learning Research 13 (Apr): 1037–57.
Koltchinskii, Vladimir. 2011. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Lecture Notes in Mathematics, École d’Été de Probabilités de Saint-Flour 2033. Heidelberg: Springer.
Koppel, Alec, Garrett Warnell, Ethan Stump, and Alejandro Ribeiro. 2016. “Parsimonious Online Learning with Kernels via Sparse Projections in Function Space.” arXiv:1612.04111 [Cs, Stat]
Kowalski, Matthieu, and Bruno Torrésani. 2009. “Structured Sparsity: From Mixed Norms to Structured Shrinkage.” In SPARS’09 - Signal Processing with Adaptive Sparse Structured Representations.
Krämer, Nicole, Juliane Schäfer, and Anne-Laure Boulesteix. 2009. “Regularized Estimation of Large-Scale Gene Association Networks Using Graphical Gaussian Models.” BMC Bioinformatics 10 (1): 384.
Lam, Clifford, and Jianqing Fan. 2009. “Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.” Annals of Statistics 37 (6B): 4254–78.
Lambert-Lacroix, Sophie, and Laurent Zwald. 2011. “Robust Regression Through the Huber’s Criterion and Adaptive Lasso Penalty.” Electronic Journal of Statistics
Langford, John, Lihong Li, and Tong Zhang. 2009. “Sparse Online Learning via Truncated Gradient.” In Advances in Neural Information Processing Systems 21, edited by D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, 905–12. Curran Associates, Inc.
Lederer, Johannes, and Michael Vogt. 2020. “Estimating the Lasso’s Effective Noise.” arXiv:2004.11554 [Stat]
Lee, Jason D., Dennis L. Sun, Yuekai Sun, and Jonathan E. Taylor. 2013. “Exact Post-Selection Inference, with Application to the Lasso.” arXiv:1311.6238 [Math, Stat]
Lemhadri, Ismael, Feng Ruan, Louis Abraham, and Robert Tibshirani. 2021. “LassoNet: A Neural Network with Feature Sparsity.” Journal of Machine Learning Research
22 (127): 1–29.
Li, Wei, and Johannes Lederer. 2019. “Tuning Parameter Calibration for ℓ1-Regularized Logistic Regression.” Journal of Statistical Planning and Inference
202 (September): 80–98.
Lim, Néhémy, and Johannes Lederer. 2016. “Efficient Feature Selection With Large and High-Dimensional Data.” arXiv:1609.07195 [Stat]
Lockhart, Richard, Jonathan Taylor, Ryan J. Tibshirani, and Robert Tibshirani. 2014. “A Significance Test for the Lasso.” The Annals of Statistics 42 (2): 413–68.
Lu, W., Y. Goldberg, and J. P. Fine. 2012. “On the Robustness of the Adaptive Lasso to Model Misspecification.” Biometrika 99 (3): 717–31.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems. Vol. 30. Curran Associates, Inc.
Mahoney, Michael W. 2016. “Lecture Notes on Spectral Graph Methods.” arXiv Preprint arXiv:1608.04845
Mazumder, Rahul, Jerome H. Friedman, and Trevor J. Hastie. 2009. “SparseNet: Coordinate Descent with Non-Convex Penalties.”
Meier, Lukas, Sara van de Geer, and Peter Bühlmann. 2008. “The Group Lasso for Logistic Regression.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 70 (1): 53–71.
Meinshausen, Nicolai, and Peter Bühlmann. 2006. “High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62.
Meinshausen, Nicolai, and Bin Yu. 2009. “Lasso-Type Recovery of Sparse Representations for High-Dimensional Data.” The Annals of Statistics 37 (1): 246–70.
Molchanov, Dmitry, Arsenii Ashukha, and Dmitry Vetrov. 2017. “Variational Dropout Sparsifies Deep Neural Networks.” In Proceedings of ICML.
Montanari, Andrea. 2012. “Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications
Naik, Prasad A., and Chih-Ling Tsai. 2001. “Single-Index Model Selections.” Biometrika 88 (3): 821–32.
Nam, Sangnam, and R. Gribonval. 2012. “Physics-Driven Structured Cosparse Modeling for Source Localization.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Needell, D., and J. A. Tropp. 2008. “CoSaMP: Iterative Signal Recovery from Incomplete and Inaccurate Samples.” arXiv:0803.2392 [Cs, Math]
Nesterov, Yu. 2012. “Gradient Methods for Minimizing Composite Functions.” Mathematical Programming 140 (1): 125–61.
Neville, Sarah E., John T. Ormerod, and M. P. Wand. 2014. “Mean Field Variational Bayes for Continuous Sparse Signal Shrinkage: Pitfalls and Remedies.” Electronic Journal of Statistics 8 (1): 1113–51.
Ngiam, Jiquan, Zhenghao Chen, Sonia A. Bhaskar, Pang W. Koh, and Andrew Y. Ng. 2011. “Sparse Filtering.” In Advances in Neural Information Processing Systems 24, edited by J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, 1125–33. Curran Associates, Inc.
Nickl, Richard, and Sara van de Geer. 2013. “Confidence Sets in Sparse Regression.” The Annals of Statistics 41 (6): 2852–76.
Oymak, S., A. Jalali, M. Fazel, and B. Hassibi. 2013. “Noisy Estimation of Simultaneously Structured Models: Limitations of Convex Relaxation.” In 2013 IEEE 52nd Annual Conference on Decision and Control (CDC).
Peleg, Tomer, Yonina C. Eldar, and Michael Elad. 2012. “Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery.” IEEE Transactions on Signal Processing 60 (5): 2286–2303.
Pouget-Abadie, Jean, and Thibaut Horel. 2015. “Inferring Graphs from Cascades: A Sparse Recovery Framework.” In Proceedings of the 32nd International Conference on Machine Learning.
Pourahmadi, Mohsen. 2011. “Covariance Estimation: The GLM and Regularization Perspectives.” Statistical Science 26 (3): 369–87.
Qian, Wei, and Yuhong Yang. 2012. “Model Selection via Standard Error Adjusted Adaptive Lasso.” Annals of the Institute of Statistical Mathematics 65 (2): 295–318.
Qin, Zhiwei, Katya Scheinberg, and Donald Goldfarb. 2013. “Efficient Block-Coordinate Descent Algorithms for the Group Lasso.” Mathematical Programming Computation 5 (2): 143–69.
Rahimi, Ali, and Benjamin Recht. 2009. “Weighted Sums of Random Kitchen Sinks: Replacing Minimization with Randomization in Learning.” In Advances in Neural Information Processing Systems, 1313–20. Curran Associates, Inc.
Ravikumar, Pradeep, Martin J. Wainwright, Garvesh Raskutti, and Bin Yu. 2011. “High-Dimensional Covariance Estimation by Minimizing ℓ1-Penalized Log-Determinant Divergence.” Electronic Journal of Statistics
Ravishankar, S., and Y. Bresler. 2015. “Sparsifying Transform Learning With Efficient Optimal Updates and Convergence Guarantees.” IEEE Transactions on Signal Processing 63 (9): 2389–2404.
Reynaud-Bouret, Patricia, and Sophie Schbath. 2010. “Adaptive Estimation for Hawkes Processes; Application to Genome Analysis.” The Annals of Statistics 38 (5): 2781–2822.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. KDD ’16. New York, NY, USA: ACM.
Rish, Irina, and Genady Grabarnik. 2014. “Sparse Signal Recovery with Exponential-Family Noise.” In Compressed Sensing & Sparse Filtering, edited by Avishy Y. Carmi, Lyudmila Mihaylova, and Simon J. Godsill, 77–93. Signals and Communication Technology. Springer Berlin Heidelberg.
Rish, Irina, and Genady Ya Grabarnik. 2015. Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series. Boca Raton, FL: CRC Press, Taylor & Francis Group.
Ročková, Veronika, and Edward I. George. 2018. “The Spike-and-Slab LASSO.” Journal of the American Statistical Association 113 (521): 431–44.
Sashank J. Reddi, Suvrit Sra, Barnabás Póczós, and Alex Smola. 2016. “Stochastic Frank-Wolfe Methods for Nonconvex Optimization.”
Schelldorfer, Jürg, Peter Bühlmann, and Sara van de Geer. 2011. “Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization.” Scandinavian Journal of Statistics 38 (2): 197–214.
Shen, Xiaotong, and Hsin-Cheng Huang. 2006. “Optimal Model Assessment, Selection, and Combination.” Journal of the American Statistical Association 101 (474): 554–68.
Shen, Xiaotong, Hsin-Cheng Huang, and Jimmy Ye. 2004. “Adaptive Model Selection and Assessment for Exponential Family Distributions.” Technometrics 46 (3): 306–17.
Shen, Xiaotong, and Jianming Ye. 2002. “Adaptive Model Selection.” Journal of the American Statistical Association 97 (457): 210–21.
Simon, Noah, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. “Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent.” Journal of Statistical Software
Smith, Virginia, Simone Forte, Michael I. Jordan, and Martin Jaggi. 2015. “L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework.” arXiv:1512.04011 [Cs]
Soh, Yong Sheng, and Venkat Chandrasekaran. 2017. “A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers.” arXiv:1701.01207 [Cs, Math, Stat]
Soltani, Mohammadreza, and Chinmay Hegde. 2016. “Demixing Sparse Signals from Nonlinear Observations.” Statistics
Starck, J. L., Michael Elad, and David L. Donoho. 2005. “Image Decomposition via the Combination of Sparse Representations and a Variational Approach.” IEEE Transactions on Image Processing 14 (10): 1570–82.
Stine, Robert A. 2004. “Discussion of ‘Least Angle Regression’ by Efron et al.” The Annals of Statistics 32 (2): 407–99.
Su, Weijie, Malgorzata Bogdan, and Emmanuel J. Candès. 2015. “False Discoveries Occur Early on the Lasso Path.” arXiv:1511.01957 [Cs, Math, Stat]
Taddy, Matt. 2013. “One-Step Estimator Paths for Concave Regularization.” arXiv:1308.5623 [Stat]
Tarr, Garth, Samuel Müller, and Alan H. Welsh. 2018. “mplot: An R Package for Graphical Model Stability and Variable Selection Procedures.” Journal of Statistical Software 83 (1): 1–28.
Thrampoulidis, Christos, Ehsan Abbasi, and Babak Hassibi. 2015. “LASSO with Non-Linear Measurements Is Equivalent to One With Linear Measurements.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 3402–10. Curran Associates, Inc.
Tibshirani, Robert. 1996. “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–88.
———. 2011. “Regression Shrinkage and Selection via the Lasso: A Retrospective.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73 (3): 273–82.
Tibshirani, Ryan J. 2014. “A General Framework for Fast Stagewise Algorithms.” arXiv:1408.5801 [Stat]
Trofimov, Ilya, and Alexander Genkin. 2015. “Distributed Coordinate Descent for L1-Regularized Logistic Regression.” In Analysis of Images, Social Networks and Texts, edited by Mikhail Yu Khachay, Natalia Konstantinova, Alexander Panchenko, Dmitry I. Ignatov, and Valeri G. Labunets, 243–54. Communications in Computer and Information Science 542. Springer International Publishing.
Tropp, J. A., and S. J. Wright. 2010. “Computational Methods for Sparse Solution of Linear Inverse Problems.” Proceedings of the IEEE 98 (6): 948–58.
Tschannen, Michael, and Helmut Bölcskei. 2016. “Noisy Subspace Clustering via Matching Pursuits.” arXiv:1612.03450 [Cs, Math, Stat]
Unser, Michael A., and Pouya Tafti. 2014. An Introduction to Sparse Stochastic Processes. New York: Cambridge University Press.
Unser, M., P. D. Tafti, A. Amini, and H. Kirshner. 2014. “A Unified Formulation of Gaussian Versus Sparse Stochastic Processes—Part II: Discrete-Domain Theory.” IEEE Transactions on Information Theory 60 (5): 3036–51.
Unser, M., P. D. Tafti, and Q. Sun. 2014. “A Unified Formulation of Gaussian Versus Sparse Stochastic Processes—Part I: Continuous-Domain Theory.” IEEE Transactions on Information Theory 60 (3): 1945–62.
Veitch, Victor, and Daniel M. Roy. 2015. “The Class of Random Graphs Arising from Exchangeable Random Measures.” arXiv:1512.03099 [Cs, Math, Stat]
Wahba, Grace. 1990. Spline Models for Observational Data. SIAM.
Wang, Hansheng, Guodong Li, and Guohua Jiang. 2007. “Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso.” Journal of Business & Economic Statistics 25 (3): 347–55.
Wang, L., M. D. Gordon, and J. Zhu. 2006. “Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning.” In Sixth International Conference on Data Mining (ICDM’06).
Wang, Zhangyang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui Shi, and Thomas S. Huang. 2016. “Stacked Approximated Regression Machine: A Simple Deep Learning Approach.”
Wisdom, Scott, Thomas Powers, James Pitton, and Les Atlas. 2016. “Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery.” In Advances in Neural Information Processing Systems 29.
Woodworth, Joseph, and Rick Chartrand. 2015. “Compressed Sensing Recovery via Nonconvex Shrinkage Penalties.” arXiv:1504.02923 [Cs, Math]
Wright, S. J., R. D. Nowak, and M. A. T. Figueiredo. 2009. “Sparse Reconstruction by Separable Approximation.” IEEE Transactions on Signal Processing 57 (7): 2479–93.
Wu, Tong Tong, and Kenneth Lange. 2008. “Coordinate Descent Algorithms for Lasso Penalized Regression.” The Annals of Applied Statistics 2 (1): 224–44.
Xu, H., C. Caramanis, and S. Mannor. 2010. “Robust Regression and Lasso.” IEEE Transactions on Information Theory 56 (7): 3561–74.
———. 2012. “Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem.” IEEE Transactions on Pattern Analysis and Machine Intelligence 34 (1): 187–93.
Yaghoobi, M., Sangnam Nam, R. Gribonval, and M. E. Davies. 2012. “Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Yoshida, Ryo, and Mike West. 2010. “Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.” Journal of Machine Learning Research 11 (May): 1771–98.
Yuan, Ming, and Yi Lin. 2006. “Model Selection and Estimation in Regression with Grouped Variables.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (1): 49–67.
Yun, Sangwoon, and Kim-Chuan Toh. 2009. “A Coordinate Gradient Descent Method for ℓ1-Regularized Convex Minimization.” Computational Optimization and Applications 48 (2): 273–307.
Zhang, Cun-Hui. 2010. “Nearly Unbiased Variable Selection Under Minimax Concave Penalty.” The Annals of Statistics 38 (2): 894–942.
Zhang, Cun-Hui, and Stephanie S. Zhang. 2014. “Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76 (1): 217–42.
Zhang, Yiyun, Runze Li, and Chih-Ling Tsai. 2010. “Regularization Parameter Selections via Generalized Information Criterion.” Journal of the American Statistical Association 105 (489): 312–23.
Zhao, Peng, and Bin Yu. 2006. “On Model Selection Consistency of Lasso.” Journal of Machine Learning Research 7 (Nov): 2541–63.
Zhao, Tuo, Han Liu, and Tong Zhang. 2018. “Pathwise Coordinate Optimization for Sparse Learning: Algorithm and Theory.” The Annals of Statistics 46 (1): 180–218.
Zhou, Tianyi, Dacheng Tao, and Xindong Wu. 2011. “Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction.” Data Mining and Knowledge Discovery 22 (3): 340–71.
Zou, Hui. 2006. “The Adaptive Lasso and Its Oracle Properties.” Journal of the American Statistical Association 101 (476): 1418–29.
Zou, Hui, and Trevor Hastie. 2005. “Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67 (2): 301–20.
Zou, Hui, Trevor Hastie, and Robert Tibshirani. 2007. “On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics 35 (5): 2173–92.
Zou, Hui, and Runze Li. 2008. “One-Step Sparse Estimates in Nonconcave Penalized Likelihood Models.” The Annals of Statistics 36 (4): 1509–33.