Function approximation and interpolation

On constructing an approximation of some arbitrary function, and measuring the badness thereof.

THIS IS CHAOS RIGHT NOW. I need to break out the sampling/interpolation problem for regular data, for one thing.

Choosing the best approximation

In what sense? Most compact? Easiest to code?

If we are not interpolating, how much smoothing do we do?

We can use cross-validation, especially so-called “generalized” cross-validation, to choose this smoothing parameter efficiently, in some sense.
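As a sketch of the GCV idea: for a linear smoother with hat matrix S = X(XᵀX + λI)⁻¹Xᵀ, the GCV score is n·RSS/(n − tr S)², and we pick the λ that minimizes it. Here a ridge-penalized polynomial basis stands in for a spline basis; this is illustrative, not any particular library's smoother:

```python
import numpy as np

def gcv_score(X, y, lam):
    """Generalized cross-validation score for the ridge smoother y ≈ S y."""
    n = len(y)
    # Hat matrix S maps observations to fitted values: S = X (X'X + lam I)^-1 X'
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = y - S @ y
    edf = np.trace(S)  # effective degrees of freedom of the smoother
    return n * (resid @ resid) / (n - edf) ** 2

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)
X = np.vander(x, 10)  # degree-9 polynomial basis, deliberately over-flexible

lams = 10.0 ** np.arange(-8, 2)
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
```

The point is that GCV needs only the trace of the hat matrix, not refitting on held-out folds, which is why it is "efficient, in some sense".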

Or you might have noisy data, in which case you now have a function approximation and inference problem, with error due to both approximation and sampling complexity. Compressive sensing has some finite-sample guarantees.

To discuss: loss functions.

An interesting problem here is how you align the curves that are your objects of study; that is a problem of warping.

Polynomial spline smoothing of observations

The classic, and not just for functional data, but filed here because that’s where the action is now.

Special superpowers: Easy to differentiate and integrate.

Special weakness: many free parameters, not easy to do in high dimension.
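The "easy to differentiate and integrate" superpower is directly exposed in, e.g., scipy's spline classes, since a spline's derivative and antiderivative are again splines with coefficients obtainable in closed form. A minimal sketch:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, np.pi, 50)
y = np.sin(x)

spl = UnivariateSpline(x, y, k=3, s=0)  # s=0 gives an interpolating cubic spline
dspl = spl.derivative()                 # derivative is itself a spline: ≈ cos(x)
area = spl.integral(0, np.pi)           # exact integral of the spline: ≈ 2
```

Both operations are exact on the fitted spline, so the only error is the spline's approximation error to the underlying function.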

Polynomial bases

See polynomial bases.

Fourier bases


Radial basis function approximation

I actually care about this mostly for densities, so see mixture models for what information I do have.
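For scattered-data function approximation (as opposed to densities), a minimal sketch using scipy's RBF interpolator with a thin-plate-spline kernel; the target function here is just an illustrative Gaussian bump:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(200, 2))        # scattered 2-D sample sites
vals = np.exp(-4 * (pts ** 2).sum(axis=1))     # target: smooth Gaussian bump

# smoothing=0.0 interpolates exactly; increase it to smooth noisy data instead
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline", smoothing=0.0)

approx = rbf(np.array([[0.0, 0.0]]))[0]        # true value at the origin is 1.0
```

Note that RBF methods scale more gracefully with input dimension than tensor-product splines, which is why they are a common escape hatch from the high-dimension weakness noted above.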

Rational approximation

Padé approximation is the method I’ve heard of. Are there others? Easy to differentiate. OK to integrate if you cheat using a computational algebra package.
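A quick sketch of the Padé method: scipy can build a rational approximant from Taylor coefficients, and differentiation is then just the quotient rule on two polynomials. Here the [2/2] approximant to exp(x):

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0, in ascending order
an = [1 / math.factorial(k) for k in range(5)]
p, q = pade(an, 2)   # [2/2] Padé approximant; p, q are numpy.poly1d

approx = p(1.0) / q(1.0)        # ≈ e, despite only 5 Taylor terms

# "Easy to differentiate": quotient rule on the two polynomials
dp, dq = p.deriv(), q.deriv()
dapprox = (dp(1.0) * q(1.0) - p(1.0) * dq(1.0)) / q(1.0) ** 2  # ≈ e as well
```

Integration is the hard direction, since the antiderivative of a rational function involves logarithms and arctangents of the denominator's factors; hence the computational-algebra cheat.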

References

Anil, Cem, James Lucas, and Roger Grosse. 2018. “Sorting Out Lipschitz Function Approximation,” November.
Barron, A. 1989. “Minimum Complexity Estimation.” In, 5_7–7. IEEE.
Barron, A.R. 1993. “Universal Approximation Bounds for Superpositions of a Sigmoidal Function.” IEEE Transactions on Information Theory 39 (3): 930–45.
Barron, Andrew R., Albert Cohen, Wolfgang Dahmen, and Ronald A. DeVore. 2008. “Approximation and Learning by Greedy Algorithms.” The Annals of Statistics 36 (1): 64–94.
Berinde, Radu, and Piotr Indyk. 2009. “Sequential Sparse Matching Pursuit.” In 2009 47th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2009, 36–43.
Beylkin, Gregory, and Lucas Monzón. 2010. “Approximation by Exponential Sums Revisited.” Applied and Computational Harmonic Analysis 28 (2): 131–49.
Blumensath, Thomas, and Mike E. Davies. 2008. “Gradient Pursuit for Non-Linear Sparse Signal Modelling.” In Signal Processing Conference, 2008 16th European, 1–5. IEEE.
Boyd, John P. 2001. Chebyshev & Fourier Spectral Methods. Second Edition, Revised edition. Lecture Notes in Engineering. Berlin Heidelberg: Springer-Verlag.
Boyd, Nicholas, Trevor Hastie, Stephen Boyd, Benjamin Recht, and Michael Jordan. 2016. “Saturating Splines and Feature Selection.” arXiv:1609.06764 [stat], September.
Bölcskei, Helmut, Philipp Grohs, Gitta Kutyniok, and Philipp Petersen. 2019. “Optimal Approximation with Sparsely Connected Deep Neural Networks.” SIAM Journal on Mathematics of Data Science, February.
Carlson, R. E., and F. N. Fritsch. 1985. “Monotone Piecewise Bicubic Interpolation.” SIAM Journal on Numerical Analysis 22 (2): 386–400.
Cheney, Elliott Ward, and William Allan Light. 2009. A Course in Approximation Theory. American Mathematical Soc.
Cybenko, G. 1989. “Approximation by Superpositions of a Sigmoidal Function.” Mathematics of Control, Signals and Systems 2: 303–14.
Daniely, Amit. 2017. “Depth Separation for Neural Networks.” arXiv:1702.08489 [cs, Stat], February.
Davis, G., S. Mallat, and M. Avellaneda. 1997. “Adaptive Greedy Approximations.” Constructive Approximation 13 (1): 57–98.
Davis, Geoffrey M. 1998. “A Wavelet-Based Analysis of Fractal Image Compression.” IEEE Transactions on Image Processing 7 (2): 141–54.
Dierckx, Paul. 1996. Curve and Surface Fitting with Splines. Oxford: Clarendon Press.
Dougherty, Randall L., Alan S. Edelman, and James M. Hyman. 1989. “Nonnegativity-, Monotonicity-, or Convexity-Preserving Cubic and Quintic Hermite Interpolation.” Mathematics of Computation 52 (186): 471–94.
Ekanadham, C., D. Tranchina, and E. P. Simoncelli. 2011. “Recovery of Sparse Translation-Invariant Signals With Continuous Basis Pursuit.” IEEE Transactions on Signal Processing 59 (10): 4735–44.
Fomel, Sergey. 2000. “Inverse B-Spline Interpolation.” Citeseer.
Fritsch, F., and R. Carlson. 1980. “Monotone Piecewise Cubic Interpolation.” SIAM Journal on Numerical Analysis 17 (2): 238–46.
Goodwin, M M. 2001. “Multiscale Overlap-Add Sinusoidal Modeling Using Matching Pursuit and Refinements.” In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics.
Goodwin, M M, and M Vetterli. 1999. “Matching Pursuit and Atomic Signal Models Based on Recursive Filter Banks.” IEEE Transactions on Signal Processing 47 (7): 1890–1902.
Goodwin, M., and M. Vetterli. 1997. “Atomic Decompositions of Audio Signals.” In 1997 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, 1997.
Goodwin, Michael. 1997. “Matching Pursuit with Damped Sinusoids.” In 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, 3:2037–40. Munich, Germany: IEEE.
Graps, A. 1995. “An Introduction to Wavelets.” IEEE Computational Science Engineering 2 (2): 50–61.
Grohs, Philipp, Dmytro Perekrestenko, Dennis Elbrächter, and Helmut Bölcskei. 2019. “Deep Neural Network Approximation Theory.” arXiv:1901.02220 [cs, Math, Stat], January.
Higham, D. J. 1992. “Monotonic Piecewise Cubic Interpolation, with Applications to ODE Plotting.” Journal of Computational and Applied Mathematics 39 (3): 287–94.
Hornik, Kurt, Maxwell Stinchcombe, and Halbert White. 1989. “Multilayer Feedforward Networks Are Universal Approximators.” Neural Networks 2 (5): 359–66.
Hou, H.S., and H. Andrews. 1978. “Cubic Splines for Image Interpolation and Digital Filtering.” IEEE Transactions on Acoustics, Speech and Signal Processing 26 (6): 508–17.
Huggins, P S, and S W Zucker. 2007. “Greedy Basis Pursuit.” IEEE Transactions on Signal Processing 55 (7): 3760–72.
Knudson, Karin C, Jacob Yates, Alexander Huk, and Jonathan W Pillow. 2014. “Inferring Sparse Representations of Continuous Signals with Continuous Orthogonal Matching Pursuit.” In Advances in Neural Information Processing Systems 27, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 27:1215–23. Curran Associates, Inc.
Koehler, Frederic, Viraj Mehta, and Andrej Risteski. 2020. “Representational Aspects of Depth and Conditioning in Normalizing Flows.” arXiv:2010.01155 [cs, Stat], October.
Koppel, Alec, Garrett Warnell, Ethan Stump, and Alejandro Ribeiro. 2016. “Parsimonious Online Learning with Kernels via Sparse Projections in Function Space.” arXiv:1612.04111 [cs, Stat], December.
Kronland-Martinet, R., Ph. Guillemain, and S. Ystad. 1997. “Modelling of Natural Sounds by Time–Frequency and Wavelet Representations.” Organised Sound 2 (03): 179–91.
Lee, Holden, Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora. 2017. “On the Ability of Neural Nets to Express Distributions.” In arXiv:1702.07028 [cs].
Lee, Wee Sun, Peter L. Bartlett, and Robert C. Williamson. 1996. “Efficient Agnostic Learning of Neural Networks with Bounded Fan-in.” IEEE Transactions on Information Theory 42 (6): 2118–32.
Orr, Mark JL. 1996. “Introduction to Radial Basis Function Networks.” Technical Report, Center for Cognitive Science, University of Edinburgh.
Pati, Y. C., R. Rezaiifar, and P. S. Krishnaprasad. 1993. “Orthogonal Matching Pursuit: Recursive Function Approximation with Applications to Wavelet Decomposition.” In Conference Record of The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, 40–44 vol.1.
Pinkus, Allan. 1999. “Approximation Theory of the MLP Model in Neural Networks.” Acta Numerica 8 (January): 143–95.
Poggio, T., and F. Girosi. 1990. “Networks for Approximation and Learning.” Proceedings of the IEEE 78 (9): 1481–97.
Poole, Ben, Subhaneil Lahiri, Maithreyi Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. 2016. “Exponential Expressivity in Deep Neural Networks Through Transient Chaos.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3360–68. Curran Associates, Inc.
Ramsay, J. O. 1988. “Monotone Regression Splines in Action.” Statistical Science 3 (4): 425–41.
Rioul, O., and M. Vetterli. 1991. “Wavelets and Signal Processing.” IEEE Signal Processing Magazine 8 (4): 14–38.
Rolnick, David, and Max Tegmark. 2017. “The Power of Deeper Networks for Expressing Natural Functions.” arXiv:1705.05502 [cs, Stat], May.
Rubinstein, Ron, A.M. Bruckstein, and Michael Elad. 2010. “Dictionaries for Sparse Representation Modeling.” Proceedings of the IEEE 98 (6): 1045–57.
Song, Le, Santosh Vempala, John Wilmes, and Bo Xie. 2017. “On the Complexity of Learning Neural Networks.” arXiv:1707.04615 [cs], July.
Telgarsky, Matus. 2016. “Benefits of Depth in Neural Networks.” In arXiv:1602.04485 [cs, Stat].
———. 2017. “Neural Networks and Rational Functions.” In PMLR, 3387–93.
Torrence, Christopher, and Gilbert P Compo. 1998. “A Practical Guide to Wavelet Analysis.” Bulletin of the American Meteorological Society 79 (1): 61–78.
Unser, M., A. Aldroubi, and M. Eden. 1993a. “B-Spline Signal Processing. I. Theory.” IEEE Transactions on Signal Processing 41 (2): 821–33.
———. 1993b. “B-Spline Signal Processing. II. Efficiency Design and Applications.” IEEE Transactions on Signal Processing 41 (2): 834–48.
Unser, Michael, Akram Aldroubi, and Murray Eden. 1991. “Fast B-Spline Transforms for Continuous Image Representation and Interpolation.” IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (3): 277–85.
Vetterli, Martin. 1999. “Wavelets: Approximation and Compression–a Review.” In AeroSense’99, 3723:28–31. International Society for Optics and Photonics.
Wang, Yu-Xiang, Alex Smola, and Ryan J. Tibshirani. 2014. “The Falling Factorial Basis and Its Statistical Applications.” In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, 730–38. ICML’14. Beijing, China.
Weidmann, Claudio, and Martin Vetterli. 2012. “Rate Distortion Behavior of Sparse Sources.” IEEE Transactions on Information Theory 58 (8): 4969–92.
Weinert, H. L., and T. Kailath. 1974. “Minimum Energy Control Using Spline Functions.” In 1974 IEEE Conference on Decision and Control Including the 13th Symposium on Adaptive Processes, 169–72.
Wiatowski, Thomas, and Helmut Bölcskei. 2015. “A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction.” In Proceedings of IEEE International Symposium on Information Theory.
Wood, S. 1994. “Monotonic Smoothing Splines Fitted by Cross Validation.” SIAM Journal on Scientific Computing 15 (5): 1126–33.
Zeevi, Assaf J., and Ronny Meir. 1997. “Density Estimation Through Convex Combinations of Densities: Approximation and Estimation Bounds.” Neural Networks: The Official Journal of the International Neural Network Society 10 (1): 99–109.
