Function approximation and interpolation



On constructing an approximation of some arbitrary function, and measuring the badness thereof.

THIS IS CHAOS RIGHT NOW. I need to break out the sampling/interpolation problem for regular data, for one thing.

Choosing the best approximation

In what sense best? Most compact? Easiest to code?

If we are not interpolating, how much smoothing do we do?

We can use cross-validation, especially so-called β€œgeneralized” cross-validation, to choose the smoothing parameter efficiently, in some sense.
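
For instance, a sketch of plain k-fold cross-validation (not GCV proper) to pick the smoothing factor of a scipy smoothing spline on synthetic data; the candidate grid, fold scheme and rescaling heuristic here are all arbitrary choices:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

def cv_score(s, k=5):
    """Mean squared held-out error of a smoothing spline with factor s."""
    errs = []
    for fold in range(k):
        test = np.arange(fold, x.size, k)
        train = np.setdiff1d(np.arange(x.size), test)
        # rescale the residual budget s to the training-set size (a heuristic)
        spl = UnivariateSpline(x[train], y[train], s=s * train.size / x.size)
        errs.append(np.mean((spl(x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

candidates = [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]
best = min(candidates, key=cv_score)
```

GCV proper replaces the held-out loop with a closed-form trace correction, which is cheaper when the smoother is linear in the observations.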

Or you might have noisy data, in which case you have both a function approximation problem and an inference problem, with error due to both the approximation and the sampling noise. Compressive sensing has some finite-sample guarantees.

To discuss: loss functions.

An interesting problem is how you align the curves that are your objects of study; that is a problem of warping.

Polynomial spline smoothing of observations

The classic, and not just for functional data, but filed here because that’s where the action is now.

Special superpowers: Easy to differentiate and integrate.

Special weakness: many free parameters, and hard to scale to high-dimensional input.
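
To illustrate the superpower: a scipy spline object hands back its derivative as another spline, and integrates itself in closed form, since everything is piecewise polynomial:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, np.pi, 200)
spl = UnivariateSpline(x, np.sin(x), s=0)  # s=0: interpolate exactly

dspl = spl.derivative()        # another spline, evaluable anywhere
area = spl.integral(0, np.pi)  # closed-form integral of the piecewise polynomial

# cos(0) = 1 and the integral of sin over [0, pi] is 2, up to interpolation error
```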

Polynomial bases

See polynomial bases.
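
A quick taste, pending those notes: least-squares fitting of a truncated Chebyshev series with numpy. For smooth targets the coefficients decay rapidly, so a low degree already does well:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1, 1, 200)
coef = C.chebfit(x, np.exp(x), deg=10)  # least-squares fit in the Chebyshev basis
approx = C.chebval(0.3, coef)

# for an analytic target the coefficients decay roughly geometrically
```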

Fourier bases

πŸ—οΈ

Radial basis function approximation

I actually care about this mostly for densities, so see mixture models for what information I do have.
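
That said, here is a minimal sketch of scattered-data interpolation with scipy's RBFInterpolator (thin-plate-spline kernel) on a synthetic 2-d surface:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))       # scattered 2-d sample sites
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])  # smooth surface, sampled

rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline")
est = rbf(np.array([[0.2, -0.3]]))[0]         # evaluate anywhere, no grid needed
```

No regular grid required, which is the main selling point over tensor-product splines.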

Rational approximation

Padé approximation is the method I’ve heard of. Are there others? Easy to differentiate; OK to integrate if you cheat using a computer algebra package.
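
scipy will in fact construct a Padé approximant from Taylor coefficients; a sketch for exp, where the [3/3] approximant is already accurate near the expansion point:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp around 0, up to degree 6
an = [1 / factorial(n) for n in range(7)]
p, q = pade(an, 3)          # [3/3] Padé approximant as a pair of poly1d

approx = p(1.0) / q(1.0)    # rational approximation to e
```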

