# Fast multipole methods

“Efficiently approximating fields made up of many decaying sources.”

Barnes–Hut algorithms, fast Gauss transforms, generalized multipole methods.

Not something I intend to worry about right now, but I needed to clear these refs out of my overcrowded Mercer kernel approximation notebook. Fast multipole methods can also approximate certain Mercer kernels, in the sense of rapidly and approximately evaluating the field strength induced by the kernel at given points, rather than approximating the kernels themselves with something simpler.
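To fix ideas, the workhorse problem is evaluating a field made up of many decaying sources, e.g. the discrete Gauss transform $$G(y_j) = \sum_i q_i \exp(-\|y_j - x_i\|^2/h^2)$$, at many target points; doing this directly costs $$O(NM)$$ for $$N$$ sources and $$M$$ targets. A minimal NumPy sketch of the direct (slow) evaluation that the fast methods approximate (the function name is illustrative, not from any particular library):

```python
import numpy as np

def gauss_transform_direct(sources, targets, weights, h):
    """Directly evaluate the discrete Gauss transform
    G(y_j) = sum_i q_i * exp(-||y_j - x_i||^2 / h^2),
    which costs O(N*M) for N sources and M targets."""
    # Pairwise squared distances, shape (M, N)
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h**2) @ weights

rng = np.random.default_rng(0)
X = rng.random((500, 2))   # source locations
Y = rng.random((200, 2))   # target locations
q = rng.random(500)        # source weights
G = gauss_transform_direct(X, Y, q, h=0.5)
```

Fast summation methods exploit the decay of the kernel to compress the far-field interactions, trading a user-chosen error tolerance for near-linear cost.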

How do these methods compare to, or relate to, hierarchical (H-) matrices?

An overview from Vikas Raykar’s thesis page, *Fast summation algorithms*:

> The art of getting ‘good enough’ solutions ‘as fast as possible’.
>
> Huge data sets containing millions of training examples with a large number of attributes (tall fat data) are relatively easy to gather. However, one of the bottlenecks for successful inference of useful information from the data is the computational complexity of machine learning algorithms. Most state-of-the-art nonparametric machine learning algorithms have a computational complexity of either $$O(N^2)$$ or $$O(N^3)$$, where $$N$$ is the number of training examples. This has seriously restricted the use of massive data sets. The bottleneck computational primitive at the heart of various algorithms is the multiplication of a structured matrix with a vector, which we refer to as the matrix-vector product (MVP) primitive. The goal of my thesis is to speed up these MVP primitives by fast approximate algorithms that scale as $$O(N)$$ and also provide high accuracy guarantees. I use ideas from computational physics, scientific computing, and computational geometry to design these algorithms. Currently the proposed algorithms have been applied in kernel density estimation, optimal bandwidth estimation, projection pursuit, Gaussian process regression, implicit surface fitting, and ranking.
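The “MVP primitive” here is exactly the kernel summation problem, and the fast Gauss transform (Greengard and Strain 1991) is the classic way to reduce it from $$O(NM)$$ to roughly $$O(p(N+M))$$: accumulate $$p$$ expansion moments from the sources, then evaluate a truncated Hermite series at the targets. A hedged 1D, single-expansion-centre sketch — real implementations split the sources into spatial clusters and choose the truncation order adaptively:

```python
import numpy as np

def fgt_single_center(x, y, q, h, p=12):
    """One-cluster Hermite expansion of the 1D Gauss transform.
    Accumulate p moments from the sources (O(p*N)), then evaluate
    the truncated series at the targets (O(p*M)). Accurate when all
    sources lie close to the expansion centre c, in units of h."""
    c = x.mean()
    t = (x - c) / h
    # Moments A_n = sum_i q_i * t_i^n / n!
    A = np.empty(p)
    term = q.astype(float).copy()
    for n in range(p):
        A[n] = term.sum()
        term = term * t / (n + 1)
    # Hermite functions h_n(s) = H_n(s) * exp(-s^2), built with the
    # recurrence h_{n+1}(s) = 2 s h_n(s) - 2 n h_{n-1}(s)
    s = (y - c) / h
    h_prev = np.exp(-s**2)       # h_0
    h_curr = 2 * s * h_prev      # h_1
    G = A[0] * h_prev
    if p > 1:
        G += A[1] * h_curr
    for n in range(1, p - 1):
        h_next = 2 * s * h_curr - 2 * n * h_prev
        G += A[n + 1] * h_next
        h_prev, h_curr = h_curr, h_next
    return G
```

With sources clustered within a fraction of a bandwidth of the centre, a modest order (p around 10–16) already matches direct summation to near machine precision; the truncation error grows as the cluster radius approaches h, which is why production codes subdivide space.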

## Implementation

figtree (C++, with MATLAB and Python bindings) implements the improved fast Gauss transform, i.e. it approximately evaluates sums of Gaussians to a user-specified error tolerance.

## References

Barnes, Josh, and Piet Hut. 1986. Nature 324 (6096): 446–49.
Board, John, and Klaus Schulten. 2000. Computing in Science & Engineering 2 (1): 76–79.
Dongarra, Jack, and Francis Sullivan. 2000. Computing in Science & Engineering 2 (1): 22–23.
Greengard, L., and J. Strain. 1991. SIAM Journal on Scientific and Statistical Computing 12 (1): 79–94.
Lange, Henning, and J. Nathan Kutz. 2021. arXiv:2111.00110 [Cs], November.
Raykar, Vikas C., and Ramani Duraiswami. 2005.
Rokhlin, V. 1985. Journal of Computational Physics 60 (2): 187–207.
Schwab, Christoph, and Radu Alexandru Todor. 2006. Journal of Computational Physics, Uncertainty Quantification in Simulation Science, 217 (1): 100–122.
Simoncini, V., and D. Szyld. 2003. SIAM Journal on Scientific Computing 25 (2): 454–77.
Vorst, H.A. van der. 2000. Computing in Science & Engineering 2 (1): 32–37.
Yang, Changjiang, Ramani Duraiswami, and Larry S. Davis. 2004. In Advances in Neural Information Processing Systems, 1561–68.
Yang, Changjiang, Ramani Duraiswami, Nail A. Gumerov, and Larry Davis. 2003. In Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2, 464–64. ICCV ’03. Washington, DC, USA: IEEE Computer Society.
