I wish, for a project of my own, to know about how to deconvolve with

- High dimensional data
- irregularly sampled data
- inhomogeneous (although known) convolution kernels

This is in a signal processing setting; for the (closely-related) kernel-density estimation in a statistical setting, see kernel approximation. If you don’t know your noise spectrum, see blind deconvolution.

## Vanilla deconvolution

Wiener filtering!
Deconvolving a signal convolved with a known kernel.
Say, reconstructing the pure sound of an instrument,
or the echo of the church itself,
from a recording made in a reverberant church.
It’s not purely acoustic, though; the same machinery applies to images, abstract wacky function spaces, etc.
The procedure is to presume your signal has been blurred by (or generally
convolved with) some filter, and then to find a new filter that undoes the
effects of the previous filter, or as close as possible to that, since not all
filters are invertible.
In the basic case, then, this *is* approximately the same thing as
filter inversion,
although there are some fiddly cases when the kernel is noninvertible,
or the inverse transform is unstable.
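A minimal numpy sketch of that instability and its regularized fix, under assumptions of my own (a toy 1-D signal, a Gaussian blur, and a hand-picked penalty constant standing in for the true noise-to-signal ratio of a proper Wiener filter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a smooth 1-D signal, blurred by a known Gaussian kernel, plus noise.
n = 256
t = np.arange(n) / n
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
kernel /= kernel.sum()
H = np.fft.fft(np.fft.ifftshift(kernel))          # transfer function of the blur
y = np.fft.ifft(np.fft.fft(x) * H).real + 0.01 * rng.standard_normal(n)

# Naive inversion divides by H, which explodes wherever |H| is tiny.
x_naive = np.fft.ifft(np.fft.fft(y) / H).real

# Wiener-style inversion damps those frequencies with a penalty lam
# (hand-picked here; in a true Wiener filter it would be the
# noise-to-signal power ratio at each frequency).
lam = 1e-3
x_wiener = np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + lam)).real

print(np.abs(x_naive - x).max())   # huge: noise amplified by 1/|H|
print(np.abs(x_wiener - x).max())  # modest reconstruction error
```

The Gaussian blur kills high frequencies almost completely, so the naive inverse multiplies the noise there by enormous factors; the penalty term rolls the inverse gain back to zero at exactly those frequencies.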

Linear versions are (apparently) straightforward Wiener filters (more-or-less generalized Kalman filters, although I think they are historically prior; 🏗 make this precise). Clearly you get deconvolution-like behaviour in state filters sometimes too. I should inspect the edges of these definitions to work out the precise intersection. Markovian case?

Non-linear “deconvolutions” are AFAIK not *strictly*
speaking deconvolutions, since convolution is a linear operation.
That usage is nonetheless common in the literature;
cf. the iterative Richardson-Lucy algorithm.
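A sketch of the Richardson-Lucy iteration, with details that are my own choices (a 1-D toy scene, Poisson photon-counting noise, and a symmetric Gaussian kernel so that the adjoint of the blur is the blur itself):

```python
import numpy as np

rng = np.random.default_rng(1)

def blur(a, H):
    """Circular convolution via the FFT; H is the kernel's transfer function."""
    return np.fft.irfft(np.fft.rfft(a) * H, n=a.size)

# Toy scene observed with Poisson (photon-counting) noise.
n = 128
truth = np.zeros(n)
truth[40:50] = 5.0
truth[80] = 20.0
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 9.0)
kernel /= kernel.sum()
H = np.fft.rfft(np.fft.ifftshift(kernel))
observed = rng.poisson(blur(truth, H).clip(min=1e-12)).astype(float)

# Richardson-Lucy: multiplicative updates that keep the estimate nonnegative
# and (under the Poisson model) never decrease the likelihood.
estimate = np.full(n, observed.mean())
for _ in range(50):
    ratio = observed / blur(estimate, H).clip(min=1e-12)
    estimate *= blur(ratio, H)   # symmetric kernel: adjoint = forward blur
```

The multiplicative form is why it counts as “non-linear”: the update depends on the current estimate, unlike a fixed inverse filter.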

See also compressive sensing, convolution kernels.

Tim Cornwell & Alan Bridle, 1996, Deconvolution Tutorial

A linear example in python skimage with a perfunctory explanation:

> …inverse filter based on the PSF (Point Spread Function), the prior regularisation (penalisation of high frequency) and the tradeoff between the data and prior adequacy.
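Something like the following, lightly adapted from the skimage example that explanation accompanies; the box PSF, noise level, and `balance` value are illustrative choices, with `balance` being the data-versus-prior tradeoff referred to above:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import color, data, restoration

rng = np.random.default_rng(0)

# Blur a test image with a known 5x5 box PSF and add a little noise.
astro = color.rgb2gray(data.astronaut())
psf = np.ones((5, 5)) / 25
blurred = convolve2d(astro, psf, mode="same")
blurred += 0.01 * blurred.std() * rng.standard_normal(blurred.shape)

# balance trades data fidelity against the high-frequency-penalising prior.
deconvolved = restoration.wiener(blurred, psf, balance=0.1)
```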

## Deconvolution method in statistics

Curses! I thought that this weird obvious idea was actually a new idea of mine. Turns out it’s old. “Density deconvolution” is a keyword here, and it’s reasonably common in hierarchical models.
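A toy density-deconvolution sketch, where every specific (the Gaussian latent, the known Gaussian noise, and the frequency cutoff that does the regularising) is my own assumption: observe \(Y = X + \varepsilon\) with the noise distribution known, divide the empirical characteristic function of \(Y\) by that of \(\varepsilon\), and invert the (truncated) Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observe Y = X + eps with eps ~ N(0, s^2) known; recover the density of X.
n = 5000
x_latent = rng.normal(2.0, 0.5, n)
s = 0.8
y = x_latent + rng.normal(0.0, s, n)

grid = np.linspace(-2.0, 6.0, 400)       # where to evaluate the density
t = np.linspace(-2.5, 2.5, 501)          # frequency cutoff = the regulariser
dt = t[1] - t[0]

ecf_y = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical CF of Y
cf_eps = np.exp(-0.5 * (s * t) ** 2)               # known CF of the noise
cf_x = ecf_y / cf_eps                              # deconvolve in Fourier space

# Inverse Fourier transform of the truncated CF gives the density estimate.
f_hat = (np.exp(-1j * np.outer(grid, t)) @ cf_x).real * dt / (2 * np.pi)
```

The cutoff matters because the noise CF decays fast, so dividing by it amplifies the sampling error in the empirical CF at high frequencies; truncating is the crudest of the regularisation schemes in that literature.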

