An interesting Bayesian functional regression trick based on the so-called q-exponential process. NB, the abstract below is from Li, O'Connor, and Lan (2023):
Regularization is one of the most fundamental topics in optimization, statistics and machine learning. To get sparsity in estimating a parameter $u\in\mathbb{R}^d$, an $\ell_q$ penalty term, $\Vert u\Vert_q$, is usually added to the objective function. What is the probabilistic distribution corresponding to such an $\ell_q$ penalty? What is the correct stochastic process corresponding to $\Vert u\Vert_q$ when we model functions $u\in L^q$? This is important for statistically modeling high-dimensional objects such as images, with a penalty to preserve certain properties, e.g. edges in the image. In this work, we generalize the $q$-exponential distribution (with density proportional to $\exp(-\tfrac{1}{2}|u|^q)$) to a stochastic process named the $Q$-exponential (Q-EP) process that corresponds to the $L_q$ regularization of functions. The key step is to specify consistent multivariate $q$-exponential distributions by choosing from a large family of elliptic contour distributions. The work is closely related to the Besov process, which is usually defined in terms of series. Q-EP can be regarded as a definition of the Besov process with an explicit probabilistic formulation, direct control on the correlation strength, and a tractable prediction formula. From the Bayesian perspective, Q-EP provides a flexible prior on functions with a sharper penalty than the commonly used Gaussian process (GP). We compare GP, Besov and Q-EP in modeling functional data, reconstructing images and solving inverse problems, and demonstrate the advantage of our proposed methodology.
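For intuition, here is a minimal sketch of drawing from a multivariate $q$-exponential distribution, assuming the elliptic-contour representation $u = \mu + R\,L\,S$, with $S$ uniform on the unit sphere, $L$ the Cholesky factor of the covariance, and the radial part satisfying $R^q \sim \chi^2_d$ (which recovers the Gaussian case at $q=2$). The kernel and grid below are my own illustrative choices, not the authors' code.

```python
import numpy as np

def sample_qep(q, K, n_samples=1, rng=None):
    """Draw from a multivariate q-exponential distribution q-ED(0, K),
    assuming the elliptic-contour representation u = R * (L @ S):
    S uniform on the unit sphere, R^q ~ chi-squared(d).
    For q = 2 this reduces to the Gaussian N(0, K)."""
    rng = np.random.default_rng(rng)
    d = K.shape[0]
    L = np.linalg.cholesky(K + 1e-10 * np.eye(d))        # jitter for stability
    z = rng.standard_normal((n_samples, d))
    S = z / np.linalg.norm(z, axis=1, keepdims=True)     # uniform direction on sphere
    R = rng.chisquare(d, size=(n_samples, 1)) ** (1.0 / q)  # assumed radial law
    return R * S @ L.T

# Illustrative use on a 1D grid with an exponential (Matern-1/2) kernel:
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
u_gp  = sample_qep(2.0, K)  # q = 2: an ordinary Gaussian-process path
u_qep = sample_qep(1.0, K)  # q = 1: the sharper, L1-type penalty on u
```

If I read the construction right, only the scalar radial law changes with $q$, while the correlation structure stays in $K$; taking $q$ below 2 is what produces the sharper, edge-preserving penalty the abstract advertises over the GP prior.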