# Neural process regression

Jha et al. (2022):

The uncertainty-aware Neural Process Family (NPF) aims to address the aforementioned limitations of the Bayesian paradigm by exploiting the function-approximation capabilities of deep neural networks to learn families of real-world data-generating processes, i.e., stochastic processes, much as Gaussian processes (GPs) do. Neural processes (NPs) define uncertainties in predictions as a conditional distribution over functions given the context (observations) $$C$$ drawn from a distribution of functions. Here, each function $$f$$ is parameterized by a neural network and can be thought of as capturing an underlying data-generating stochastic process.
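Concretely, in the latent-variable formulation the distribution over functions is tied to a global latent variable $$z$$ inferred from the context; writing the targets as $$(x_T, y_T)$$ (the notation here is mine, but the factorization is the standard one), the predictive distribution is

$$p(y_T \mid x_T, C) = \int p(y_T \mid x_T, z)\, q(z \mid C)\, dz,$$

where $$q(z \mid C)$$ is the encoder's approximate posterior over $$z$$ given the context and $$p(y_T \mid x_T, z)$$ is the decoder's likelihood. The spread of $$q(z \mid C)$$ is what carries the functional uncertainty into the predictions.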

To model the variability of $$f$$ based on the variability of the generated data, NPs concurrently train and test their learned parameters on multiple datasets. This endows them with the capability to meta-learn their predictive distributions over functions. The meta-learning setup fundamentally distinguishes NPs from other uncertainty-aware learning frameworks such as stochastic GPs.

NPF members thus combine the best of meta-learners, GPs, and neural networks. Like GPs, NPs learn a distribution over functions, adapt quickly to new observations, and provide uncertainty measures for test-time observations. Like neural networks, NPs learn to approximate functions directly from data while remaining efficient at inference. To learn $$f$$, NPs adopt an encoder-decoder architecture: a functional encoding of each observation point, followed by a decoder whose parameters are capable of unraveling the unobserved function realizations to approximate the outputs of $$f$$.

Despite their resemblance to NPs, vanilla encoder-decoder networks traditionally based on CNNs, RNNs, and Transformers operate merely on pointwise inputs and lack the incentive to meta-learn representations for dynamically changing functions (imagine $$f$$ changing over a continuum such as time) and their families. NPF members not only improve upon these architectures to model functional input spaces and provide uncertainty-aware estimates, but also offer natural benefits for a number of challenging real-world tasks. Our study brings to light the potential of NPF models for several such tasks, including but not limited to handling missing data, handling off-the-grid data, supporting continual and active learning out of the box, and superior interpretability, all while leveraging a diverse range of task-specific inductive biases.
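The encoder-decoder recipe above can be sketched in a few lines. This is a minimal, untrained CNP-style deterministic path using NumPy (random weights stand in for learned parameters; the function names `mlp`, `forward`, and `predict` are my own, not from the survey): each context pair is embedded, the embeddings are mean-pooled into a permutation-invariant representation, and the decoder maps that representation plus a target input to a predictive mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random (untrained) MLP weights; a stand-in for learned parameters."""
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Encoder: embed each (x, y) context pair, then mean-pool into a single
# permutation-invariant representation r (the CNP-style deterministic path).
enc = mlp([2, 32, 16])
# Decoder: map (r, x_target) to a predictive mean and log-variance.
dec = mlp([16 + 1, 32, 2])

def predict(x_ctx, y_ctx, x_tgt):
    r = forward(enc, np.concatenate([x_ctx, y_ctx], axis=-1)).mean(axis=0)
    h = forward(dec, np.concatenate([np.tile(r, (len(x_tgt), 1)), x_tgt], axis=-1))
    mu, log_var = h[:, :1], h[:, 1:]
    return mu, np.exp(log_var)  # per-target predictive mean and variance

x_c = rng.uniform(-2, 2, (5, 1)); y_c = np.sin(x_c)   # toy context set
x_t = np.linspace(-2, 2, 20)[:, None]                  # target inputs
mu, var = predict(x_c, y_c, x_t)
```

Because the context is pooled by a mean, the prediction is invariant to the ordering of the observations; training would fit the MLP weights across many such datasets, which is where the meta-learning enters.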

## References

Garnelo, Marta, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, and S. M. Ali Eslami. 2018. "Conditional Neural Processes." arXiv:1807.01613 [cs, stat], July.
Garnelo, Marta, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. 2018. "Neural Processes." arXiv:1807.01622 [cs, stat], July.
Jha, Saurav, Dong Gong, Xuesong Wang, Richard E. Turner, and Lina Yao. 2022. "The Neural Process Family: Survey, Applications and Perspectives." arXiv.
Louizos, Christos, Xiahan Shi, Klamer Schutte, and Max Welling. 2019. "The Functional Neural Process." arXiv:1906.08324 [cs, stat], June.
Rasmussen, Carl Edward, and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. Cambridge, Mass: MIT Press.
Singh, Gautam, Jaesik Yoon, Youngsung Son, and Sungjin Ahn. 2019. "Sequential Neural Processes." arXiv:1906.10264 [cs, stat], June.
