Neural process regression

Gaussian process regression, but with neural nets approximating the kernel function (?) and some other tricky bits, I think. This has been pitched to me specifically as a meta-learning technique.

The trick is that we learn encoder and decoder networks such that they recover interesting/useful distributions over random functions.
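A rough sketch of that encoder/decoder data flow, using untrained random-weight networks as stand-ins for learned ones (everything here is hypothetical and for shape-checking only, in the style of a conditional neural process):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP: a toy stand-in for a trained network."""
    Ws = [rng.normal(0, 1 / np.sqrt(m), (m, n))
          for m, n in zip(sizes[:-1], sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

# Hypothetical encoder/decoder; in a real CNP these are trained jointly.
encoder = mlp([2, 32, 16])   # each (x_c, y_c) pair -> representation
decoder = mlp([17, 32, 2])   # (representation, x_t) -> (mean, log-variance)

def cnp_predict(x_context, y_context, x_target):
    pairs = np.stack([x_context, y_context], axis=-1)   # (n_ctx, 2)
    r = encoder(pairs).mean(axis=0)                     # aggregate over context
    inp = np.concatenate([np.tile(r, (len(x_target), 1)),
                          x_target[:, None]], axis=-1)  # (n_tgt, 17)
    out = decoder(inp)
    mean, log_var = out[:, 0], out[:, 1]
    return mean, np.exp(log_var)   # predictive mean and variance per target

x_c = np.array([-1.0, 0.0, 1.0])
y_c = np.sin(x_c)
mean, var = cnp_predict(x_c, y_c, np.linspace(-2, 2, 5))
```

Because the context is summarized by a single aggregated representation before decoding, the predictive distribution conditions on the whole observed set at once.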

Jha et al. (2022):

The uncertainty-aware Neural Process Family (NPF) (Garnelo, Rosenbaum, et al. 2018; Garnelo, Schwarz, et al. 2018) aims to address the aforementioned limitations of the Bayesian paradigm by exploiting the function approximation capabilities of deep neural networks to learn a family of real-world data-generating processes, a.k.a., stochastic Gaussian processes (GPs) (Rasmussen and Williams 2006). Neural processes (NPs) define uncertainties in predictions in terms of a conditional distribution over functions given the context (observations) \(C\) drawn from a distribution of functions. Here, each function \(f\) is parameterized using neural networks and can be thought of capturing an underlying data generating stochastic process.

To model the variability of \(f\) based on the variability of the generated data, NPs concurrently train and test their learned parameters on multiple datasets. This endows them with the capability to meta learn their predictive distributions over functions. The meta-learning setup makes NPs fundamentally distinguished from other non-Bayesian uncertainty-aware learning frameworks like stochastic GPs. NPF members thus combine the best of meta learners, GPs and neural networks. Like GPs, NPs learn a distribution of functions, quickly adapt to new observations, and provide uncertainty measures given test time observations. Like neural networks, NPs learn function approximation from data directly besides being efficient at inference. To learn \(f\), NPs incorporate the encoder-decoder architecture that comprises a functional encoding of each observation point followed by the learning of a decoder function whose parameters are capable of unraveling the unobserved function realizations to approximate the outputs of \(f\)…. Despite their resemblance to NPs, the vanilla encoder-decoder networks traditionally based on CNNs, RNNs, and Transformers operate merely on pointwise inputs and clearly lack the incentive to meta learn representations for dynamically changing functions (imagine \(f\) changing over a continuum such as time) and their families. The NPF members not only improve upon these architectures to model functional input spaces and provide uncertainty-aware estimates but also offer natural benefits to a number of challenging real-world tasks. Our study brings into light the potential of NPF models for several such tasks including but not limited to the handling of missing data, handling off-the-grid data, allowing continual and active learning out-of-the-box, superior interpretation capabilities all the while leveraging a diverse range of task-specific inductive biases.
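The meta-learning setup the quote describes, training concurrently on many sampled datasets, can be sketched as a per-task Gaussian negative log-likelihood over a random context/target split. The task family (random-phase sinusoids) and the placeholder predictor below are invented for illustration; a real NP would backpropagate this loss through its encoder and decoder:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_nll(y, mean, var):
    """Per-point negative log-likelihood of a Gaussian predictive distribution."""
    return 0.5 * (np.log(2 * np.pi * var) + (y - mean) ** 2 / var)

def sample_task(n=20):
    # Hypothetical task distribution: random-phase sinusoids.
    phase = rng.uniform(0, 2 * np.pi)
    x = rng.uniform(-2, 2, n)
    return x, np.sin(x + phase)

def baseline_predict(x_c, y_c, x_t):
    # Placeholder for an NP's encoder/decoder: predicts the context mean
    # with unit variance, just to make the loop runnable.
    return np.full(len(x_t), y_c.mean()), np.ones(len(x_t))

def meta_step(predict_fn):
    # One meta-training step: fresh function, fresh context/target split.
    x, y = sample_task()
    n_ctx = int(rng.integers(3, 10))
    mean, var = predict_fn(x[:n_ctx], y[:n_ctx], x[n_ctx:])
    return gaussian_nll(y[n_ctx:], mean, var).mean()

loss = meta_step(baseline_predict)
```

Repeating this over many sampled functions is what shapes the predictive distribution across the whole family, rather than fitting any single dataset.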

Cool connection: deep sets arise in neural processes, since the predictive conditionals must be invariant under permutations of the exchangeable context points.
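A minimal numerical check of that permutation invariance, using a deep-sets-style encoder (a per-point network followed by symmetric mean pooling; the weights are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 4))

def deep_set_encoder(pairs):
    """Deep-sets form: apply a network to each (x, y) pair, then pool symmetrically."""
    h = np.tanh(pairs @ W1) @ W2   # per-element features, shape (n, 4)
    return h.mean(axis=0)          # mean pooling makes the output order-free

context = rng.normal(size=(5, 2))
perm = rng.permutation(5)
r1 = deep_set_encoder(context)
r2 = deep_set_encoder(context[perm])
# r1 and r2 agree: the representation ignores the ordering of the context set.
```

Any symmetric pooling (sum, mean, max) would do; attention-based variants such as attentive neural processes replace the pooling with a learned, but still permutation-respecting, aggregation.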

Functional process

Not sure if this is different again? (Louizos et al. 2019)

General stochastic process regression

A different way of relaxing the assumptions of GP regression is to assume non-Gaussian joints. See stochastic process regression.


Fortuin, Vincent. 2022. “Priors in Bayesian Deep Learning: A Review.” International Statistical Review 90 (3): 563–91.
Garnelo, Marta, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, and S. M. Ali Eslami. 2018. “Conditional Neural Processes.” arXiv:1807.01613 [Cs, Stat], July.
Garnelo, Marta, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. 2018. “Neural Processes,” July.
Holderrieth, Peter, Michael J. Hutchinson, and Yee Whye Teh. 2021. “Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes.” In Proceedings of the 38th International Conference on Machine Learning, 4297–307. PMLR.
Jha, Saurav, Dong Gong, Xuesong Wang, Richard E. Turner, and Lina Yao. 2022. “The Neural Process Family: Survey, Applications and Perspectives.” arXiv.
Kim, Hyunjik, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. 2019. “Attentive Neural Processes.” arXiv.
Lin, Xixun, Jia Wu, Chuan Zhou, Shirui Pan, Yanan Cao, and Bin Wang. 2021. “Task-Adaptive Neural Process for User Cold-Start Recommendation.” In Proceedings of the Web Conference 2021, 1306–16. WWW ’21. New York, NY, USA: Association for Computing Machinery.
Louizos, Christos, Xiahan Shi, Klamer Schutte, and Max Welling. 2019. “The Functional Neural Process.” In Advances in Neural Information Processing Systems. Vol. 32. Curran Associates, Inc.
Rasmussen, Carl Edward, and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. Cambridge, Mass: MIT Press.
Singh, Gautam, Jaesik Yoon, Youngsung Son, and Sungjin Ahn. 2019. “Sequential Neural Processes.” arXiv:1906.10264 [Cs, Stat], June.
Ye, Zesheng, Jing Du, and Lina Yao. 2023. “Adversarially Contrastive Estimation of Conditional Neural Processes.” arXiv.
