Position encoding

Also Fourier features

January 21, 2021 — June 23, 2023

approximation
dynamical systems
functional analysis
Hilbert space
machine learning
neural nets

On passing relative location (or features derived from relative location) into neural networks. The idea pops up often, which is interesting in itself, but I am not sure there is anything general to say; I’m not even sure that the position encodings described here are the same kind of object. 🏗️🏗️🏗️

1 In transformers

Position encoding ends up being important in transformers (Dufter, Schmitt, and Schütze 2022).
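A minimal sketch of the classic non-learned sinusoidal encoding that the original transformer used, which is simply added to the token embeddings; the sequence length and model dimension below are arbitrary placeholders.

```python
import numpy as np

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """pe[pos, 2i] = sin(pos / 10000^(2i/d)), pe[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    positions = np.arange(seq_len)[:, None]              # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # (1, d_model // 2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)                          # assumes even d_model
    return pe

# pe = sinusoidal_position_encoding(seq_len=128, d_model=64)
# token_embeddings = token_embeddings + pe                # added, not concatenated
```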

2 In implicit representation networks

Implicit representation networks, such as PINNs and neural radiance fields, act mostly (or only) upon position features.
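For concreteness, a minimal sketch of such a network in the NeRF/SIREN spirit: the *only* input is a coordinate and the output is the field value there. Names, sizes and activations are placeholder choices, not anyone’s reference implementation.

```python
import torch
import torch.nn as nn

class ImplicitField(nn.Module):
    """Coordinate-in, value-out MLP: the only input is a position."""
    def __init__(self, in_dim: int = 2, hidden: int = 256, out_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),          # e.g. RGB at that location
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Fit by sampling coordinates and regressing onto the signal at those points, e.g.
# loss = ((ImplicitField()(coords) - pixel_values) ** 2).mean()
```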

3 Fourier features

Encoding position through sines and cosines of the coordinates. See Tancik et al. (2020) for some theory.
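A minimal sketch of the random Fourier feature mapping from that paper, γ(v) = [cos(2πBv), sin(2πBv)] with a fixed Gaussian matrix B; the scale and feature count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, n_feat, scale = 2, 256, 10.0
B = rng.normal(0.0, scale, size=(d_in, n_feat))    # fixed projection, not trained

def fourier_features(coords: np.ndarray) -> np.ndarray:
    """gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)] as in Tancik et al. (2020)."""
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# xy = rng.uniform(size=(1024, 2))        # coordinates in [0, 1]^2
# fourier_features(xy).shape              # (1024, 512); these feed the MLP
```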

Connection to Fourier features in Gaussian Processes?

See also Fourier Feature Networks.

4 In basis decomposition networks

This idea is, I think, also implicit in any neural network that does basis decomposition, because basis functions encode a “location” in the same way that Fourier features do.
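A hypothetical illustration: evaluating a fixed family of basis functions (here Gaussian bumps on a 1-D grid, purely as an example) at a query point yields a feature vector that localises the point, much as the sin/cos features above do.

```python
import numpy as np

centres = np.linspace(0.0, 1.0, 16)   # basis-function centres on an arbitrary grid
lengthscale = 0.1

def rbf_position_features(x: np.ndarray) -> np.ndarray:
    """Evaluate each Gaussian basis function at each query point."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * lengthscale ** 2))

# rbf_position_features(np.array([0.05, 0.5, 0.95])).shape   # (3, 16)
```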

5 In spatiotemporal networks

Not headline news, but spatiotemporal NNs typically use positional predictors; Fourier neural operators, for example, often pack a position encoding into their inputs.
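The usual pattern in Fourier-neural-operator-style implementations is, as far as I can tell, to concatenate a normalised coordinate grid onto the input channels. A sketch of that pattern, with shapes and normalisation as placeholder choices:

```python
import torch

def append_coordinate_grid(u: torch.Tensor) -> torch.Tensor:
    """Concatenate normalised (y, x) coordinates to a (batch, C, H, W) field."""
    b, _, h, w = u.shape
    ys = torch.linspace(0.0, 1.0, h).view(1, 1, h, 1).expand(b, 1, h, w)
    xs = torch.linspace(0.0, 1.0, w).view(1, 1, 1, w).expand(b, 1, h, w)
    return torch.cat([u, ys, xs], dim=1)     # (batch, C + 2, H, W)

# u = torch.randn(8, 1, 64, 64)              # e.g. an initial condition
# append_coordinate_grid(u).shape            # torch.Size([8, 3, 64, 64])
```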

6 As a means of globally locating a local algorithm

Convnet-like NNs are local: each output depends only on a window of nearby inputs. Feeding in position features is one way to let such a local algorithm know where it is in the global domain.
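A sketch of that idea in the convolutional setting, essentially the CoordConv trick: the same grid concatenation as above, but wired into a conv layer so every local filter can also read its absolute position. Everything here is a placeholder architecture.

```python
import torch
import torch.nn as nn

class CoordAwareConv(nn.Module):
    """A conv layer whose local filters also see absolute (y, x) coordinates."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

# CoordAwareConv(3, 16)(torch.randn(2, 3, 32, 32)).shape   # torch.Size([2, 16, 32, 32])
```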

7 Tooling

8 References

Chen, and Zhang. 2018. “Learning Implicit Fields for Generative Shape Modeling.”
Dufter, Schmitt, and Schütze. 2022. “Position Information in Transformers: An Overview.” Computational Linguistics.
Mescheder, Oechsle, Niemeyer, et al. 2018. “Occupancy Networks: Learning 3D Reconstruction in Function Space.”
Mildenhall, Srinivasan, Tancik, et al. 2020. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.” arXiv:2003.08934 [Cs].
Park, Florence, Straub, et al. 2019. “DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Press, Smith, and Lewis. 2021. “Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation.” arXiv:2108.12409 [Cs].
Sitzmann, Martel, Bergman, et al. 2020. “Implicit Neural Representations with Periodic Activation Functions.” arXiv:2006.09661 [Cs, Eess].
Sitzmann, Zollhoefer, and Wetzstein. 2019. “Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations.” Advances in Neural Information Processing Systems.
Stanley. 2007. “Compositional Pattern Producing Networks: A Novel Abstraction of Development.” Genetic Programming and Evolvable Machines.
Tancik, Srinivasan, Mildenhall, et al. 2020. “Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains.” In Advances in Neural Information Processing Systems.
Wang, Li, Khabsa, et al. 2020. “Linformer: Self-Attention with Linear Complexity.”