Position encoding
Also Fourier features
January 21, 2021 — June 23, 2023
On passing relative location (or features derived from relative locations) into neural networks. Pops up often. That it pops up often is interesting, but I am not sure there is something general to say; I’m not even sure that the position encodings described here are all the same kind of object. 🏗️🏗️🏗️
1 In transformers
Position encoding ends up being important in transformers (Dufter, Schmitt, and Schütze 2022).
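That survey catalogues many schemes; as one concrete instance, here is a minimal numpy sketch of the fixed sinusoidal encoding used in the original transformer (function name and shapes are my own choices):

```python
import numpy as np

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed sinusoidal encoding: even columns hold
    sin(pos / 10000**(2i/d_model)), odd columns the matching cosine.
    Assumes d_model is even. Returns shape (seq_len, d_model)."""
    positions = np.arange(seq_len)[:, None]             # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]            # (1, d_model/2)
    angles = positions / (10000.0 ** (dims / d_model))  # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Typically added to the token embeddings before the first attention layer:
# x = token_embeddings + sinusoidal_position_encoding(seq_len, d_model)
```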
2 In implicit representation networks
Implicit representation networks, such as PINNs and neural radiance fields, act mostly (or only) upon position features.
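A minimal PyTorch sketch of what that means in practice (hypothetical, untrained architecture; the whole input is a coordinate):

```python
import torch
import torch.nn as nn

# Toy implicit image network: maps coordinates (x, y) in [0, 1]^2 to RGB.
net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3), nn.Sigmoid(),
)

# "Rendering" is just querying the network on a grid of positions.
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, 32), torch.linspace(0, 1, 32), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)  # (1024, 2)
image = net(coords).reshape(32, 32, 3)                 # untrained; shows the interface
```

Since position is the *only* input here, how it is encoded decides what the network can represent, which is where the next section comes in.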
3 Fourier features
Encoding the position through its sine and cosine. See Tancik et al. (2020) for some theory.
Connection to Fourier features in Gaussian Processes?
See also Fourier Feature Networks.
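A minimal numpy sketch of the Gaussian random Fourier feature map from Tancik et al. (2020); the dimensions and scale here are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_features, scale = 2, 64, 10.0  # scale sets the bandwidth prior

# B is sampled once and then frozen; it is not trained.
B = rng.normal(scale=scale, size=(d, n_features))

def fourier_features(v: np.ndarray) -> np.ndarray:
    """gamma(v) = [cos(2*pi*vB), sin(2*pi*vB)], per Tancik et al. (2020).
    v: (..., d) coordinates -> (..., 2 * n_features) features."""
    proj = 2.0 * np.pi * (v @ B)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# These features replace raw coordinates at the input of a coordinate MLP;
# larger `scale` lets the downstream network fit higher-frequency detail.
```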
4 In basis decomposition networks
This idea is, I think, also implicit in any neural network that does basis decomposition, because basis functions encode a “location” in the same way that Fourier features do.
5 In spatiotemporal networks
Nothing headline-worthy, but spatiotemporal NNs typically use positional predictors; for example, Fourier Neural Operators often pack a position encoding into their inputs.
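As I understand it, FNO-style implementations commonly concatenate a normalised coordinate grid onto the input channels; a numpy sketch of that pattern (function name and layout are mine):

```python
import numpy as np

def append_position_channels(u: np.ndarray) -> np.ndarray:
    """Append normalised (x, y) coordinate channels to a field.

    u: (batch, h, w, c) input field -> (batch, h, w, c + 2),
    so the operator sees the solution field together with the
    query locations of each grid point."""
    b, h, w, _ = u.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, h), np.linspace(0, 1, w),
                         indexing="ij")
    grid = np.broadcast_to(np.stack([xs, ys], axis=-1), (b, h, w, 2))
    return np.concatenate([u, grid], axis=-1)
```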
6 As a means of globally locating a local algorithm
Convnet-like NNs are local. Position encoding is one way to hand such a local algorithm its global location, e.g. by appending absolute coordinates as extra input channels (the CoordConv trick) so that each local operation can condition on where it is.
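A toy numpy demonstration of the point: convolution outputs shift with their inputs, so a pure convnet has no notion of absolute position until a coordinate channel breaks the symmetry.

```python
import numpy as np
from scipy.signal import convolve2d

# Translation equivariance: a convolution cannot tell *where* a feature is.
kernel = np.ones((3, 3)) / 9.0
x = np.zeros((8, 8))
x[2, 2] = 1.0
x_shifted = np.roll(x, (3, 3), axis=(0, 1))

out = convolve2d(x, kernel, mode="same")
out_shifted = convolve2d(x_shifted, kernel, mode="same")
assert np.allclose(np.roll(out, (3, 3), axis=(0, 1)), out_shifted)

# An absolute-coordinate channel is *not* shift-invariant, so appending it
# lets later (still local) layers condition on global location.
coords = np.broadcast_to(np.linspace(0.0, 1.0, 8), (8, 8))
assert not np.allclose(np.roll(coords, 3, axis=1), coords)
```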