Neural nets for “implicit representations”



TBD. See Tancik et al. (2020).
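Below is a minimal sketch of the random Fourier feature encoding described in Tancik et al. (2020): project low-dimensional coordinates through a random Gaussian frequency matrix and take sines and cosines, then feed the result to an ordinary coordinate MLP instead of the raw coordinates. The function and variable names (`fourier_features`, `B`, `sigma`) are illustrative, not from any particular library.

```python
import numpy as np

def fourier_features(coords, B):
    """Map low-dimensional coordinates to random Fourier features.

    coords: (n, d) array of input coordinates (e.g. pixel positions in [0, 1]^2)
    B:      (d, m) matrix of random frequencies, entries drawn from N(0, sigma^2)
    returns (n, 2m) array [cos(2*pi*coords @ B), sin(2*pi*coords @ B)]
    """
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
sigma = 10.0   # frequency scale: larger values let the network fit higher-frequency detail
d, m = 2, 256  # input dimension (2D pixel coordinates), number of random frequencies
B = sigma * rng.standard_normal((d, m))

# Encode a grid of 2D coordinates; these features, rather than the raw (x, y) pairs,
# would be the input to an MLP regressing e.g. RGB values of an image.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)   # shape (4096, 2)
features = fourier_features(coords, B)                 # shape (4096, 512)
print(features.shape)
```

The frequency scale `sigma` trades off smoothness against detail; the paper tunes it per task.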

References

Chen, Zhiqin, and Hao Zhang. 2018. “Learning Implicit Fields for Generative Shape Modeling,” December.
Mescheder, Lars, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. 2018. “Occupancy Networks: Learning 3D Reconstruction in Function Space,” December.
Mildenhall, Ben, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. “NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis.” arXiv:2003.08934 [Cs], August.
Park, Jeong Joon, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. 2019. “DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation,” January.
Press, Ofir, Noah A. Smith, and Mike Lewis. 2021. “Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation.” arXiv:2108.12409 [Cs], August.
Sitzmann, Vincent, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. 2020. “Implicit Neural Representations with Periodic Activation Functions.” arXiv:2006.09661 [Cs, Eess], June.
Sitzmann, Vincent, Michael Zollhoefer, and Gordon Wetzstein. 2019. “Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations.” Advances in Neural Information Processing Systems 32: 1121–32.
Stanley, Kenneth O. 2007. “Compositional Pattern Producing Networks: A Novel Abstraction of Development.” Genetic Programming and Evolvable Machines 8 (2): 131–62.
Tancik, Matthew, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. 2020. “Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains.” arXiv:2006.10739 [Cs], June.
