Meta learning

Few-shot learning, learning fast weights, learning to learn



Placeholder notebook for what we now call few-shot learning.

Schmidhuber discusses this in terms of neural nets that learn to program other neural nets with fast weights, dating the idea to the 1990s (Schmidhuber 1992); Schlag, Irie, and Schmidhuber (2021) relate it to modern transformer models.
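The mechanism is simple to sketch: a slow network emits keys and values that write rank-one updates into a fast weight matrix, which queries then read out. Here is a minimal numpy sketch of that update rule, with random keys, values, and queries standing in for the slow network's outputs (the dimensions and feature map are illustrative assumptions, not anyone's published configuration); with an identity feature map the loop coincides with unnormalised linear attention, which is the observation of Schlag, Irie, and Schmidhuber (2021).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # key/value dimension (arbitrary for illustration)
T = 16  # sequence length

# In a real model a "slow" network would produce these; here they are random.
keys    = rng.normal(size=(T, d))
values  = rng.normal(size=(T, d))
queries = rng.normal(size=(T, d))

def phi(x):
    """Feature map on keys/queries; identity gives plain linear attention."""
    return x

# Fast weight matrix, programmed on the fly by the input sequence.
W = np.zeros((d, d))
outputs = []
for k, v, q in zip(keys, values, queries):
    W += np.outer(v, phi(k))    # write: rank-one, Hebbian-style update
    outputs.append(W @ phi(q))  # read: query the current fast weights

outputs = np.stack(outputs)  # matches unnormalised linear attention
```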

More mainstream presentations are in terms of meta-learning proper, the canonical example being MAML (Finn, Abbeel, and Levine 2017): learn an initialisation such that a few gradient steps on a new task's support set already perform well on that task's query set. A minimal sketch follows.
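As a sketch of the idea rather than any particular library's API, here is first-order MAML (the variant that drops the second-order terms, cf. Finn, Abbeel, and Levine 2017) on a hypothetical toy family of 1-D linear regression tasks; the task distribution, model, and step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task():
    """Toy task family (hypothetical): y = a*x with a drawn per task."""
    a = rng.uniform(-2.0, 2.0)
    def data(n=10):
        x = rng.normal(size=n)
        return x, a * x
    return data

def loss_and_grad(w, x, y):
    """Squared error of the model y_hat = w*x, and its gradient in w."""
    err = w * x - y
    return np.mean(err**2), np.mean(2 * err * x)

w = 0.0                     # meta-parameter: the initialisation to be learned
alpha, beta = 0.01, 0.001   # inner- and outer-loop step sizes

for step in range(1000):
    data = sample_task()
    x_s, y_s = data()       # support set: adapt on this
    x_q, y_q = data()       # query set: evaluate the adapted model

    # Inner loop: one gradient step from the shared initialisation.
    _, g_s = loss_and_grad(w, x_s, y_s)
    w_adapted = w - alpha * g_s

    # Outer loop (first-order approximation): update the initialisation
    # using the query-set gradient at the adapted parameters.
    _, g_q = loss_and_grad(w_adapted, x_q, y_q)
    w -= beta * g_q
```

Full MAML would differentiate through the inner-loop step, i.e. backpropagate through `w_adapted` as a function of `w`; implicit-gradient variants (Rajeswaran et al. 2019) avoid storing that computation graph.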

References

Antoniou, Antreas, Harrison Edwards, and Amos Storkey. 2019. “How to Train Your MAML.” arXiv:1810.09502 [cs, Stat], March. http://arxiv.org/abs/1810.09502.
Arnold, Sébastien M. R., Praateek Mahajan, Debajyoti Datta, Ian Bunner, and Konstantinos Saitas Zarkias. 2020. “Learn2learn: A Library for Meta-Learning Research.” arXiv:2008.12284 [cs, Stat], August. http://arxiv.org/abs/2008.12284.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” arXiv:2005.14165 [cs], June. http://arxiv.org/abs/2005.14165.
Erven, Tim van, and Wouter M Koolen. 2016. “MetaGrad: Multiple Learning Rates in Online Learning.” In Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, 3666–74. Curran Associates, Inc. http://papers.nips.cc/paper/6268-metagrad-multiple-learning-rates-in-online-learning.pdf.
Fiebrink, Rebecca, Dan Trueman, and Perry R. Cook. 2009. “A Metainstrument for Interactive, on-the-Fly Machine Learning.” In Proceedings of NIME, 2:3. http://vigliensoni.com/BUP/McGILL/1_THESIS/writing/PAPERS/Fiebrink09_NIME.pdf.
Finn, Chelsea, Pieter Abbeel, and Sergey Levine. 2017. “Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks.” arXiv:1703.03400 [cs], July. http://arxiv.org/abs/1703.03400.
Künzel, Sören R., Jasjeet S. Sekhon, Peter J. Bickel, and Bin Yu. 2019. “Metalearners for Estimating Heterogeneous Treatment Effects Using Machine Learning.” Proceedings of the National Academy of Sciences 116 (10): 4156–65. https://doi.org/10.1073/pnas.1804597116.
Lee, Kwonjoon, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. 2019. “Meta-Learning with Differentiable Convex Optimization,” April. https://arxiv.org/abs/1904.03758v2.
Medasani, Bharat, Anthony Gamst, Hong Ding, Wei Chen, Kristin A. Persson, Mark Asta, Andrew Canning, and Maciej Haranczyk. 2016. “Predicting defect behavior in B2 intermetallics by merging ab initio modeling and machine learning.” npj Computational Materials 2 (1): 1. https://doi.org/10.1038/s41524-016-0001-z.
Munkhdalai, Tsendsuren, Alessandro Sordoni, Tong Wang, and Adam Trischler. 2019. “Metalearned Neural Memory.” In Advances In Neural Information Processing Systems. http://arxiv.org/abs/1907.09720.
Pestourie, Raphaël, Youssef Mroueh, Thanh V. Nguyen, Payel Das, and Steven G. Johnson. 2020. “Active Learning of Deep Surrogates for PDEs: Application to Metasurface Design.” Npj Computational Materials 6 (1): 1–7. https://doi.org/10.1038/s41524-020-00431-2.
Rajeswaran, Aravind, Chelsea Finn, Sham Kakade, and Sergey Levine. 2019. “Meta-Learning with Implicit Gradients,” September. https://arxiv.org/abs/1909.04630v1.
Schlag, Imanol, Kazuki Irie, and Jürgen Schmidhuber. 2021. “Linear Transformers Are Secretly Fast Weight Programmers.” arXiv:2102.11174 [cs], June. http://arxiv.org/abs/2102.11174.
Schmidhuber, Jürgen. 1992. “Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks.” Neural Computation 4 (1): 131–39. https://doi.org/10.1162/neco.1992.4.1.131.
Uttl, Bob, Carmela A. White, and Daniela Wong Gonzalez. 2017. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related.” Studies in Educational Evaluation, Evaluation of teaching: Challenges and promises, 54 (September): 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007.
