Differentiable learning of automata

October 14, 2016 — March 5, 2024

machine learning
making things
neural nets

Learning stack machines, random access machines, nested hierarchical parsing machines, Turing machines and whatever other automata-with-memory that you wish, from data. In other words, teaching computers to program themselves, via a deep learning formalism.

Figure 1: Differentiable pointers
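The trick behind such "differentiable pointers" is soft addressing, as in the Neural Turing Machine (Graves, Wayne, and Danihelka 2014): instead of a discrete index into memory, the controller emits a probability distribution over slots, so reads and writes become expectations that gradients can flow through. A minimal numpy sketch (variable names are mine, not from any particular paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Memory: N slots, each a vector of width M.
N, M = 8, 4
rng = np.random.default_rng(0)
memory = rng.normal(size=(N, M))

def content_address(memory, key, beta):
    """Soft pointer: cosine similarity between the key and each slot,
    sharpened by beta, normalised into a distribution over slots."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms
    return softmax(beta * sim)

def read(memory, w):
    # Expected slot content under the soft pointer -- differentiable in w.
    return w @ memory

def write(memory, w, erase, add):
    # Blend an erase/add update into every slot, weighted by the pointer.
    return memory * (1 - np.outer(w, erase)) + np.outer(w, add)

key = memory[3] + 0.01 * rng.normal(size=M)  # a slightly noisy query
w = content_address(memory, key, beta=10.0)  # concentrates on slot 3
r = read(memory, w)
```

Because every step is a smooth function of the key, the sharpness and the memory contents, the whole read/write cycle can sit inside a network trained by backpropagation.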

This is a fairly obvious idea, and there are some charming toy examples. Indeed, this is sort of what we have traditionally imagined AI might do.
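One of those toy examples, the continuous stack of Grefenstette et al. (2015), is compact enough to sketch: push and pop become strengths in [0, 1] rather than discrete operations, so the stack state is differentiable with respect to the controller's decisions. A rough numpy rendering (my simplification, not the authors' code):

```python
import numpy as np

class NeuralStack:
    """Continuous stack in the spirit of Grefenstette et al. (2015):
    push and pop take strengths in [0, 1], so every operation is
    differentiable in those strengths."""

    def __init__(self, width):
        self.V = np.zeros((0, width))  # stored vectors, bottom to top
        self.s = np.zeros(0)           # remaining strength of each entry

    def step(self, v, push, pop):
        # Popping consumes strength from the top of the stack downwards.
        old = self.s
        new = np.array([
            max(0.0, old[i] - max(0.0, pop - old[i + 1:].sum()))
            for i in range(len(old))
        ])
        # Then push v with the given strength.
        self.V = np.vstack([self.V, v])
        self.s = np.append(new, push)

    def read(self):
        # Blend the top ~1 unit of strength's worth of entries.
        r = np.zeros(self.V.shape[1])
        for i in range(len(self.s)):
            r += min(self.s[i], max(0.0, 1.0 - self.s[i + 1:].sum())) * self.V[i]
        return r

stack = NeuralStack(2)
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
stack.step(a, push=1.0, pop=0.0)            # push a
stack.step(b, push=1.0, pop=0.0)            # push b
stack.step(np.zeros(2), push=0.0, pop=1.0)  # pop; read() now returns a
```

With fractional strengths the same machinery interpolates smoothly between "pushed" and "not pushed", which is what lets gradient descent search over stack programs.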

Obviously a hypothetical superhuman Artificial General Intelligence would be good at handling computer-science problems; it is not the absolute hippest research area right now, though, on account of being hard in general, just as earlier attempts led us to expect. Some progress has been made. My sense is that most of the hyped research that looks like differentiable computer learning happens either in the slightly-better-contained area of reinforcement learning, where more progress can be made, or in the hot area of transformer networks, which are harder to explain but solve the same kind of problems whilst looking different inside.

Related: grammatical inference.

1 Incoming

Blazek claims his neural networks implement predicate logic directly and yet are tractable, which would be interesting to look into (Blazek and Lin 2021, 2020; Blazek, Venkatesh, and Lin 2021).

Google branded: Differentiable neural computers.

Christopher Olah’s characteristically pedagogic intro.

Adrian Colyer’s introduction to neural Turing machines.

Andrej Karpathy’s memory machine list.

Facebook’s GTN might solve this kind of problem:

GTN is an open source framework for automatic differentiation with a powerful, expressive type of graph called weighted finite-state transducers (WFSTs). Just as PyTorch provides a framework for automatic differentiation with tensors, GTN provides such a framework for WFSTs. AI researchers and engineers can use GTN to more effectively train graph-based machine learning models.
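The names below are mine, not GTN's actual API; this is just a sketch of the underlying idea. The total log-weight of all accepting paths (shortest-distance in the log semiring) is computed from nothing but sums and logsumexps, so it is differentiable in the arc weights, and a framework like GTN can backpropagate through it:

```python
import numpy as np

def logsumexp(xs):
    m = max(xs)
    return m + np.log(sum(np.exp(x - m) for x in xs))

# A toy weighted acceptor over the alphabet {a, b}:
# arcs are (src_state, dst_state, label, log_weight).
arcs = [
    (0, 0, "a", -0.1),
    (0, 1, "b", -1.0),
    (1, 1, "b", -0.5),
]
start, final = 0, 1

def forward_score(arcs, start, final, string):
    """Total log-weight of all paths that accept `string`.
    Every operation is an addition or a logsumexp, hence
    differentiable in the arc weights."""
    alpha = {start: 0.0}  # log-weight of reaching each state
    for symbol in string:
        nxt = {}
        for (src, dst, label, w) in arcs:
            if label == symbol and src in alpha:
                nxt.setdefault(dst, []).append(alpha[src] + w)
        alpha = {state: logsumexp(ws) for state, ws in nxt.items()}
    return alpha.get(final, -np.inf)

# Only one accepting path for "aab": 0->0->0->1, weight -0.1 -0.1 -1.0 = -1.2.
score = forward_score(arcs, start, final, "aab")
```

Swapping logsumexp for max recovers the Viterbi (tropical) semiring; the differentiable-WFST literature (Hannun et al. 2020) generalises exactly this kind of recursion.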

2 References

Blazek, and Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [Cs, q-Bio].
———. 2021. “Explainable Neural Networks That Simulate Reasoning.” Nature Computational Science.
Blazek, Venkatesh, and Lin. 2021. “Deep Distilling: Automated Code Generation Using Explainable Deep Learning.” arXiv:2111.08275 [Cs].
Bottou. 2011. “From Machine Learning to Machine Reasoning.” arXiv:1102.1808 [Cs].
Bubeck, Chandrasekaran, Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”
Clark, Tafjord, and Richardson. 2020. “Transformers as Soft Reasoners over Language.” In IJCAI 2020.
Ellis, Solar-Lezama, and Tenenbaum. 2016. “Sampling for Bayesian Program Learning.” In Advances in Neural Information Processing Systems 29.
Garcez, and Lamb. 2020. “Neurosymbolic AI: The 3rd Wave.”
Graves, Wayne, and Danihelka. 2014. “Neural Turing Machines.” arXiv:1410.5401 [Cs].
Graves, Wayne, Reynolds, et al. 2016. “Hybrid Computing Using a Neural Network with Dynamic External Memory.” Nature.
Grefenstette, Hermann, Suleyman, et al. 2015. “Learning to Transduce with Unbounded Memory.” arXiv:1506.02516 [Cs].
Gulcehre, Chandar, Cho, et al. 2016. “Dynamic Neural Turing Machine with Soft and Hard Addressing Schemes.” arXiv:1607.00036 [Cs].
Hannun, Pratap, Kahn, et al. 2020. “Differentiable Weighted Finite-State Transducers.” arXiv:2010.01003 [Cs, Stat].
Ibarz, Kurin, Papamakarios, et al. 2022. “A Generalist Neural Algorithmic Learner.”
Ikeda. 1989. “Decentralized Control of Large Scale Systems.” In Three Decades of Mathematical System Theory: A Collection of Surveys at the Occasion of the 50th Birthday of Jan C. Willems. Lecture Notes in Control and Information Sciences.
Jaitly, Sussillo, Le, et al. 2015. “A Neural Transducer.” arXiv:1511.04868 [Cs].
Kaiser, and Sutskever. 2015. “Neural GPUs Learn Algorithms.” arXiv:1511.08228 [Cs].
Kim, and Bassett. 2022. “A Neural Programming Language for the Reservoir Computer.” arXiv:2203.05032 [Cond-Mat, Physics:nlin].
Lai, Domke, and Sheldon. 2022. “Variational Marginal Particle Filters.” In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics.
Lamb, Garcez, Gori, et al. 2020. “Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective.” In IJCAI 2020.
Lample, and Charton. 2019. “Deep Learning for Symbolic Mathematics.” arXiv:1912.01412 [Cs].
Looks, Herreshoff, Hutchins, et al. 2017. “Deep Learning with Dynamic Computation Graphs.” In Proceedings of ICLR.
Perez, and Liu. 2016. “Gated End-to-End Memory Networks.” arXiv:1610.04211 [Cs, Stat].
Putzky, and Welling. 2017. “Recurrent Inference Machines for Solving Inverse Problems.” arXiv:1706.04008 [Cs].
Veličković, and Blundell. 2021. “Neural Algorithmic Reasoning.” Patterns.
Wang, Xin, Chen, and Zhu. 2021. “A Survey on Curriculum Learning.”
Wang, Junxiong, Gangavarapu, Yan, et al. 2024. “MambaByte: Token-Free Selective State Space Model.”
Wang, Cheng, and Niepert. 2019. “State-Regularized Recurrent Neural Networks.”
Wei, Fan, Carin, et al. 2017. “An Inner-Loop Free Solution to Inverse Problems Using Deep Neural Networks.” arXiv:1709.01841 [Cs].
Weston, Chopra, and Bordes. 2014. “Memory Networks.” arXiv:1410.3916 [Cs, Stat].
Wu, Tan, Wang, et al. 2024. “Beyond Language Models: Byte Models Are Digital World Simulators.”
Zhang, Backurs, Bubeck, et al. 2022. “Unveiling Transformers with LEGO: A Synthetic Reasoning Task.”