Learning stack machines, random access machines, nested hierarchical parsing machines, Turing machines and whatever other automata-with-memory you wish, from data. In other words, teaching computers to program themselves, via a deep learning formalism.
This is a kind of obvious idea and there are some charming toy examples. Indeed, this is sort of what we have traditionally imagined AI might do.
Obviously a hypothetical superhuman Artificial General Intelligence would be good at this kind of problem. It’s not the absolute hippest research area right now though, on account of being hard in general, just as earlier attempts led us to expect. Some progress has been made. My sense is that most of the hyped research that looks like differentiable computer learning is either in the slightly better-contained area of reinforcement learning, where more progress can be made, or in the hot area of transformer networks, which are harder to explain but solve the same kinds of problems.
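To make the idea of a differentiable computer concrete, here is a minimal sketch (my own illustration, not taken from any of the linked papers) of the content-based memory addressing that Neural Turing Machines and their relatives are built on: a read from memory becomes a soft attention over slots, so where-to-read is itself learned by gradient descent.

```python
import torch
import torch.nn.functional as F

def content_address(memory, key, beta):
    """Soft read from a memory matrix by cosine similarity.

    memory: (N, W) tensor -- N slots of width W
    key:    (W,) query vector emitted by a controller network
    beta:   scalar sharpness; larger values approach a hard lookup

    Everything here is differentiable, so gradients tell the
    controller where it should have read.
    """
    similarity = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (N,)
    weights = torch.softmax(beta * similarity, dim=0)                  # (N,)
    read_vector = weights @ memory                                     # (W,)
    return read_vector, weights

# Toy usage: a fixed memory and a learnable key.
memory = torch.randn(8, 16)
key = torch.randn(16, requires_grad=True)
read, weights = content_address(memory, key, beta=torch.tensor(5.0))
read.sum().backward()  # gradients flow back into the key
```

A full memory machine handles writes the same way, with soft erase and add operations over the same attention weights.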
Related: grammatical inference.
Google branded: Differentiable neural computers.
Christopher Olah’s characteristically pedagogic intro.
Adrian Colyer’s introduction to neural Turing machines.
Andrej Karpathy’s memory machine list has some good starting points.
Facebook’s GTN might be a tool here:
GTN is an open source framework for automatic differentiation with a powerful, expressive type of graph called weighted finite-state transducers (WFSTs). Just as PyTorch provides a framework for automatic differentiation with tensors, GTN provides such a framework for WFSTs. AI researchers and engineers can use GTN to more effectively train graph-based machine learning models.
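For flavour, here is a sketch of what that looks like in use, along the lines of the GTN quickstart; the particular calls (gtn.Graph, add_node, add_arc, gtn.intersect, gtn.forward_score, gtn.backward) are as I recall the Python API and may have drifted between versions, so treat them as assumptions rather than gospel.

```python
import gtn  # pip install gtn

# A tiny acceptor over the label set {0, 1}: start -> middle -> accept.
g1 = gtn.Graph()
g1.add_node(True)         # start node (index 0)
g1.add_node()             # internal node (index 1)
g1.add_node(False, True)  # accept node (index 2)
g1.add_arc(0, 1, 0)       # add_arc(src, dst, label); weight defaults to 0
g1.add_arc(0, 1, 1)
g1.add_arc(1, 2, 0)
g1.add_arc(1, 2, 1)

# A one-state loop over the same labels, used as a constraint graph.
g2 = gtn.Graph()
g2.add_node(True, True)
g2.add_arc(0, 0, 0)
g2.add_arc(0, 0, 1)

# Differentiable graph operations, analogous to tensor ops in PyTorch.
composed = gtn.intersect(g1, g2)    # WFST intersection
loss = gtn.forward_score(composed)  # log-sum-exp over all accepting paths

# Backprop populates gradients with respect to the arc weights.
gtn.backward(loss)
```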