# Grammar induction

May 30, 2011 — August 9, 2021

Mathematically speaking, inferring the “formal language” which can describe a set of expressions. In the slightly looser sense used by linguists studying natural human language, discovering the syntactic rules of a given language, which is kinda the same thing but with every term sloppier, and the subject matter itself messier.
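To make "inferring the formal language" concrete, here is a minimal sketch of the first step of many classic induction algorithms: build a prefix-tree acceptor (PTA) from positive example strings. Real algorithms such as RPNI then merge states to generalize; this toy (all names invented for illustration) just constructs the PTA and queries it.

```python
# Build a prefix-tree acceptor (PTA) from positive examples: a trie-shaped
# automaton that accepts exactly the example strings. State-merging
# algorithms (e.g. RPNI) would generalize from here; we stop at the PTA.

def build_pta(examples):
    """Return (transitions, accepting) for a prefix-tree acceptor."""
    transitions = {}   # (state, symbol) -> next state
    accepting = set()
    next_state = 1     # state 0 is the root
    for word in examples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

def accepts(pta, word):
    transitions, accepting = pta
    state = 0
    for symbol in word:
        if (state, symbol) not in transitions:
            return False   # fell off the tree: never observed
        state = transitions[(state, symbol)]
    return state in accepting

pta = build_pta(["ab", "abab", "ababab"])
print(accepts(pta, "abab"))  # True: an observed example
print(accepts(pta, "ba"))    # False: never observed
```

Without state merging the PTA overfits, accepting only the training strings; the interesting (and hard) part of grammar induction is deciding which merges to make.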

This is already a crazily complex area, and being naturally perverse, I am interested in an especially esoteric corner of it, to wit, grammars of things that *aren’t* speech; inferring design grammars, say, could allow you to produce more things off the same “basic plan” from some examples of the thing. Look at enough trees and you know how to build the rest of the forest, that kind of thing. That is called *inverse procedural modelling*, apparently. I’m especially interested in things expressed not as a sequence of symbols from a finite alphabet — i.e. not over the free monoid — or in a partially-observed context. This is a boutique interest, although not entirely novel.

Fun aside: In the 19th century it was fashionable to think of design as a grammatical thing, although I am not exactly clear on how similar their notion of *grammar* was to mine.

I also care about probabilistic grammars, i.e. assigning measures to these things.
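A probabilistic grammar in this sense is just production rules with weights, which induces a measure over the strings of the language. A minimal sketch, with a toy PCFG invented for illustration:

```python
import random

# A toy PCFG: each nonterminal maps to (right-hand side, probability)
# pairs whose probabilities sum to 1. Anything not in the table is a
# terminal. Sampling top-down from S draws a string from the measure
# the grammar assigns to its language.

PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("the", "N"), 1.0)],
    "VP": [(("V",), 0.6), (("V", "NP"), 0.4)],
    "N":  [(("tree",), 0.5), (("forest",), 0.5)],
    "V":  [(("grows",), 1.0)],
}

def sample(symbol="S", rng=random):
    if symbol not in PCFG:           # terminal symbol
        return [symbol]
    rules = [rhs for rhs, _ in PCFG[symbol]]
    weights = [p for _, p in PCFG[symbol]]
    rhs = rng.choices(rules, weights=weights)[0]
    return [tok for s in rhs for tok in sample(s, rng)]

print(" ".join(sample()))  # e.g. "the forest grows" or "the tree grows the forest"
```

The induction problem is then two-layered: infer the rules, and estimate the weights (for a fixed rule set, the inside-outside algorithm is the standard EM approach).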

Normally design grammars deal with simple languages, such as, say, “regular” languages. I’m interested in things a rung or two up the Chomsky hierarchy: context-free grammars, maybe even context-sensitive ones.

See also design grammars, iterated function systems, and my defunct research proposal in this area. Cosma Shalizi’s inevitable mention in 3, 2, 1… go.

A nice position paper by Peter Norvig on Chomsky and statistical versus explanatory models of natural language syntax. Full of sick burns.

Basically, Chomsky says:

> It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

Norvig then says that, actually, opaque predictive approximations are OK and scientifically interesting. That was 2011, and since then the scruffy, dense, opaque, predictive models have only grown more ascendant in language processing, particularly transformer networks, which do a disturbingly good job of handling language without an explicit grammar.
