Grammar induction
May 30, 2011 — August 9, 2021
Mathematically speaking, inferring the “formal language” that can describe a set of expressions. In the slightly looser sense used by linguists studying natural human language, discovering the syntactic rules of a given language, which is kinda the same thing, but with every term sloppier and the subject matter itself messier.
I also care about probabilistic grammars, i.e. assigning measures to these things.
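To make “assigning measures” concrete, here is a minimal sketch in Python of a probabilistic context-free grammar (PCFG): each rule carries a probability, so every derivation gets a measure, namely the product of the probabilities of the rules it uses. The grammar and its weights are invented for illustration, not taken from any corpus.

```python
import random

# A toy probabilistic context-free grammar (PCFG), invented for illustration.
# Each nonterminal maps to (right-hand side, probability) pairs; the
# probabilities for each nonterminal sum to 1, so the grammar defines a
# probability measure over derivations.
PCFG = {
    "S":   [(("NP", "VP"), 1.0)],
    "NP":  [(("Det", "N"), 0.7), (("NP", "PP"), 0.3)],
    "VP":  [(("V", "NP"), 0.8), (("VP", "PP"), 0.2)],
    "PP":  [(("P", "NP"), 1.0)],
    "Det": [(("the",), 0.6), (("a",), 0.4)],
    "N":   [(("linguist",), 0.5), (("grammar",), 0.5)],
    "V":   [(("infers",), 1.0)],
    "P":   [(("of",), 1.0)],
}

def sample(symbol="S"):
    """Expand `symbol` top-down; return (words, probability of the derivation).

    The measure of a derivation is the product of the probabilities of the
    rules it uses. The recursive rules above are subcritical, so sampling
    terminates with probability 1.
    """
    if symbol not in PCFG:                       # a terminal: emit the word
        return [symbol], 1.0
    expansions = PCFG[symbol]
    weights = [p for _, p in expansions]
    rhs, p = random.choices(expansions, weights=weights, k=1)[0]
    words, prob = [], p
    for child in rhs:
        w, q = sample(child)
        words += w
        prob *= q
    return words, prob

words, prob = sample()
print(" ".join(words), f"(derivation probability {prob:.4g})")
```

Grammar induction then runs the other way: given a pile of sampled sentences, recover something like the rule table and its weights.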
See also design grammars, iterated function systems, and my defunct research proposal in this area. Cosma Shalizi’s inevitable mention in 3, 2, 1… go.
1 Classical computation versus generative modeling
This field has changed a lot in recent times. See Peter Norvig’s nice position paper on Chomsky and on statistical versus explanatory models of natural language syntax. Full of sick burns.
Basically, Chomsky says
It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.
Norvig then says that, actually, opaque predictive approximations are OK and scientifically interesting. That was in 2011, and since then the scruffy, dense, opaque, predictive models have continued their ascendancy in language processing, particularly transformer networks, which do a disturbingly good job of handling language without an explicit grammar. Is something different going on? Maybe (Kleinberg and Mullainathan 2024).
2 Design grammars
This is already a crazily complex area and, being naturally perverse, I am interested in an especially esoteric corner of it, to wit, grammars of things that aren’t speech. Inferring design grammars, say, could allow you to produce more things off the same “basic plan” from some examples of the thing. Look at enough trees and you know how to build the rest of the forest, that kind of thing. That is called inverse procedural modelling, apparently. I’m especially interested in things expressed not as a sequence of symbols from a finite alphabet (i.e. not over the free monoid), or only partially observed.
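As a taste of the forward problem that inverse procedural modelling inverts, here is a minimal sketch of a deterministic L-system, a classic grammar formalism for plant-like structures. The rules are the textbook “fractal plant” of Lindenmayer and Prusinkiewicz, not anything induced from data; a real design-grammar pipeline would be messier.

```python
# A minimal deterministic L-system. Symbols are turtle-graphics commands:
# F = draw a segment, + and - = turn, [ and ] = push/pop a branch point.
# Inverse procedural modelling would run the other way: recover rules like
# these from examples of finished structures.
RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

def expand(axiom: str, steps: int) -> str:
    """Rewrite every symbol in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(c, c) for c in s)
    return s

print(expand("X", 2))  # a turtle program describing a small 'tree'
```

Two rewriting steps of a one-character axiom already describe a respectable shrub, which is the whole appeal: a tiny rule set carrying a big family of structures.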
Normally design grammars deal with simple languages, such as, say, “regular” languages. I’m interested in things a rung or two up the Chomsky hierarchy: context-free grammars, maybe even context-sensitive ones.
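To make “a rung or two up” concrete, a minimal sketch of the textbook separating example: the language { a^n b^n : n ≥ 0 } is generated by the context-free grammar S → a S b | ε but is provably not regular, since a finite automaton cannot count unboundedly many a’s to match against the b’s.

```python
# S -> a S b | ε generates { a^n b^n }: context-free, but not regular.

def generate(n: int) -> str:
    """Apply S -> a S b exactly n times, then S -> ε."""
    return "a" * n + "b" * n

def recognise(s: str) -> bool:
    """Recursive-descent recogniser mirroring the two productions of S."""
    if s == "":                                  # S -> ε
        return True
    return s.startswith("a") and s.endswith("b") and recognise(s[1:-1])

assert recognise(generate(5))
assert not recognise("aabbb")
```

The recogniser needs a call stack (equivalently, a pushdown store), which is exactly the extra machinery that separates this rung from the regular one.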