Mathematically speaking: inferring the “formal language” that can describe a given set of expressions. In the slightly looser sense used by linguists studying natural human language: discovering the syntactic rules of a given language, which is kinda the same thing but with every term sloppier, and the subject matter itself messier.

This is already a crazily complex area, and being naturally perverse, I am
interested in an especially esoteric corner of it, to wit,
grammars of things that *aren’t* speech;
inferring design grammars, say, could allow
you to produce more things off the same “basic plan” from some examples of the
thing.
Look at enough trees and you know how to build the rest of the forest, that
kind of thing.
That is called *inverse procedural modelling*, apparently.
I’m especially interested in things expressed not as a sequence of symbols from a finite alphabet -
i.e. not over the free monoid - or observed only partially.
This is a boutique interest, although not entirely novel.
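To make the “look at enough trees and you know how to build the rest of the forest” idea concrete, here is a minimal sketch of a design grammar in the L-system style. The rules and axiom here are made up for illustration; inverse procedural modelling is the problem of recovering something like `RULES` from examples of its output.

```python
# A toy L-system: a design grammar whose rewrite rules encode a branching
# "basic plan". Rules and axiom are illustrative, not from any real model.
RULES = {"F": "F[+F]F[-F]F"}  # rewrite 'F' into a branching motif
AXIOM = "F"

def rewrite(s: str, rules: dict, steps: int) -> str:
    """Apply all rewrite rules in parallel for `steps` iterations."""
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

print(rewrite(AXIOM, RULES, 2))
```

Every string this generates shares the same underlying plan; the inference problem runs the arrow backwards, from a corpus of such strings (or the shapes they draw) to the rule set.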

Fun aside:
In the 19th century it was fashionable to think of design as a grammatical thing, although I am not exactly clear on how similar their notion of *grammar* was to mine.

I also care about probabilistic grammars, i.e. assigning measures to these things.
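“Assigning measures to these things” can be made concrete with a toy probabilistic CFG, where each nonterminal’s rule probabilities sum to 1 and a derivation tree’s probability is the product of the rules it uses. The grammar and numbers below are invented for illustration.

```python
import math

# A tiny PCFG: for each nonterminal, rule probabilities sum to 1.
# All symbols and probabilities here are made up.
PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("cats",), 0.6), (("dogs",), 0.4)],
    "VP": [(("sleep",), 0.7), (("bark",), 0.3)],
}

def tree_prob(node):
    """Probability of a derivation tree = product of its rule probabilities."""
    symbol, children = node
    if not children:  # terminal leaf contributes no rule
        return 1.0
    rhs = tuple(child[0] for child in children)
    p = dict(PCFG[symbol])[rhs]
    return p * math.prod(tree_prob(child) for child in children)

# Derivation of "cats bark": 1.0 (S -> NP VP) * 0.6 (NP -> cats) * 0.3 (VP -> bark)
tree = ("S", [("NP", [("cats", [])]), ("VP", [("bark", [])])])
print(tree_prob(tree))
```

The measure this induces over strings is what probabilistic grammatical inference tries to estimate from data.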

Normally design grammars deal with simple languages, such as, say, “regular” languages. I’m interested in things a rung or two up the Chomsky hierarchy - context-free grammars, maybe even context-sensitive ones.
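For the regular-language rung, the classic starting point for inference (as in state-merging algorithms like RPNI) is to build a prefix tree acceptor from positive examples; generalization then proceeds by merging states. Only the prefix-tree construction is sketched here.

```python
# Build a prefix tree acceptor (PTA): a trie-shaped DFA that accepts exactly
# the positive sample strings. State-merging algorithms such as RPNI start
# from this and generalize by collapsing compatible states (not shown).
def build_pta(samples):
    """Return (transitions, accepting) with state 0 as the root."""
    transitions = {}   # (state, symbol) -> state
    accepting = set()
    next_state = 1
    for word in samples:
        state = 0
        for symbol in word:
            if (state, symbol) not in transitions:
                transitions[(state, symbol)] = next_state
                next_state += 1
            state = transitions[(state, symbol)]
        accepting.add(state)
    return transitions, accepting

trans, accept = build_pta(["ab", "abb", "b"])
```

This is the easy rung; the context-free and context-sensitive cases need richer structures (and is where things get interesting).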

See also design grammars, iterated function systems, and my defunct research proposal in this area. Cosma Shalizi’s inevitable mention in 3, 2, 1… go.

A nice position paper by Peter Norvig on Chomsky and statistical versus explanatory models of natural language syntax. Full of sick burns.

Basically, Chomsky says:

> It’s true there’s been a lot of work on trying to apply statistical models to various linguistic problems. I think there have been some successes, but a lot of failures. There is a notion of success … which I think is novel in the history of science. It interprets success as approximating unanalyzed data.

Norvig then says that, actually, opaque, predictive approximations are OK and scientifically interesting. That was in 2011; since then, the scruffy, dense, opaque, predictive models have continued to be ascendant in language processing, particularly transformer networks, which do a disturbingly good job of handling language without an explicit grammar.

## References

*Information and Computation* 75 (2): 87–106.

*Complexity: Hierarchical Structures and Scaling in Physics*. Cambridge Nonlinear Science Series. Cambridge University Press.

*PLoS Biol* 12 (8): e1001934.

*29th International Conference on Machine Learning*.

*Grammatical Inference: Algorithms and Applications*, edited by Arlindo L. Oliveira, 1891:15–24. Berlin, Heidelberg: Springer Berlin Heidelberg.

In *Proceedings of the Thirteenth National Conference on Artificial Intelligence*, 1031–36.

*Statistical Language Learning*. Reprint. A Bradford Book.

*Trends in Cognitive Sciences* 10 (7): 335–44.

*IRE Transactions on Information Theory* 2 (3): 113–24.

*Syntactic Structures*. 2nd ed. Walter de Gruyter.

*Algorithmic Learning Theory*, edited by Sanjay Jain, Hans Simon, and Etsuji Tomita, 3734:283–96. Lecture Notes in Computer Science. Springer Berlin / Heidelberg.

*IJCAI 2020*.

*Proceedings of the 30th International Conference on Machine Learning (ICML-13)*, 1166–74.

*Cognitive Science* 14: 179–211.

*Machine Learning* 7: 195–225.

*Cognition* 48: 71–99.

*Annals of the New York Academy of Sciences* 1016: 153–70.

*arXiv:1405.1533 [Cs, Math, Stat]*, May.

*BioNanoScience* 1 (4): 153–61.

*Information and Control* 10 (5): 447–74.

*Proceedings of the Conference on Uncertainty in Artificial Intelligence*.

*Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing*, 1040:203–16. Lecture Notes in Computer Science. London, UK: Springer-Verlag.

*arXiv:2010.01003 [Cs, Stat]*, October.

*Graph Transformations*, edited by Hartmut Ehrig, Gregor Engels, Francesco Parisi-Presicce, and Grzegorz Rozenberg, 3256:243–46. Lecture Notes in Computer Science. Springer Berlin / Heidelberg.

*Advances in Pattern Recognition*, edited by Francesc Ferri, José Iñesta, Adnan Amin, and Pavel Pudil, 1876:28–31. Lecture Notes in Computer Science. Springer Berlin / Heidelberg.

*Pattern Recognition* 38 (9): 1332–48.

*Grammatical Inference: Learning Automata and Grammars*. Cambridge; New York: Cambridge University Press.

*Implementation and Application of Automata*, edited by Jacques Farré, Igor Litovsky, and Sylvain Schmitz, 3845:345–46. Lecture Notes in Computer Science. Springer Berlin / Heidelberg.

*Introduction to Automata Theory, Languages and Computation*. 1st ed. Addison-Wesley Publishing Company.

*Cognition* 120 (3): 380–90.

*Foundations of Language: Brain, Meaning, Grammar, Evolution*. Oxford University Press, USA.

*PLoS Comput Biol* 7 (3): e1001108.

*Philosophy of Science* 71 (4): 571–92.

*The grammar of ornament*. London: Bernard Quaritch.

*1997 Workshop on Automata Induction Grammatical Inference and Language Acquisition*. Citeseer.

*Algorithmic Learning Theory*, edited by José L. Balcázar, Philip M. Long, and Frank Stephan, 288–303. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg.

*Computer Speech & Language* 4 (1): 35–56.

*IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics* 34 (4): 1658–65.

*Probabilistic Linguistics*, 289–341. Cambridge, MA: MIT Press.

*Foundations of Statistical Natural Language Processing*. Cambridge, Mass.: MIT Press.

*Handbook of ornament; a grammar of art, industrial and architectural designing in all its branches, for practical as well as theoretical use*. New York: B. Hessling.

*Theoretical Computer Science*.

*Science* 291: 114–18.

*Behavioral and Brain Sciences* 13: 707.

*SIGGRAPH Comput. Graph.*, 18:1–10. ACM.

*Neural Networks* 20 (3): 424–32.

*Commun. ACM* 27 (11): 1134–42.

*J. ACM* 56 (1): 3:1–21.

*IEEE Transactions on Pattern Analysis and Machine Intelligence* 27 (7): 1013–25.

*Advances in Neural Information Processing Systems 28*, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 2755–63. Curran Associates, Inc.
