Angluin, Dana. 1988. “Identifying Languages from Stochastic Examples.” Technical Report YALEU/DCS/RR-614, Yale University.
Arisoy, Ebru, Tara N. Sainath, Brian Kingsbury, and Bhuvana Ramabhadran. 2012. “Deep Neural Network Language Models.” In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-Gram Model? On the Future of Language Modeling for HLT, 20–28. WLM ’12. Montreal, Canada: Association for Computational Linguistics.
Autebert, Jean-Michel, Jean Berstel, and Luc Boasson. 1997. “Context-Free Languages and Pushdown Automata.” In Handbook of Formal Languages, Vol. 1, edited by Grzegorz Rozenberg and Arto Salomaa, 111–74. New York, NY, USA: Springer-Verlag.
Baeza-Yates, Ricardo, and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. 1st ed. Addison Wesley.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM.
Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. “A Neural Probabilistic Language Model.” Journal of Machine Learning Research 3 (Feb): 1137–55.
Berstel, Jean, and Luc Boasson. 1990. “Transductions and Context-Free Languages.” In Handbook of Theoretical Computer Science, Vol. A: Algorithms and Complexity, edited by J. van Leeuwen, Albert R. Meyer, M. Nivat, Matthew Paterson, and D. Perrin, 1–278.
Blazek, Paul J., and Milo M. Lin. 2020. “A Neural Network Model of Perception and Reasoning.” arXiv:2002.11319 [Cs, q-Bio], February.
Bolhuis, Johan J., Ian Tattersall, Noam Chomsky, and Robert C. Berwick. 2014. “How Could Language Have Evolved?” PLoS Biology 12 (8): e1001934.
Booth, Taylor L., and R. A. Thompson. 1973. “Applying Probability Measures to Abstract Languages.” IEEE Transactions on Computers C-22 (5): 442–50.
Bottou, Léon. 2011. “From Machine Learning to Machine Reasoning.” arXiv:1102.1808 [Cs], February.
Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” arXiv:2005.14165 [Cs], June.
Casacuberta, Francisco, and Colin de la Higuera. 2000. “Computational Complexity of Problems on Probabilistic Grammars and Transducers.” In Grammatical Inference: Algorithms and Applications, edited by Arlindo L. Oliveira, 1891:15–24. Berlin, Heidelberg: Springer Berlin Heidelberg.
Charniak, Eugene. 1996. Statistical Language Learning. Reprint. A Bradford Book.
Chater, Nick, and Christopher D. Manning. 2006. “Probabilistic Models of Language Processing and Acquisition.” Trends in Cognitive Sciences 10 (7): 335–44.
Cho, Kyunghyun, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. “Learning Phrase Representations Using RNN Encoder-Decoder for Statistical Machine Translation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Clark, Alexander, and Rémi Eyraud. 2005. “Identification in the Limit of Substitutable Context-Free Languages.” In Algorithmic Learning Theory, edited by Sanjay Jain, Hans Simon, and Etsuji Tomita, 3734:283–96. Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Clark, Alexander, Christophe Costa Florêncio, and Chris Watkins. 2006. “Languages as Hyperplanes: Grammatical Inference with String Kernels.” In Machine Learning: ECML 2006, edited by Johannes Fürnkranz, Tobias Scheffer, and Myra Spiliopoulou, 90–101. Lecture Notes in Computer Science 4212. Springer Berlin Heidelberg.
Clark, Alexander, Christophe Costa Florêncio, Chris Watkins, and Mariette Serayet. 2006. “Planar Languages and Learnability.” In Grammatical Inference: Algorithms and Applications, edited by Yasubumi Sakakibara, Satoshi Kobayashi, Kengo Sato, Tetsuro Nishino, and Etsuji Tomita, 148–60. Lecture Notes in Computer Science 4201. Springer Berlin Heidelberg.
Collins, Michael, and Nigel Duffy. 2002. “Convolution Kernels for Natural Language.” In Advances in Neural Information Processing Systems 14, edited by T. G. Dietterich, S. Becker, and Z. Ghahramani, 625–32. MIT Press.
Gold, E. Mark. 1967. “Language Identification in the Limit.” Information and Control 10 (5): 447–74.
Gonzalez, R. C., and M. G. Thomason. 1978. Syntactic Pattern Recognition: An Introduction. Addison Wesley Publishing Company.
Grefenstette, Edward, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. “Learning to Transduce with Unbounded Memory.” arXiv:1506.02516 [Cs], June.
Hopcroft, John E., and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages and Computation. 1st ed. Addison-Wesley Publishing Company.
Khalifa, Ahmed, Gabriella A. B. Barros, and Julian Togelius. 2019. “DeepTingle.” arXiv.
Kontorovich, Leonid (Aryeh), Corinna Cortes, and Mehryar Mohri. 2008. “Kernel Methods for Learning Languages.” Theoretical Computer Science, Algorithmic Learning Theory, 405 (3): 223–36.
Kontorovich, Leonid, Corinna Cortes, and Mehryar Mohri. 2006. “Learning Linearly Separable Languages.” In Algorithmic Learning Theory, edited by José L. Balcázar, Philip M. Long, and Frank Stephan, 288–303. Lecture Notes in Computer Science 4264. Springer Berlin Heidelberg.
Lafferty, John D., Andrew McCallum, and Fernando C. N. Pereira. 2001. “Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data.” In Proceedings of the Eighteenth International Conference on Machine Learning, 282–89. ICML ’01. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Lamb, Luis C., Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, and Moshe Vardi. 2020. “Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective.” In IJCAI 2020.
Lipton, Zachary C., John Berkowitz, and Charles Elkan. 2015. “A Critical Review of Recurrent Neural Networks for Sequence Learning.” arXiv:1506.00019 [Cs], May.
Manning, Christopher D. 2002. “Probabilistic Syntax.” In Probabilistic Linguistics, 289–341. Cambridge, MA: MIT Press.
Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.
Manning, Christopher D., and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press.
Mikolov, Tomáš, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. “Recurrent Neural Network Based Language Model.” In Eleventh Annual Conference of the International Speech Communication Association.
Mikolov, Tomas, Quoc V. Le, and Ilya Sutskever. 2013. “Exploiting Similarities Among Languages for Machine Translation.” arXiv:1309.4168 [Cs], September.
Mitra, Bhaskar, and Nick Craswell. 2017. “Neural Models for Information Retrieval.” arXiv:1705.01509 [Cs], May.
Mohri, Mehryar, Fernando Pereira, and Michael Riley. 1996. “Weighted Automata in Text and Speech Processing.” In Proceedings of the 12th Biennial European Conference on Artificial Intelligence (ECAI-96), Workshop on Extended Finite State Models of Language. Budapest, Hungary: John Wiley and Sons, Chichester.
———. 2002. “Weighted Finite-State Transducers in Speech Recognition.” Computer Speech & Language 16 (1): 69–88.
O’Donnell, Timothy J., Joshua B. Tenenbaum, and Noah D. Goodman. 2009. “Fragment Grammars: Exploring Computation and Reuse in Language,” March.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. “GloVe: Global Vectors for Word Representation.” In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014).
Pillutla, Krishna, Lang Liu, John Thickstun, Sean Welleck, Swabha Swayamdipta, Rowan Zellers, Sewoong Oh, Yejin Choi, and Zaid Harchaoui. 2022. “MAUVE Scores for Generative Models: Theory and Practice.” arXiv.
Qi, Peng, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. “Stanza: A Python Natural Language Processing Toolkit for Many Human Languages.” arXiv:2003.07082 [Cs], March.
Salakhutdinov, Ruslan. 2015. “Learning Deep Generative Models.” Annual Review of Statistics and Its Application 2 (1): 361–85.
Schlag, Imanol, and Jürgen Schmidhuber. 2019. “Learning to Reason with Third-Order Tensor Products.” arXiv:1811.12143 [Cs, Stat], January.
Solan, Zach, David Horn, Eytan Ruppin, and Shimon Edelman. 2005. “Unsupervised Learning of Natural Languages.” Proceedings of the National Academy of Sciences of the United States of America 102 (33): 11629–34.
Sutton, Charles, Andrew McCallum, and Khashayar Rohanimanesh. 2007. “Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data.” Journal of Machine Learning Research 8 (May): 693–723.
Wetherell, C. S. 1980. “Probabilistic Languages: A Review and Some Open Questions.” ACM Computing Surveys 12 (4): 361–79.
Wolff, J. Gerard. 2000. “Syntax, Parsing and Production of Natural Language in a Framework of Information Compression by Multiple Alignment, Unification and Search.” Journal of Universal Computer Science 6 (8): 781–829.