Somewhere between computational symbolic mathematics, automated proof assistants, and modern large language models lie models that can solve mathematical problems more effectively than my feeble brain can.
Watch this space.
Incoming
- We may finally crack Maths. But should we?
- Improving Mathematical Reasoning with Process Supervision
- FranxYao/chain-of-thought-hub: Benchmarking large language models’ complex reasoning ability with chain-of-thought prompting (Fu et al. 2023); see the sketch after this list
- Towards Complex Reasoning: the Polaris of Large Language Models
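For orientation, here is a rough illustration of what few-shot chain-of-thought prompting looks like for a toy arithmetic word problem. This is my own sketch, not code from the chain-of-thought-hub repository; `ask_model` is a hypothetical placeholder for whatever LLM API you actually call, and the prompt/answer-extraction conventions are just one common pattern.

```python
# Minimal sketch of few-shot chain-of-thought prompting for arithmetic word problems.
# Only plain string handling here; the LLM call itself is a hypothetical placeholder.

FEW_SHOT_EXAMPLES = """\
Q: Tom has 3 boxes with 4 apples in each box. He gives away 5 apples. How many apples does he have left?
A: Tom starts with 3 * 4 = 12 apples. After giving away 5, he has 12 - 5 = 7 apples. The answer is 7.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model is nudged to reason step by step."""
    return f"{FEW_SHOT_EXAMPLES}\nQ: {question}\nA:"

def extract_answer(completion: str) -> str:
    """Pull out whatever follows 'The answer is', a common convention for scoring."""
    marker = "The answer is"
    if marker in completion:
        return completion.split(marker)[-1].strip(" .\n")
    return completion.strip()

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A train travels 60 km in the first hour and 40 km in the second hour. "
        "How far does it travel in total?"
    )
    print(prompt)
    # completion = ask_model(prompt)   # hypothetical LLM call; swap in your API of choice
    # print(extract_answer(completion))
```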
References
Bubeck, Sébastien, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.” arXiv.
Clark, Peter, Oyvind Tafjord, and Kyle Richardson. 2020. “Transformers as Soft Reasoners over Language.” In IJCAI 2020.
Fu, Yao, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. 2023. “Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models’ Reasoning Performance.”
Garcez, Artur d’Avila, and Luis C. Lamb. 2020. “Neurosymbolic AI: The 3rd Wave.” arXiv.
Lample, Guillaume, and François Charton. 2019. “Deep Learning for Symbolic Mathematics.” arXiv:1912.01412 [cs].
Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners.”
Zhang, Yi, Arturs Backurs, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, and Tal Wagner. 2022. “Unveiling Transformers with LEGO: A Synthetic Reasoning Task.” arXiv.