# Neural nets that do symbolic maths

December 9, 2019 — June 14, 2023

compsci · language · machine learning · meta learning · networks · neural nets · NLP · stringology

Somewhere between computational symbolic mathematics, automated proof assistants, and modern large language models lie models that can solve mathematical problems more effectively than my feeble brain can.
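One landmark in this space is Lample and Charton (2019), who treat symbolic mathematics as sequence-to-sequence "translation": expression trees are serialized into prefix-notation token sequences that a transformer can consume. A minimal sketch of that serialization step, with an illustrative nested-tuple tree format and token names of my own (not their exact vocabulary):

```python
# Sketch of the representation idea in Lample & Charton (2019):
# flatten a symbolic expression tree into a prefix-notation token
# sequence, so symbolic maths becomes a seq2seq problem.
# The tuple-tree encoding and operator names here are illustrative.

def to_prefix(node):
    """Flatten an expression tree (nested tuples) into prefix tokens."""
    if isinstance(node, tuple):
        op, *args = node
        tokens = [op]
        for arg in args:
            tokens.extend(to_prefix(arg))
        return tokens
    return [str(node)]

# x**2 * sin(x), written as a nested-tuple expression tree:
expr = ("mul", ("pow", "x", 2), ("sin", "x"))
print(to_prefix(expr))
# ['mul', 'pow', 'x', '2', 'sin', 'x']
```

Prefix notation is unambiguous without parentheses, which keeps the token vocabulary small and the sequences easy for a model to parse back into trees.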

Watch this space.

## 1 Incoming

- We may finally crack Maths. But should we?
- Improving Mathematical Reasoning with Process Supervision
- FranxYao/chain-of-thought-hub: benchmarking large language models’ complex reasoning ability with chain-of-thought prompting (Fu et al. 2023)
- Towards Complex Reasoning: the Polaris of Large Language Models

## 2 References

Bubeck, Chandrasekaran, Eldan, et al. 2023. “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”

Clark, Tafjord, and Richardson. 2020. “Transformers as Soft Reasoners over Language.” In *IJCAI 2020*.

Fu, Ou, Chen, et al. 2023. “Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models’ Reasoning Performance.”

Garcez, and Lamb. 2020. “Neurosymbolic AI: The 3rd Wave.”

Lample, and Charton. 2019. “Deep Learning for Symbolic Mathematics.” *arXiv:1912.01412 [cs]*.

Radford, Wu, Child, et al. 2019. “Language Models Are Unsupervised Multitask Learners.”

Zhang, Backurs, Bubeck, et al. 2022. “Unveiling Transformers with LEGO: A Synthetic Reasoning Task.”