Transformer networks and causality
December 20, 2017 — March 21, 2023
Tags: language, machine learning, meta learning, neural nets, NLP, stringology, time series
A placeholder for exploring the idea that transformers, or models like them, might be capable of genuine, general-purpose causal inference — as opposed to merely reciting causal claims seen in training data.
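To make the stakes concrete, here is a toy numerical sketch (not from any of the papers below; the structural causal model and coefficients are invented for illustration) of the gap a sequence model would need to bridge: in a confounded system, the observational quantity E[Y | X = x] differs from the interventional quantity E[Y | do(X = x)], and a model trained purely on observational sequences sees only the former.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Invented toy SCM with a confounder Z:  Z -> X,  Z -> Y,  X -> Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)                   # X := Z + noise
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # Y := 2X + 3Z + noise

# Observational estimate: condition on X near 1. The confounding path
# through Z leaks in, so theory gives E[Y | X = 1] = 2 + 3 * E[Z | X=1] = 3.5.
obs = y[np.abs(x - 1.0) < 0.05].mean()

# Interventional estimate: do(X = 1) severs the Z -> X edge, so Z stays
# at its marginal distribution and theory gives E[Y | do(X=1)] = 2.
z_new = rng.normal(size=n)
y_do = 2.0 * 1.0 + 3.0 * z_new + rng.normal(size=n)
do = y_do.mean()

print(f"E[Y | X=1]     ≈ {obs:.2f}")
print(f"E[Y | do(X=1)] ≈ {do:.2f}")
```

Any "causal transformer" worth the name has to produce the second number when asked an interventional question, despite being trained on data that only directly exhibits the first — which is roughly the delusion problem Ortega et al. (2021) diagnose in sequence models.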
1 Incoming
2 References
Guo, Cheng, Li, et al. 2020. “A Survey of Learning Causality with Data: Problems and Methods.” ACM Computing Surveys.
Melnychuk, Frauen, and Feuerriegel. 2022. “Causal Transformer for Estimating Counterfactual Outcomes.” In Proceedings of the 39th International Conference on Machine Learning.
Ortega, Kunesch, Delétang, et al. 2021. “Shaking the Foundations: Delusions in Sequence Models for Interaction and Control.” arXiv:2110.10819 [Cs].
Willig, Zečević, Dhami, et al. 2022. “Can Foundation Models Talk Causality?”
Zečević, Willig, Dhami, et al. 2023. “Causal Parrots: Large Language Models May Talk Causality But Are Not Causal.” Transactions on Machine Learning Research.