Generalise your usual linear regression to multilinear regression, i.e. regression where the covariates and/or coefficients are tensors rather than vectors. A useful tool here is tensor decompositions, which give low-rank parameterisations of the coefficient tensor. TensorLy (Kossaifi et al. 2019) is, I think, the main implementation of note.
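To make the idea concrete, here is a minimal NumPy-only sketch (not TensorLy's API) of the simplest case: matrix-valued covariates \(X_i\) with a rank-1 coefficient matrix \(B = u v^\top\), fitted by alternating least squares. Fixing \(v\), the prediction \(\langle X_i, u v^\top\rangle = (X_i v)^\top u\) is ordinary least squares in \(u\), and symmetrically for \(v\). The function name and setup are my own illustration; TensorLy ships higher-rank versions of this idea as its CP and Tucker regression estimators.

```python
import numpy as np

def rank1_tensor_regression(X, y, n_iter=50, seed=0):
    """Fit y_i ~ <X_i, u v^T> with a rank-1 coefficient matrix via ALS.

    X has shape (n_samples, p, q); y has shape (n_samples,).
    Illustrative sketch only -- no convergence checks or regularisation.
    """
    rng = np.random.default_rng(seed)
    n, p, q = X.shape
    u = rng.standard_normal(p)
    v = rng.standard_normal(q)
    for _ in range(n_iter):
        # With v fixed, predictions are (X_i v) . u: a linear model in u.
        A = X @ v                          # shape (n, p)
        u, *_ = np.linalg.lstsq(A, y, rcond=None)
        # With u fixed, predictions are (X_i^T u) . v: a linear model in v.
        B = np.einsum('ipq,p->iq', X, u)   # shape (n, q)
        v, *_ = np.linalg.lstsq(B, y, rcond=None)
    return u, v

# Synthetic check: recover a planted rank-1 coefficient from noiseless data.
rng = np.random.default_rng(1)
u_true, v_true = rng.standard_normal(4), rng.standard_normal(3)
X = rng.standard_normal((200, 4, 3))
y = np.einsum('ipq,p,q->i', X, u_true, v_true)
u, v = rank1_tensor_regression(X, y)
y_hat = np.einsum('ipq,p,q->i', X, u, v)
print(np.allclose(y, y_hat, atol=1e-5))
```

Note the individual factors are only identified up to rescaling (\(u v^\top = (cu)(v/c)^\top\)), so it is the fitted predictions, not \(u\) and \(v\) themselves, that one should compare.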
References
Anandkumar, Anima, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2015. “Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT).” In Algorithmic Learning Theory, edited by Kamalika Chaudhuri, Claudio Gentile, and Sandra Zilles, 19–38. Lecture Notes in Computer Science. Springer International Publishing.
Anandkumar, Animashree, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2014. “Tensor Decompositions for Learning Latent Variable Models.” The Journal of Machine Learning Research 15 (1): 2773–2832.
Bi, Xuan, Xiwei Tang, Yubai Yuan, Yanqing Zhang, and Annie Qu. 2021. “Tensors in Statistics.” Annual Review of Statistics and Its Application 8 (1): 345–68.
Cui, Tiangang, and Sergey Dolgov. 2022. “Deep Composition of Tensor-Trains Using Squared Inverse Rosenblatt Transports.” Foundations of Computational Mathematics 22 (6): 1863–1922.
Kossaifi, Jean, Yannis Panagakis, Anima Anandkumar, and Maja Pantic. 2019. “TensorLy: Tensor Learning in Python.” Journal of Machine Learning Research 20 (26): 1–6.