Orthogonally decomposable tensors


I know nothing about orthogonally decomposable tensors, but at a glance they seem to generalise the orthogonal diagonalisation of ordinary linear algebra to multilinear models, with extra orthogonality assumptions on the components that keep the decompositions computationally tractable despite the added generality.
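
For a concrete feel for why the orthogonality assumption helps, here is a minimal NumPy sketch (my own toy example, not taken from any of the papers below; the dimensions, weights, and iteration counts are illustrative assumptions) of the tensor power method analysed in Anandkumar et al. (2015): build a symmetric order-3 odeco tensor from orthonormal vectors, then recover its components by power iteration with deflation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a symmetric odeco tensor T = sum_i lam[i] * v_i (x) v_i (x) v_i,
# where the components v_i are orthonormal (columns of a QR factor).
d, k = 5, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = Q[:, :k]                      # orthonormal components v_1, ..., v_k
lam = np.array([3.0, 2.0, 1.0])   # positive weights
T = np.einsum("i,ai,bi,ci->abc", lam, V, V, V)

def tensor_power_iteration(T, n_iter=100):
    """Find one robust eigenpair of a symmetric order-3 tensor by
    iterating the power map u -> T(I, u, u), normalising each step."""
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = np.einsum("abc,b,c->a", T, u, u)
        u /= np.linalg.norm(u)
    lam_hat = np.einsum("abc,a,b,c->", T, u, u, u)  # eigenvalue T(u, u, u)
    return lam_hat, u

# Deflate: subtract each recovered rank-one term before the next pass,
# so the loop should reproduce the (lam[i], v_i) pairs in some order.
for _ in range(k):
    lam_hat, u = tensor_power_iteration(T)
    print(lam_hat, u)
    T = T - lam_hat * np.einsum("a,b,c->abc", u, u, u)
```

Orthogonality of the v_i is what makes this work: for odeco tensors the power map converges to the components (quadratically fast, per the Anandkumar et al. survey), and deflation is exact; neither holds for general low-rank tensors.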

I am unlikely to return to this, as other types of computational ridiculousness are my main pain points.

Anandkumar, Anima, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2015. “Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT).” In Algorithmic Learning Theory, edited by Kamalika Chaudhuri, Claudio Gentile, and Sandra Zilles, 19–38. Lecture Notes in Computer Science 9355. Springer International Publishing. https://doi.org/10.1007/978-3-319-24486-0_2.

Belkin, Mikhail, Luis Rademacher, and James Voss. 2016. “Basis Learning as an Algorithmic Primitive.” In Proceedings of the 29th Conference on Learning Theory (COLT), 446–87. http://www.jmlr.org/proceedings/papers/v49/belkin16.html.

Rabusseau, Guillaume, and François Denis. 2014. “Learning Negative Mixture Models by Tensor Decompositions.” March 17, 2014. http://arxiv.org/abs/1403.4224.

Robeva, Elina. 2016. “Orthogonal Decomposition of Symmetric Tensors.” SIAM Journal on Matrix Analysis and Applications 37 (1): 86–102. https://doi.org/10.1137/140989340.

Robeva, Elina, and Anna Seigal. 2016. “Singular Vectors of Orthogonally Decomposable Tensors.” March 29, 2016. http://arxiv.org/abs/1603.09004.

Tenenbaum, J. B., and W. T. Freeman. 2000. “Separating Style and Content with Bilinear Models.” Neural Computation 12 (6): 1247–83. https://doi.org/10.1162/089976600300015349.