Orthogonally decomposable tensors
I know nothing about orthogonally decomposable tensors, but at a glance they seem to generalise familiar linear algebra in a way that is useful for statistical inference in mixture models, while remaining more computationally tractable than garden-variety tensor methods. If that is indeed so, it would be handy.
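To make the tractability claim concrete, here is a minimal numpy sketch, my own illustration rather than code from any of the papers below, of the tensor power iteration in the style surveyed by Anandkumar et al. (2015): for a symmetric order-3 tensor with an orthogonal decomposition, repeatedly applying the map u ↦ T(I, u, u) and normalising converges to one of the orthonormal components, and deflation recovers the rest. All names and parameters here (`d`, `k`, `power_iteration`, the iteration count) are my own choices for the sketch.

```python
# Sketch: recovering the components of an orthogonally decomposable
# (odeco) symmetric order-3 tensor T = sum_i w_i * v_i (x) v_i (x) v_i,
# where the v_i are orthonormal, via tensor power iteration + deflation.
import numpy as np

rng = np.random.default_rng(0)
d, k = 5, 3  # ambient dimension and number of components (illustrative)

# Build an odeco tensor from k random orthonormal vectors and positive weights.
V = np.linalg.qr(rng.standard_normal((d, k)))[0]  # columns are orthonormal
w = rng.uniform(1.0, 2.0, size=k)
T = np.einsum("i,ai,bi,ci->abc", w, V, V, V)

def power_iteration(T, n_iter=100):
    """Approximate one robust eigenvector/eigenvalue pair of T."""
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = np.einsum("abc,b,c->a", T, u, u)  # the map u -> T(I, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum("abc,a,b,c->", T, u, u, u)  # Rayleigh-quotient analogue
    return u, lam

# Recover the components one by one, deflating after each.
for _ in range(k):
    u, lam = power_iteration(T)
    T = T - lam * np.einsum("a,b,c->abc", u, u, u)
    # Each recovered u should align with some column of V (alignment near 1).
    print(f"recovered weight {lam:.3f}, alignment {np.max(np.abs(V.T @ u)):.6f}")
```

In coordinates c_i = ⟨v_i, u⟩ the map sends c_i ↦ w_i c_i², so the iteration converges quadratically to whichever component dominates at initialisation; that is the sense in which the odeco case is cheap compared with general tensor decomposition.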

Anandkumar, Anima, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. 2015. “Tensor Decompositions for Learning Latent Variable Models (A Survey for ALT).” In Algorithmic Learning Theory, edited by Kamalika Chaudhuri, Claudio Gentile, and Sandra Zilles, 19–38. Lecture Notes in Computer Science 9355. Springer International Publishing. https://doi.org/10.1007/978-3-319-24486-0_2.

Belkin, Mikhail, Luis Rademacher, and James Voss. 2016. “Basis Learning as an Algorithmic Primitive.” In Proceedings of the 29th Conference on Learning Theory (COLT), 446–87. http://www.jmlr.org/proceedings/papers/v49/belkin16.html.

Rabusseau, Guillaume, and François Denis. 2014. “Learning Negative Mixture Models by Tensor Decompositions,” March. http://arxiv.org/abs/1403.4224.

Robeva, Elina. 2016. “Orthogonal Decomposition of Symmetric Tensors.” SIAM Journal on Matrix Analysis and Applications 37 (1): 86–102. https://doi.org/10.1137/140989340.

Robeva, Elina, and Anna Seigal. 2016. “Singular Vectors of Orthogonally Decomposable Tensors,” March. http://arxiv.org/abs/1603.09004.

Tenenbaum, J. B., and W. T. Freeman. 2000. “Separating Style and Content with Bilinear Models.” Neural Computation 12 (6): 1247–83. https://doi.org/10.1162/089976600300015349.