Article

Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors

Journal

JOURNAL OF SCIENTIFIC COMPUTING
Volume 87, Issue 3

Publisher

SPRINGER/PLENUM PUBLISHERS
DOI: 10.1007/s10915-021-01493-0

Keywords

Low multilinear rank approximation; Truncated Tucker decomposition; Alternating least squares; Parallelization

Funding

  1. Guangdong Key R&D Project [2019B121204008]
  2. Beijing Natural Science Foundation [JQ18001]
  3. Beijing Academy of Artificial Intelligence

Abstract

The low multilinear rank approximation, also known as the truncated Tucker decomposition, has been extensively utilized in many applications that involve higher-order tensors. Popular methods for low multilinear rank approximation usually rely directly on the matrix SVD and therefore often suffer from the notorious intermediate data explosion issue; they are also not easy to parallelize, especially when the input tensor is large. In this paper, we propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors. The proposed ALS-based approaches eliminate the redundant computation of the singular vectors of intermediate matrices and are therefore free of the data explosion issue. In addition, the new methods are more flexible, offering an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis reveals that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods can substantially reduce the total cost of the original ones and are highly scalable for parallel computing.
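To make the alternating framework concrete, the following is a minimal, hypothetical NumPy sketch of a HOOI-style loop that alternately updates the factor matrices of a truncated Tucker decomposition. It is not the paper's algorithm: for clarity, each factor update here takes the dominant subspace from a thin SVD of the projected unfolding, whereas the paper's contribution is precisely to replace such SVD steps with cheaper ALS iterations. All function names and parameters below are illustrative.

```python
# Illustrative sketch (NOT the paper's ALS scheme): a HOOI-style alternating
# loop for the low multilinear rank approximation (truncated Tucker
# decomposition) of a dense tensor. Names and defaults are hypothetical.
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a dense tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(T, factors, skip=None):
    """Apply U_k^T along every mode k, optionally skipping one mode."""
    for k, U in enumerate(factors):
        if k == skip:
            continue
        T = np.moveaxis(np.tensordot(U.T, T, axes=(1, k)), 0, k)
    return T

def truncated_tucker_als(T, ranks, iters=20, tol=1e-8):
    """Alternately update the factor matrices U_1, ..., U_d.

    Each update orthonormalizes the dominant subspace of the projected
    mode-n unfolding; a thin SVD is used here purely for clarity.
    """
    d = T.ndim
    rng = np.random.default_rng(0)
    # Random orthonormal initial factors.
    factors = [np.linalg.qr(rng.standard_normal((T.shape[n], ranks[n])))[0]
               for n in range(d)]
    prev_fit = None
    for _ in range(iters):
        for n in range(d):
            # Project T onto the current subspaces of all modes except n.
            Y = multi_mode_product(T, factors, skip=n)
            # Dominant left singular subspace of the mode-n unfolding of Y.
            Un, _, _ = np.linalg.svd(unfold(Y, n), full_matrices=False)
            factors[n] = Un[:, :ranks[n]]
        core = multi_mode_product(T, factors)
        fit = np.linalg.norm(core) / np.linalg.norm(T)  # captured norm fraction
        if prev_fit is not None and abs(fit - prev_fit) < tol:
            break
        prev_fit = fit
    return core, factors

# Example: rank-(5, 5, 5) approximation of a random 40 x 40 x 40 tensor.
T = np.random.default_rng(1).standard_normal((40, 40, 40))
core, factors = truncated_tucker_als(T, ranks=(5, 5, 5))
```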
