Journal
IEEE TRANSACTIONS ON SIGNAL PROCESSING
Volume 64, Issue 19, Pages 5052-5065
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TSP.2016.2576427
Keywords
Constrained matrix/tensor factorization; non-negative matrix/tensor factorization; canonical polyadic decomposition; PARAFAC; matrix/tensor completion; dictionary learning; alternating optimization; alternating direction method of multipliers
Funding
- NSF [IIS-1247632, IIS-1447788]
- UM Informatics Institute fellowship
- Directorate for Computer & Information Science & Engineering, Division of Information & Intelligent Systems [1447788, 1247632] (Funding Source: National Science Foundation)
Abstract
We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn, using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all possible loss measures for the fitting. Computation caching and warm-start strategies are used to ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods, which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Three special cases are studied in detail: non-negative matrix/tensor factorization, constrained matrix/tensor completion, and dictionary learning. Extensive simulations and experiments with real data are used to showcase the effectiveness and broad applicability of the proposed framework.
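To make the AO-ADMM idea concrete, here is a minimal sketch for one of the special cases the abstract mentions, non-negative matrix factorization with least-squares loss. Each factor is updated in turn by a short inner ADMM loop that splits the least-squares term from the non-negativity constraint, with the Gram matrix cached and the dual variables warm-started across outer iterations. Function names (`admm_update`, `ao_admm_nmf`), the inner/outer iteration counts, and the penalty heuristic are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def admm_update(X, W, H, U, n_inner=5):
    """Inner ADMM solve of min_H 0.5*||X - W @ H||_F^2 s.t. H >= 0.

    Splitting: Z carries the least-squares term, H the constraint,
    with consensus constraint Z = H and scaled dual U.
    """
    k = W.shape[1]
    G = W.T @ W                    # cached Gram matrix
    F = W.T @ X                    # cached cross-product
    rho = np.trace(G) / k          # assumed penalty heuristic
    A = G + rho * np.eye(k)        # its Cholesky factor could be cached too
    for _ in range(n_inner):
        Z = np.linalg.solve(A, F + rho * (H - U))  # least-squares step
        H = np.maximum(0.0, Z + U)                 # projection onto H >= 0
        U = U + Z - H                              # dual (multiplier) update
    return H, U

def ao_admm_nmf(X, k, n_outer=200, seed=0):
    """Outer alternating optimization: update H and W in turn via ADMM."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    UW = np.zeros_like(W)          # warm-started dual variables
    UH = np.zeros_like(H)
    for _ in range(n_outer):
        H, UH = admm_update(X, W, H, UH)             # H-update, W fixed
        Wt, UWt = admm_update(X.T, H.T, W.T, UW.T)   # W-update via transpose
        W, UW = Wt.T, UWt.T
    return W, H
```

Swapping the projection step for a different proximal operator (e.g. soft-thresholding for an l1 penalty) is what lets the same skeleton handle the other constraint types the framework covers.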