Article

Matrix-wise ℓ0-constrained sparse nonnegative least squares

Journal

MACHINE LEARNING
Volume 111, Issue 12, Pages 4453-4495

Publisher

SPRINGER
DOI: 10.1007/s10994-022-06260-2

Keywords

Nonnegative least squares; Sparsity; Nonnegative matrix factorization

Funding

  1. European Research Council (ERC) [679515]
  2. Fonds de la Recherche Scientifique - FNRS
  3. Fonds Wetenschappelijk Onderzoek - Vlaanderen (FWO) under EOS [O005318F-RG47]
  4. ANR JCJC grant [LoRAiA ANR-20-CE23-0010]

Abstract

This paper introduces a new form of sparse MNNLS problem and a two-step algorithm to solve it. By dividing the problem into subproblems and selecting Pareto front solutions, a matrix that satisfies the sparsity constraint is constructed. Experimental results show that this method is more accurate than existing heuristic algorithms.
Nonnegative least squares problems with multiple right-hand sides (MNNLS) arise in models that rely on additive linear combinations. In particular, they are at the core of most nonnegative matrix factorization algorithms and have many applications. The nonnegativity constraint is known to naturally favor sparsity, that is, solutions with few nonzero entries. However, it is often useful to further enhance this sparsity, as it improves the interpretability of the results and helps reduce noise, which leads to the sparse MNNLS problem. In this paper, as opposed to most previous works that enforce sparsity column- or row-wise, we first introduce a novel formulation for sparse MNNLS, with a matrix-wise sparsity constraint. Then, we present a two-step algorithm to tackle this problem. The first step divides sparse MNNLS into subproblems, one per column of the original problem. It then uses different algorithms to produce, either exactly or approximately, a Pareto front for each subproblem, that is, a set of solutions representing different tradeoffs between reconstruction error and sparsity. The second step selects solutions among these Pareto fronts in order to build a sparsity-constrained matrix that minimizes the reconstruction error. We perform experiments on facial and hyperspectral images, and we show that our proposed two-step approach provides more accurate results than state-of-the-art sparse coding heuristics applied both column-wise and globally.
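The second step described above, choosing one solution per column from the per-column Pareto fronts so that the total number of nonzeros stays within a matrix-wise budget while the summed reconstruction error is minimized, is an instance of the multiple-choice knapsack problem. As a rough illustration only (the function name, the data layout, and the dynamic-programming formulation below are assumptions, not the authors' actual implementation), such a selection could be sketched as:

```python
def select_pareto(fronts, q):
    """Pick one candidate per column so the total nonzero count is at most q
    and the summed reconstruction error is minimized (a multiple-choice
    knapsack, solved here by dynamic programming over the budget).

    fronts: fronts[j] is the Pareto front of column j, a list of
            (k, error) pairs where k is the candidate's nonzero count.
    q:      matrix-wise sparsity budget (maximum total nonzeros).
    Returns the chosen nonzero count k_j for each column.
    """
    INF = float("inf")
    # best[b] = minimal total error over processed columns using budget exactly b
    best = [0.0] + [INF] * q
    choices = []  # choices[j][b] = k picked for column j when used budget is b
    for front in fronts:
        new = [INF] * (q + 1)
        pick = [None] * (q + 1)
        for b in range(q + 1):
            for k, err in front:
                if k <= b and best[b - k] + err < new[b]:
                    new[b] = best[b - k] + err
                    pick[b] = k
        best = new
        choices.append(pick)
    # backtrack from the cheapest feasible total budget
    b = min(range(q + 1), key=lambda i: best[i])
    ks = []
    for pick in reversed(choices):
        ks.append(pick[b])
        b -= pick[b]
    ks.reverse()
    return ks
```

For example, with two columns whose (hypothetical) fronts are `[(0, 10.0), (1, 2.0), (2, 0.5)]` and `[(0, 8.0), (1, 3.0)]` and a budget of q = 2, the selection allots one nonzero to each column, for a total error of 5.0. Including the all-zero candidate (k = 0) in every front guarantees feasibility for any budget.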

