Article

LRR-Net: An Interpretable Deep Unfolding Network for Hyperspectral Anomaly Detection

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)

DOI: 10.1109/TGRS.2023.3279834

Keywords

Hyperspectral imaging; Feature extraction; Deep learning; Optimization; Anomaly detection; Generative adversarial networks; Training; Alternating direction method of multipliers (ADMM); artificial intelligence; deep unfolding; hyperspectral image; interpretability; low-rank representation (LRR); sparse representation


This article proposes a new baseline network, LRR-Net, for hyperspectral anomaly detection (HAD). It combines the low-rank representation (LRR) model with deep learning to overcome the limitations of manual parameter selection and subpar generalization. Empirical evaluations on eight distinct datasets demonstrate its efficacy and superiority over state-of-the-art methods, and a sparse neural network embedding demonstrates the scalability of the LRR-Net framework.
Considerable endeavors have been expended toward enhancing the representation performance for hyperspectral anomaly detection (HAD) through physical model-based methods and recent deep learning-based approaches. Of these methods, the low-rank representation (LRR) model is widely adopted for its formidable ability to separate background and target features; however, its practical applications are limited by the reliance on manual parameter selection and subpar generalization performance. To this end, this article presents a new HAD baseline network, referred to as LRR-Net, which synergizes the LRR model with deep learning techniques. LRR-Net leverages the alternating direction method of multipliers (ADMM) optimizer to solve the LRR model efficiently and incorporates the solution as prior knowledge into the deep network to guide the optimization of parameters. Moreover, LRR-Net transforms the regularized parameters into trainable parameters of the deep neural network, thus alleviating the need for manual parameter tuning. Additionally, this article proposes a sparse neural network embedding to demonstrate the scalability of the LRR-Net framework. Empirical evaluations on eight distinct datasets illustrate the efficacy and superiority of the proposed approach compared to state-of-the-art methods.
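To make the ADMM prior concrete, the following is a minimal numpy sketch of the classical ADMM iterations for the LRR model min ||J||_* + λ||E||_{2,1} s.t. X = XZ + E, Z = J, which deep unfolding turns into network layers (in LRR-Net, parameters such as λ and μ would become trainable; here they are fixed). All function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of the l2,1 norm,
    which zeroes out columns with small energy (non-anomalous pixels)."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    return M * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def lrr_admm(X, lam=0.1, mu=1e-2, rho=1.1, mu_max=1e6, n_iter=50):
    """Classical ADMM for LRR; each loop body corresponds to one unfolded layer.
    X: (bands x pixels) data matrix. Returns low-rank coefficients Z and
    the sparse residual E, whose column norms score anomalies."""
    n = X.shape[1]
    Z = np.zeros((n, n)); J = np.zeros((n, n))
    E = np.zeros_like(X)
    Y1 = np.zeros_like(X); Y2 = np.zeros((n, n))   # Lagrange multipliers
    XtX = X.T @ X
    I = np.eye(n)
    for _ in range(n_iter):
        # J-update: nuclear-norm proximal step
        J = svt(Z + Y2 / mu, 1.0 / mu)
        # Z-update: closed-form least squares from the augmented Lagrangian
        Z = np.linalg.solve(I + XtX,
                            X.T @ (X - E) + J + (X.T @ Y1 - Y2) / mu)
        # E-update: l2,1 proximal step on the reconstruction residual
        E = l21_shrink(X - X @ Z + Y1 / mu, lam / mu)
        # Dual ascent on the two constraints, then increase the penalty
        Y1 = Y1 + mu * (X - X @ Z - E)
        Y2 = Y2 + mu * (Z - J)
        mu = min(rho * mu, mu_max)
    return Z, E

# Synthetic check: a low-rank background with two corrupted (anomalous) columns.
rng = np.random.default_rng(0)
B = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 40))  # rank-3 background
X = B.copy()
X[:, [5, 17]] += rng.normal(scale=2.0, size=(30, 2))     # anomalous pixels
Z, E = lrr_admm(X)
print(np.linalg.norm(X - X @ Z - E))  # constraint residual shrinks over iterations
```

In the unfolded view, a fixed iteration count replaces convergence checks, and the thresholds 1/μ and λ/μ become per-layer learnable parameters, which is what removes the manual tuning the abstract refers to.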

