Article

Regularization Parameter Selection in Minimum Volume Hyperspectral Unmixing

Journal

IEEE Transactions on Geoscience and Remote Sensing

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TGRS.2019.2929776

Keywords

Craig criterion; hyperspectral images (HSIs); nonconvex optimization; spectral unmixing

Funding

  1. European Union's Seventh Framework Programme (FP7-PEOPLE-2013-ITN) [607290 SpaRTaN]
  2. Portuguese Foundation for Science and Technology/Ministry of Education and Science (FCT/MEC)
  3. European Regional Development Fund (FEDER), within the Portugal 2020 (PT-2020) Partnership Agreement [UID/EEA/50008/2019]
  4. Young Scholar Fellowship Program (Einstein Program) of Ministry of Science and Technology (MOST), Taiwan [MOST107-2636-E-006-006]
  5. Higher Education Sprout Project of Ministry of Education (MOE)

Abstract

Linear hyperspectral unmixing (HU) aims at factoring the observation matrix into the product of an endmember matrix and an abundance matrix. Linear HU via variational minimum volume (MV) regularization has recently received considerable attention in the remote sensing and machine learning areas, mainly owing to its robustness against the absence of pure pixels. We place several popular linear HU formulations under a unifying framework, which involves a data-fitting term and an MV-based regularization term, and solve it via nonconvex optimization. As the former and the latter terms tend, respectively, to expand (reducing the data-fitting errors) and to shrink the simplex enclosing the measured spectra, it is critical to strike a balance between those two terms. To the best of our knowledge, the existing methods find such balance by tuning a regularization parameter manually, which has little value in unsupervised scenarios. In this paper, we aim at selecting the regularization parameter automatically by exploiting the fact that a too-large parameter overshrinks the volume of the simplex defined by the endmembers, leaving many data points outside the simplex and hence inducing a large data-fitting error, while a sufficiently small parameter yields a large simplex, making the data-fitting error very small. Roughly speaking, the transition point happens when the simplex still encloses the data cloud but there are data points on all its facets. These observations are systematically formulated to find the transition point that, in turn, yields a good parameter. The competitiveness of the proposed selection criterion is illustrated with simulated and real data.
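The trade-off described in the abstract can be made concrete with a small numerical sketch. The snippet below is illustrative only and is not the authors' algorithm: it assumes a generic MV-regularized objective with a crude log-determinant surrogate of the simplex volume, treats `solve_mv_unmixing` as a hypothetical placeholder for any MV-regularized unmixing solver, and locates the transition point naively as the largest jump in the data-fitting error across a sweep of candidate regularization parameters.

```python
# Minimal sketch, assuming Y is an (L x N) matrix of observed spectra, M an
# (L x p) endmember matrix, and A a (p x N) abundance matrix. Not the paper's
# method; `solve_mv_unmixing` is a hypothetical solver supplied by the user.
import numpy as np

def mv_objective(Y, M, A, lam, eps=1e-6):
    """Unified objective: 0.5 * ||Y - M A||_F^2 + lam * (volume surrogate)."""
    fit = 0.5 * np.linalg.norm(Y - M @ A, "fro") ** 2
    Mc = M - M.mean(axis=1, keepdims=True)            # center the endmembers
    p = M.shape[1]
    # crude log-det surrogate of the (squared) simplex volume, regularized by eps
    vol = 0.5 * np.log(np.linalg.det(Mc.T @ Mc + eps * np.eye(p)))
    return fit + lam * vol, fit

def select_lambda(Y, p, lambdas, solve_mv_unmixing):
    """Sweep candidate parameters (assumed sorted in increasing order) and pick
    the one just before the data-fitting error jumps, i.e., before the simplex
    overshrinks and starts leaving data points outside."""
    fits = []
    for lam in lambdas:
        M, A = solve_mv_unmixing(Y, p, lam)           # hypothetical MV-regularized solver
        _, fit = mv_objective(Y, M, A, lam)
        fits.append(fit)
    fits = np.asarray(fits)
    k = int(np.argmax(np.diff(fits)))                 # largest jump in fitting error
    return lambdas[k], fits
```

The paper's criterion is more principled than this naive jump detection; the sketch only conveys why the data-fitting error, as a function of the regularization parameter, carries the information needed to locate the transition point automatically.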
