Article

Semi-Supervised Low-Rank Semantics Grouping for Zero-Shot Learning

Journal

IEEE TRANSACTIONS ON IMAGE PROCESSING
Volume 30, Pages 2207-2219

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIP.2021.3050677

Keywords

Semantics; Visualization; Training; Learning systems; Correlation; Task analysis; Laplace equations; Zero-shot learning; semi-supervised; low-rank semantic grouping; label propagation

Funding

  1. National Natural Science Foundation of China [U1913602, 61936004, 61876219]
  2. Innovation Group Project of the National Natural Science Foundation of China [61821003]
  3. Technology Innovation Project of Hubei Province of China [2019AEA171]
  4. Foundation for Innovative Research Groups of Hubei Province of China [2017CFA005]
  5. 111 Project on Computational Intelligence and Intelligent Control [B18024]


Zero-shot learning aims to classify new classes based on a model learned from observed classes. This study proposes a Low-rank Semantics Grouping (LSG) method for semi-supervised zero-shot learning and demonstrates its effectiveness on several standard benchmarks.
Zero-shot learning has received great interest in the visual recognition community. It aims to classify new, unobserved classes based on a model learned from observed classes. Most zero-shot learning methods require pre-provided semantic attributes as mid-level information to discover the intrinsic relationship between observed and unobserved categories. However, it is impractical to annotate enriched label information for the observed objects in real-world applications, and limited labeled seen data severely hurts the performance of zero-shot learning. To overcome this obstacle, we develop a Low-rank Semantics Grouping (LSG) method for zero-shot learning in a semi-supervised fashion, which jointly uncovers the intrinsic relationship across visual and semantic information and recovers the missing label information of the seen classes. Specifically, a visual-semantic encoder is utilized as the projection model, a low-rank semantic grouping scheme is explored to capture the intrinsic attribute correlations, and a Laplacian graph is constructed from the visual features to guide label propagation from labeled instances to unlabeled ones. Experiments conducted on several standard zero-shot learning benchmarks demonstrate the effectiveness of the proposed method compared with state-of-the-art methods. Our model is robust to different levels of missing labels, and visualization results show that LSG distinguishes the test unseen classes more discriminatively.
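The abstract's label-recovery step, a Laplacian graph built from visual features guiding label propagation from labeled to unlabeled seen-class instances, can be illustrated with a minimal sketch. This is not the paper's full objective (the visual-semantic encoder and the low-rank grouping term are omitted); it assumes an RBF affinity graph and Zhou-style normalized smoothing, and the function names and parameters (rbf_affinity, propagate_labels, alpha, sigma) are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_affinity(X, sigma=1.0):
    """RBF affinity matrix over visual features X of shape (n, d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)   # squared distances
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                            # no self-loops
    return W

def propagate_labels(X, Y_init, labeled_mask, alpha=0.9, n_iter=50, sigma=1.0):
    """
    Graph-based label propagation over a Laplacian graph built from
    visual features (an assumed stand-in for the label-recovery step).

    X            : (n, d) visual features of seen-class instances
    Y_init       : (n, c) one-hot rows for labeled instances, zeros elsewhere
    labeled_mask : (n,) boolean, True where the label is observed
    """
    W = rbf_affinity(X, sigma)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt     # I - S is the normalized graph Laplacian

    F = Y_init.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y_init    # smooth, pulled toward seeds
        F[labeled_mask] = Y_init[labeled_mask]          # clamp observed labels
    return F.argmax(axis=1), F
```

In this sketch, the returned soft label matrix F would play the role of the recovered seen-class supervision that the projection model is then trained on; in the paper this recovery is coupled with the encoder and the low-rank grouping constraint rather than run as a separate preprocessing pass.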
