Proceedings Paper

Contrastive Embedding for Generalized Zero-Shot Learning

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/CVPR46437.2021.00240

Keywords

-

Funding

  1. National Science Foundation of China [U1713208, 61876085]
  2. China Postdoctoral Science Foundation [2017M621748, 2020M681606, 2019T120430]

The study proposes a hybrid GZSL framework that integrates a generation model with an embedding model and uses contrastive embedding for GZSL classification, achieving a significant improvement in performance.
Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes when only labeled examples from the seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL classification since it lacks discriminative information. To tackle this issue, we propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework. The hybrid GZSL approach maps both the real samples and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a contrastive embedding (CE) for our hybrid GZSL framework. The proposed contrastive embedding can leverage not only class-wise supervision but also instance-wise supervision, where the latter is usually neglected by existing GZSL research. We evaluate our proposed hybrid GZSL framework with contrastive embedding, named CE-GZSL, on five benchmark datasets. The results show that our CE-GZSL method outperforms the state of the art by a significant margin on three datasets.
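The instance-wise supervision the abstract refers to can be illustrated with an InfoNCE-style contrastive loss over the embedded features, where same-class samples in a batch act as positives and all others as negatives. The sketch below is a minimal, hedged illustration of that general idea in NumPy; the paper's actual losses, embedding networks, and sampling scheme are not reproduced here, and the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def instance_contrastive_loss(z, labels, tau=0.1):
    """Illustrative InfoNCE-style loss (not the paper's exact formulation):
    for each embedded sample, same-class samples in the batch are positives
    and all remaining samples are negatives.

    z:      (n, d) array of embeddings (L2-normalized inside)
    labels: (n,)   array of integer class ids
    tau:    softmax temperature (assumed value)
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # pairwise cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    # exclude each anchor's similarity to itself from the softmax
    logits = np.where(self_mask, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-likelihood of the positives for each anchor
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)
    per_anchor /= np.maximum(pos.sum(axis=1), 1)
    return -per_anchor.mean()
```

Minimizing such a loss pulls embeddings of the same class together and pushes different classes apart, which is the discriminative structure the hybrid framework seeks in the embedding space: well-separated class clusters yield a lower loss than mixed ones.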

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-