Journal
MULTIMEDIA TOOLS AND APPLICATIONS
Volume 78, Issue 17, Pages 24147-24165
Publisher
SPRINGER
DOI: 10.1007/s11042-018-6842-3
Keywords
Zero-shot learning; Zero-shot hashing; Visual similes; Binary annotation
Funding
- National Natural Science Foundation of China [61872187]
- Major Special Project of Core Electronic Devices, High-end Generic Chips and Basic Software [2015ZX01041101]
- MRC [MR/S003916/1] Funding Source: UKRI
Abstract
Conventional zero-shot learning methods usually learn mapping functions that project image features into a semantic embedding space, in which the nearest neighbors among predefined attributes are found. These predefined attributes, covering both seen and unseen classes, are often annotated with high-dimensional real values by experts, which requires a great deal of human labor. In this paper, we propose a simple but effective method to reduce the annotation work. In our strategy, only the unseen classes need to be annotated, each with several binary codes, which amounts to only about one percent of the original annotation work. In addition, we design a Visual Similes Annotation System (ViSAS) to annotate the unseen classes, and we build both linear and deep mapping models and test them on four popular datasets. The experimental results show that our method outperforms state-of-the-art methods in most circumstances.
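The core idea of the abstract (projecting image features into a binary code space and assigning each image to the unseen class with the nearest annotated code) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the function name, the linear mapping `W`, the thresholding at zero, and the use of Hamming distance are all simplifying assumptions for the demo.

```python
import numpy as np

def predict_zero_shot(features, W, class_codes):
    """Hypothetical sketch: project image features into a k-bit code space
    and assign each image to the unseen class with the nearest binary code
    in Hamming distance.

    features    : (n, d) image feature vectors
    W           : (d, k) learned linear mapping (assumed here; the paper
                  also builds deep mapping models)
    class_codes : (c, k) binary (0/1) codes annotated for the c unseen classes
    """
    # Linear projection followed by thresholding at 0 yields a k-bit code.
    codes = (features @ W > 0).astype(int)                              # (n, k)
    # Hamming distance between each image code and each class code.
    dists = (codes[:, None, :] != class_codes[None, :, :]).sum(axis=2)  # (n, c)
    # Nearest class code wins.
    return dists.argmin(axis=1)

# Toy example: 2 unseen classes annotated with 4-bit codes.
class_codes = np.array([[0, 0, 1, 1],
                        [1, 1, 0, 0]])
W = np.eye(4)                                  # identity mapping for the demo
X = np.array([[-1.0, -1.0, 2.0, 2.0],         # thresholds to [0,0,1,1] -> class 0
              [ 3.0,  1.0, -2.0, -1.0]])      # thresholds to [1,1,0,0] -> class 1
print(predict_zero_shot(X, W, class_codes))    # [0 1]
```

Because each unseen class needs only a short binary code rather than a high-dimensional real-valued attribute vector, the annotation burden per class is far smaller, which is the saving the abstract quantifies.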
Authors