3.8 Proceedings Paper

Low-Shot Learning from Imaginary Data

Publisher

IEEE
DOI: 10.1109/CVPR.2018.00760

Keywords

-

Funding

  1. ONR MURI [N000141612007]
  2. U.S. Army Research Laboratory (ARL) under the Collaborative Technology Alliance Program [W911NF-10-2-0016]

Abstract

Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea. Our approach builds on recent progress in meta-learning (learning to learn) by combining a meta-learner with a hallucinator that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark.
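
The abstract describes the approach only at a high level: a hallucinator generates extra training examples from the few real ones, and it is optimized jointly with the meta-learner through the classification loss. Below is a minimal sketch of that idea in PyTorch. All module names, dimensions, and the prototypical-network-style classifier head are illustrative assumptions, not the authors' actual architecture or code.

    # Minimal sketch of joint hallucinator + meta-learner training
    # (hypothetical names and dimensions; not the paper's implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Hallucinator(nn.Module):
        """Maps a real seed feature plus noise to an extra, 'imagined' feature."""
        def __init__(self, feat_dim=512, noise_dim=128):
            super().__init__()
            self.noise_dim = noise_dim
            self.net = nn.Sequential(
                nn.Linear(feat_dim + noise_dim, feat_dim),
                nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )

        def forward(self, seed_feats):
            noise = torch.randn(seed_feats.size(0), self.noise_dim,
                                device=seed_feats.device)
            return self.net(torch.cat([seed_feats, noise], dim=1))

    def prototype_logits(support_feats, support_labels, query_feats, n_way):
        """Simple meta-learner head: score queries by distance to class means."""
        protos = torch.stack(
            [support_feats[support_labels == c].mean(0) for c in range(n_way)]
        )
        return -torch.cdist(query_feats, protos)  # negative distance as logits

    def episode_loss(hallucinator, support_feats, support_labels,
                     query_feats, query_labels, n_way, n_hallucinate=5):
        """Augment the support set with hallucinated features, then classify queries."""
        fake_feats, fake_labels = [], []
        for _ in range(n_hallucinate):
            fake_feats.append(hallucinator(support_feats))
            fake_labels.append(support_labels)
        aug_feats = torch.cat([support_feats] + fake_feats)
        aug_labels = torch.cat([support_labels] + fake_labels)
        logits = prototype_logits(aug_feats, aug_labels, query_feats, n_way)
        return F.cross_entropy(logits, query_labels)

    # Hypothetical 5-way, 1-shot episode over precomputed 512-d features.
    feat_dim, n_way = 512, 5
    hall = Hallucinator(feat_dim)
    opt = torch.optim.Adam(hall.parameters(), lr=1e-3)
    support = torch.randn(n_way, feat_dim)               # one real example per class
    support_y = torch.arange(n_way)
    query = torch.randn(15 * n_way, feat_dim)
    query_y = torch.arange(n_way).repeat_interleave(15)
    loss = episode_loss(hall, support, support_y, query, query_y, n_way)
    opt.zero_grad(); loss.backward(); opt.step()

Because the episode loss backpropagates through the hallucinated features, the hallucinator is pushed to generate examples that actually improve the downstream classifier, which is the joint optimization the abstract refers to.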

Authors

-

Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-