Article

RenderGAN: Generating Realistic Labeled Data

Journal

FRONTIERS IN ROBOTICS AND AI
Volume 5

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/frobt.2018.00066

Keywords

generative adversarial networks; unsupervised learning; social insects; markers; deep learning

Funding

  1. Open Access Publication Fund of the Freie Universitat Berlin
  2. North-German Supercomputing Alliance (HLRN) [beb00002]

Abstract

Deep Convolutional Neural Networks (DCNNs) show remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The cost of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model with the Generative Adversarial Network framework. In our approach, image augmentations (e.g., lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.
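The core idea in the abstract can be illustrated with a minimal toy sketch: a fixed renderer maps a known label to a clean marker image, and learned augmentation functions (here replaced by simple hand-coded stand-ins with sampled parameters) add realism without touching the label. All function names and parameters below are hypothetical illustrations, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_marker(label_bits, size=16):
    """Stand-in for the fixed 3D model: renders a barcode-like
    marker image from its known label bits (hypothetical)."""
    img = np.zeros((size, size))
    stripe = size // len(label_bits)
    for i, bit in enumerate(label_bits):
        img[:, i * stripe:(i + 1) * stripe] = float(bit)
    return img

def augment(img, lighting, background):
    """In RenderGAN the augmentations (lighting, background, detail)
    are learned from unlabeled data; here they are toy functions
    whose parameters mimic GAN-predicted values."""
    out = img * lighting + background  # label layout is unchanged
    return np.clip(out, 0.0, 1.0)

def generator(label_bits):
    """Generator = fixed renderer + augmentations, so every output
    image comes with its label for free."""
    lighting = rng.uniform(0.5, 1.0)    # sampled augmentation params
    background = rng.uniform(0.0, 0.3)
    return augment(render_marker(label_bits), lighting, background), label_bits

image, label = generator([1, 0, 1, 1])
```

Because the label enters only through the fixed renderer and the augmentations never alter the marker layout, the generated image/label pairs can be used directly as supervised training data for a DCNN.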


