Article

RodNet: An Advanced Multidomain Object Detection Approach Using Feature Transformation With Generative Adversarial Networks

Journal

IEEE SENSORS JOURNAL
Volume 23, Issue 15, Pages 17531-17540

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSEN.2023.3281399

Keywords

Feature extraction; Object detection; Generators; Training; Generative adversarial networks; Detectors; Task analysis; Deep learning; generative adversarial networks (GANs); low luminance; object detection (OD)


Advanced object detection (OD) techniques have been widely studied in recent years and successfully applied in real-world applications. However, existing algorithms may struggle with nighttime image detection, especially in low-luminance conditions. Researchers have attempted to overcome this issue by collecting large amounts of multidomain data, but performance remains poor because these methods train on images from both low- and sufficient-luminance domains without a specific training policy. In this work, we present a lightweight framework for multidomain OD using feature domain transformation with generative adversarial networks (GANs). The proposed GAN framework trains a generator network to transform features from the low-luminance domain to a sufficient-luminance domain, making the discriminator network unable to distinguish whether the features came from a low-luminance or a normal image, and thus achieving luminance-invariant feature extraction. To preserve semantic meaning in the transformed features, we introduce a training policy for OD and feature transformation across domains. The proposed method achieves state-of-the-art performance, with a 9.95% improvement in average precision, without incurring additional computational costs.
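The adversarial objective sketched in the abstract can be illustrated with a minimal NumPy example. This is a toy sketch, not the paper's architecture: the linear `generator`, logistic `discriminator`, and the mean-squared semantic-preservation term standing in for the detection loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): backbone features extracted from a
# sufficient-luminance image and from a low-luminance image.
f_norm = rng.normal(1.0, 0.5, size=(4, 8))   # sufficient-luminance features
f_low = rng.normal(0.2, 0.5, size=(4, 8))    # low-luminance features

# Hypothetical generator: a linear map transforming low-luminance
# features toward the sufficient-luminance feature domain.
W = rng.normal(0, 0.1, size=(8, 8))
def generator(f):
    return f @ W

# Hypothetical discriminator: logistic scorer; 1 = "normal-light domain".
w_d = rng.normal(0, 0.1, size=8)
def discriminator(f):
    return 1.0 / (1.0 + np.exp(-(f @ w_d)))

def bce(p, y):
    """Binary cross-entropy against a constant label y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

f_hat = generator(f_low)

# Discriminator loss: separate real normal-light features (label 1)
# from transformed low-light features (label 0).
d_loss = bce(discriminator(f_norm), 1.0) + bce(discriminator(f_hat), 0.0)

# Generator adversarial loss: make transformed features fool the
# discriminator, i.e. look like normal-light features.
g_adv_loss = bce(discriminator(f_hat), 1.0)

# Semantic-preservation term (stand-in for the paper's detection-driven
# training policy): keep transformed features close to normal-light ones.
g_task_loss = np.mean((f_hat - f_norm) ** 2)

g_loss = g_adv_loss + g_task_loss
```

At the adversarial equilibrium the discriminator cannot tell `f_hat` from `f_norm`, which is what the abstract calls luminance-invariant feature extraction; the task term keeps the transformation from discarding detection-relevant semantics.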

