Journal
IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS
Volume 19, Issue 3, Pages 1817-1826
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TCBB.2021.3056683
Keywords
Diseases; Gallium nitride; Generative adversarial networks; Training; Superresolution; Generators; Viruses (medical); GAN; leaf; plant disease detection; superresolution
Funding
- Fundamental Research Funds for the Central Universities [2572017CB10, 2572019BF09]
- Funding of Postdoctoral Research of Heilongjiang Province of China Grant [LBH-Z16006]
The study explores the use of DoubleGAN to generate images of unhealthy plant leaves in order to balance unbalanced datasets. A WGAN is used to obtain a pretrained model, and an SRGAN is used to produce high-resolution images. Compared with a DCGAN, the images generated by DoubleGAN are clearer and yield higher recognition accuracy.
Plant leaves can be used to effectively detect plant diseases. However, the number of images of unhealthy leaves collected from various plants is usually unbalanced, and it is difficult to detect diseases using such an unbalanced dataset. We used DoubleGAN (a double generative adversarial network) to generate images of unhealthy plant leaves to balance such datasets, producing high-resolution images of unhealthy leaves from relatively few samples. DoubleGAN is divided into two stages. In stage 1, healthy and unhealthy leaves are used as inputs. First, the healthy leaf images were used as inputs to a WGAN (Wasserstein generative adversarial network) to obtain a pretrained model. Then, the unhealthy leaf images were fed into the pretrained model to generate 64*64 pixel images of unhealthy leaves. In stage 2, a superresolution generative adversarial network (SRGAN) was used to obtain the corresponding 256*256 pixel images, expanding the unbalanced dataset. Finally, we compared the results with images generated by a DCGAN (deep convolutional generative adversarial network). On the dataset expanded with DoubleGAN, the generated images are clearer than those from the DCGAN, and the accuracy of plant species and disease recognition reached 99.80 and 99.53 percent, respectively. These recognition results are better than those obtained from the original dataset.
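The stage-1 component rests on the standard WGAN critic objective with weight clipping. Below is a minimal, illustrative numpy sketch of one critic update; the toy linear critic, dimensions, and learning rate are assumptions for demonstration only (the paper's actual networks are convolutional, operating on the 64*64 and 256*256 images described above).

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic: maps a flattened 'image' to a scalar score."""
    return x @ w

def wasserstein_critic_loss(real, fake, w):
    # The critic maximizes E[f(real)] - E[f(fake)]; written as a loss
    # to minimize, the sign flips:
    return critic(fake, w).mean() - critic(real, w).mean()

def clip_weights(w, c=0.01):
    # WGAN enforces an approximate Lipschitz constraint by clipping
    # critic weights into [-c, c] after each update.
    return np.clip(w, -c, c)

# One illustrative critic update on toy stand-in data.
d = 8                                       # flattened "image" size (toy)
w = rng.normal(size=d)
real = rng.normal(loc=1.0, size=(16, d))    # stand-in for real leaf images
fake = rng.normal(loc=-1.0, size=(16, d))   # stand-in for generator output

loss = wasserstein_critic_loss(real, fake, w)
# For this linear critic, the gradient w.r.t. w is mean(fake) - mean(real).
grad = fake.mean(axis=0) - real.mean(axis=0)
w = clip_weights(w - 0.05 * grad)           # gradient step, then clip
```

In the pipeline described above, a model trained this way on healthy leaves serves as the pretrained starting point that is then adapted with the scarcer unhealthy-leaf images.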