Article

Learning CNN Filters From User-Drawn Image Markers for Coconut-Tree Image Classification

Journal

IEEE Geoscience and Remote Sensing Letters

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/LGRS.2020.3020098

Keywords

Training; Feature extraction; Vegetation; Backpropagation; Convolution; Monitoring; Manuals; Design of convolutional neural networks (CNNs); interactive machine learning; remote sensing image analysis

Funding

  1. Sao Paulo Research Foundation (FAPESP) [2014/12236-1]
  2. National Council for Scientific and Technological Development (CNPq) [303808/2018-7]
  3. Petroleo Brasileiro S.A. (PETROBRAS)
  4. Agencia Nacional do Petroleo, Gas Natural e Biocombustiveis (ANP) [4600556376, 4600583791]


Summary

Identifying tree species in aerial images is crucial for land-use classification, plantation monitoring, and disaster impact assessment. Manual identification is laborious, costly, and error-prone, making automatic classification methods necessary. We propose a method that minimizes the number of training images required for a CNN's feature extractor by using user-selected images and learning filters from user-drawn markers, giving the user better control over and understanding of the training process. The method demonstrates advantages over one of the most popular CNN models in the binary classification of coconut-tree aerial images.

Abstract

Identifying species of trees in aerial images is essential for land-use classification, plantation monitoring, and impact assessment of natural disasters. The manual identification of trees in aerial images is tedious, costly, and error-prone, so automatic classification methods are necessary. Convolutional neural network (CNN) models have been successful in image classification applications across different domains. However, CNN models usually require intensive manual annotation to create large training sets. One may conceptually divide a CNN into convolutional layers for feature extraction and fully connected layers for feature-space reduction and classification. We present a method that needs only a minimal set of user-selected images to train the CNN's feature extractor, reducing the number of images required to train the fully connected layers. The method learns the filters of each convolutional layer from user-drawn markers placed in image regions that discriminate the classes, allowing better user control and understanding of the training process. It does not rely on backpropagation-based optimization, and we demonstrate its advantages over one of the most popular CNN models on the binary classification of coconut-tree aerial images.
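To make the filter-learning idea concrete, here is a minimal sketch of one backpropagation-free convolutional layer in this spirit: patches centered at user-drawn marker pixels are normalized and clustered, and the cluster centers become the convolution kernels. The k-means clustering step, the patch normalization, and all names (extract_patches, learn_filters, convolve_bank) are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of backpropagation-free filter learning from markers:
# cluster normalized patches centered at user-drawn marker pixels and use
# the cluster centers as convolution kernels. Not the authors' exact method.
import numpy as np
from scipy.signal import correlate2d
from sklearn.cluster import KMeans


def extract_patches(image, marker_coords, k=3):
    """Collect zero-centered, unit-norm k-by-k patches at marker pixels."""
    r = k // 2
    padded = np.pad(image, r, mode="reflect")
    patches = []
    for y, x in marker_coords:
        patch = padded[y:y + k, x:x + k].astype(np.float64)
        patch -= patch.mean()                    # remove local brightness
        norm = np.linalg.norm(patch)
        if norm > 0:
            patch /= norm                        # normalize contrast
        patches.append(patch.ravel())
    return np.array(patches)


def learn_filters(image, marker_coords, n_filters=8, k=3, seed=0):
    """Cluster marker patches; each cluster center becomes one filter."""
    patches = extract_patches(image, marker_coords, k)
    km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed)
    return km.fit(patches).cluster_centers_.reshape(n_filters, k, k)


def convolve_bank(image, filters):
    """Apply the learned filter bank with ReLU, yielding feature maps."""
    return np.stack([np.maximum(correlate2d(image, f, mode="same"), 0.0)
                     for f in filters])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                   # stand-in for an aerial tile
    markers = rng.integers(0, 64, size=(50, 2))  # stand-in for drawn markers
    feats = convolve_bank(img, learn_filters(img, markers))
    print(feats.shape)                           # (8, 64, 64)
```

Under a scheme like this, each filter stays directly tied to the marked regions it was estimated from, which is what gives the user control over, and insight into, what the feature extractor responds to; labeled training images are then needed only for the fully connected classifier.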

Authors


