Journal
INFORMATION SCIENCES
Volume 555, Pages 260-279
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.09.058
Keywords
Convolutional neural networks; Transfer learning; Texture analysis
Funding
- National Council for the Improvement of Higher Education (CAPES)
- FONDECYT, an initiative of the National Council of Science, Technology and Technological Innovation-CONCYTEC (Peru)
- CNPq (National Council for Scientific and Technological Development, Brazil) [307797/2014-7, 484312/2013-8]
- FAPESP [14/08026-1, 16/18809-9]
The study examined the effects of global pooling measurements on extracting texture information, finding that the RANKGP-CNN method and its multi-model extension extract high-quality texture information and perform well on different texture problems.
We analyzed the effects of global pooling measurements on extracting relevant texture information from a given set of activation maps. Initially, using a layer-by-layer approach (GP-CNN), we experimentally demonstrated that layers at various depth levels can provide high-quality texture information. Based on this finding, we developed RANKGP-CNN, a method that performs multi-layer feature extraction. More specifically, RANKGP-CNN treats every CNN model as a vast collection of deep composite functions, where each function computes a 2D activation map for every input image. A feature-ranking approach then assigns a score to each deep composite function by processing the activation maps generated for a particular image bank. Finally, RANKGP-CNN uses the top-ranked deep composite functions to compute feature vectors for different texture datasets. Experiments with a dedicated classifier showed that RANKGP-CNN achieves good results and can adapt to different texture problems. We also present RANKGP-3M-CNN, the version of RANKGP-CNN that considers multiple CNN models. Overall, RANKGP-3M-CNN achieves promising results with the advantage of using only the default scale of the input images. (C) 2020 Published by Elsevier Inc.
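The pipeline the abstract describes (global pooling of per-layer activation maps, scoring each map-producing function over an image bank, then keeping only the top-ranked ones as a feature extractor) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the activation maps here are random stand-ins for real CNN outputs, and variance across the bank is an assumed ranking criterion chosen only for demonstration.

```python
import numpy as np

def global_pool(act_maps):
    """Global average pooling: collapse each (H, W) activation map
    to a single scalar, giving one feature per map."""
    return act_maps.mean(axis=(-1, -2))

def rank_maps(pooled_bank, top_k):
    """Score each deep composite function (one column per map) by the
    variance of its pooled response across the image bank (an assumed
    criterion for this sketch), and return the indices of the top_k maps."""
    scores = pooled_bank.var(axis=0)
    return np.argsort(scores)[::-1][:top_k]

def extract_features(act_maps, top_idx):
    """Feature vector for one image: pooled values of the selected maps."""
    return global_pool(act_maps)[top_idx]

# Stand-in "bank": 10 images, each producing 64 activation maps of 7x7.
rng = np.random.default_rng(0)
bank = rng.normal(size=(10, 64, 7, 7))

pooled = global_pool(bank)            # shape (10, 64)
top = rank_maps(pooled, top_k=16)     # indices of the 16 best-scoring maps
feats = extract_features(bank[0], top)  # 16-D descriptor for image 0
```

In the actual method, the maps would come from multiple layers (and, for the multi-model variant, multiple CNNs), so the ranking selects functions across depths rather than within a single layer.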