Article

On the Behavior of Convolutional Nets for Feature Extraction

Journal

JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
Volume 61, Pages 563-592

Publisher

AI ACCESS FOUNDATION
DOI: 10.1613/jair.5756

Keywords

-

Funding

  1. Joint Study Agreement under the IBM/BSC Deep Learning Center agreement [W156463]
  2. Spanish Government through Programa Severo Ochoa [SEV-2015-0493]
  3. Spanish Ministry of Science and Technology [TIN2015-65316-P]
  4. Generalitat de Catalunya [2014-SGR-1051]
  5. Core Research for Evolutional Science and Technology (CREST) program of Japan Science and Technology Agency (JST)

Abstract

Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language encoded within a trained CNN model (in the case of image data) and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors previously learnt by the CNN after processing millions of images, without requiring an expensive training phase. Contributions to this field (commonly known as feature representation transfer or transfer learning) have been purely empirical so far, extracting all CNN features from a single layer close to the output and testing their performance by feeding them to a classifier. This approach has provided consistent results, although its relevance is limited to classification tasks. In this paper we take a completely different approach: we statistically measure the discriminative power of every single feature found within a deep CNN, when used to characterize every class of 11 datasets. We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning. Our results confirm that low- and middle-level features may behave differently from high-level features, but only under certain conditions. We find that all CNN features can be used for knowledge representation purposes both by their presence and by their absence, doubling the information a single CNN feature may provide. We also study how much noise these features may include, and propose a thresholding approach to discard most of it. All these insights have a direct application to the generation of CNN embedding spaces.
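The abstract refers to two operations that are easy to illustrate in code: extracting per-feature activations from a convolutional layer of a pre-trained CNN, and thresholding those activations into presence/absence indicators so that each feature can contribute information both when it fires and when it does not. The sketch below is not the authors' pipeline; the choice of network (VGG-16 from torchvision), the layer index, and the threshold value are illustrative assumptions only.

```python
# Minimal sketch (not the authors' code): extract convolutional-layer features
# from a pre-trained CNN and binarize them into presence/absence indicators.
# Network, layer index, and threshold are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pre-trained VGG-16 used purely as a fixed feature extractor (no training).
model = models.vgg16(weights="IMAGENET1K_V1").eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def conv_features(image_path, layer_idx=29):
    """Return one activation per convolutional feature, spatially averaged.

    layer_idx=29 is the ReLU after conv5_3 in torchvision's VGG-16 `features`
    stack; any other convolutional layer could be probed the same way.
    """
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        # Run only the convolutional part of the network up to layer_idx.
        feats = model.features[: layer_idx + 1](x)   # shape: (1, C, H, W)
    # Average pooling collapses each feature map to a single value per feature.
    return feats.mean(dim=(2, 3)).squeeze(0)         # shape: (C,)

def presence_vector(activations, threshold=0.5):
    """Binarize features: 1 = feature present, 0 = feature absent.

    The threshold is a free parameter; the paper's thresholding strategy for
    discarding noisy activations is not reproduced here.
    """
    return (activations > threshold).int()

# Hypothetical usage:
# acts = conv_features("example.jpg")
# print(presence_vector(acts))
```

The resulting vectors (either the raw averaged activations or their binarized presence/absence form) can then be fed to an external classifier, which is the feature-representation-transfer setting the abstract contrasts with its per-feature statistical analysis.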

