Article

Hidden representations in deep neural networks: Part 1. Classification problems

Journal

COMPUTERS & CHEMICAL ENGINEERING
Volume 134, Issue -, Pages -

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.compchemeng.2019.106669

Keywords

Deep neural network; Classification; Fault diagnosis; Feature space

Abstract

Deep neural networks have evolved into a powerful tool applicable to a wide range of problems. However, a clear understanding of their internal mechanisms has yet to be developed. Factors such as the architecture, the number of hidden layers and neurons, and the activation function are largely determined in a guess-and-test manner reminiscent more of alchemy than of chemistry. In this paper, we attempt to address these concerns systematically, using carefully chosen model systems to gain insights into classification problems. We show how wider networks identify several simple patterns on the input space, while deeper networks produce more complex patterns. We also show how each layer transforms the input space, and we identify the origins of techniques such as transfer learning, weight normalization and early stopping. This paper is an initial step towards a systematic approach to uncovering key hidden properties that can be exploited to improve the performance and understanding of deep neural networks. (C) 2019 Elsevier Ltd. All rights reserved.
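The abstract's central point, that each hidden layer re-represents the input space, can be illustrated with a toy example. The sketch below (our own illustration, not taken from the paper) hand-specifies a 2-2-1 ReLU network that solves XOR; the hidden layer maps the four XOR points, which are not linearly separable in the input space, to a representation that is.

```python
import numpy as np

# Hypothetical hand-specified 2-2-1 ReLU network for XOR, illustrating
# how a hidden layer transforms the input space (an assumed toy
# example, not the paper's actual model systems).
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])        # input -> hidden weights (2 units)
b1 = np.array([0.0, -1.0])         # hidden biases
w2 = np.array([1.0, -2.0])         # hidden -> output weights

def relu(z):
    return np.maximum(z, 0.0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
H = relu(X @ W1 + b1)              # hidden representation of each input
y = H @ w2                         # network output

print(H)   # (0,1) and (1,0) collapse to the same hidden point (1,0)
print(y)   # [0. 1. 1. 0.] -- the XOR truth table
```

In the hidden space the two classes are separated by the single hyperplane defined by `w2`, even though no such hyperplane exists in the original input space; this is the kind of layer-by-layer transformation the paper visualizes for its model systems.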

