Article

Neural random subspace

Journal

PATTERN RECOGNITION
Volume 112, Issue -, Pages -

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2020.107801

Keywords

Random subspace; Ensemble learning; Deep neural networks

Funding

  1. National Natural Science Foundation of China [61772256]

The paper introduces Neural Random Subspace (NRS), a novel deep-learning-based random subspace method that outperforms traditional forest methods through end-to-end representation learning, demonstrating superior performance on machine learning datasets and consistent improvements on image and point cloud recognition tasks.
The random subspace method, a pillar of random forests, is good at making precise and robust predictions. However, there is as yet no straightforward way to combine it with deep learning. In this paper, we therefore propose Neural Random Subspace (NRS), a novel deep-learning-based random subspace method. In contrast to previous forest methods, NRS enjoys the benefits of end-to-end, data-driven representation learning, as well as pervasive support from deep learning software and hardware platforms, hence achieving faster inference speed and higher accuracy. Furthermore, as a non-linear component to be encoded into Convolutional Neural Networks (CNNs), NRS learns non-linear feature representations in CNNs more efficiently than contemporary higher-order pooling methods, producing excellent results with a negligible increase in parameters, floating-point operations (FLOPs) and real running time. Compared with random subspaces, random forests and gradient boosting decision trees (GBDTs), NRS demonstrates superior performance on 35 machine learning datasets. Moreover, on both 2D image and 3D point cloud recognition tasks, integration of NRS with CNN architectures achieves consistent improvements at only incremental cost. (c) 2020 Elsevier Ltd. All rights reserved.
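As a rough illustration of how a random-subspace idea can be trained end-to-end inside a neural network, the sketch below builds a layer in which every unit sees a fixed, randomly chosen subset of the input features and applies a small learned non-linear transform, so the whole ensemble is optimized by back-propagation. This is not the paper's exact NRS architecture; the module name, hyper-parameters (n_subspaces, subspace_dim, hidden) and the grouped-convolution implementation are assumptions made for illustration only, written against PyTorch.

import torch
import torch.nn as nn

class RandomSubspaceLayer(nn.Module):
    # Illustrative random-subspace-style neural module (not the paper's exact NRS design).
    # Each of the n_subspaces units observes a fixed random subset of the input features
    # and applies a small learned non-linear transform; outputs are concatenated.
    def __init__(self, in_features, n_subspaces=32, subspace_dim=8, hidden=16):
        super().__init__()
        # Fixed random feature subsets, one per subspace (chosen once, not learned).
        idx = torch.stack([torch.randperm(in_features)[:subspace_dim]
                           for _ in range(n_subspaces)])
        self.register_buffer("subspace_idx", idx)           # (n_subspaces, subspace_dim)
        # One small learner per subspace, implemented jointly with a grouped 1x1 convolution.
        self.mix = nn.Sequential(
            nn.Conv1d(n_subspaces * subspace_dim, n_subspaces * hidden,
                      kernel_size=1, groups=n_subspaces),
            nn.ReLU(inplace=True),
        )
        self.out_features = n_subspaces * hidden

    def forward(self, x):                                   # x: (batch, in_features)
        sub = x[:, self.subspace_idx]                       # (batch, n_subspaces, subspace_dim)
        sub = sub.reshape(x.size(0), -1, 1)                 # (batch, n_subspaces*subspace_dim, 1)
        return self.mix(sub).flatten(1)                     # (batch, n_subspaces*hidden)

# Usage sketch: place the layer in front of a linear classifier on 64-dimensional inputs.
layer = RandomSubspaceLayer(in_features=64)
model = nn.Sequential(layer, nn.Linear(layer.out_features, 10))
logits = model(torch.randn(4, 64))                          # (4, 10)

Because the feature subsets are stored as a buffer rather than parameters, they stay fixed during training, mirroring the random-subspace idea, while the grouped convolution keeps the per-subspace learners independent and cheap in parameters and FLOPs.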
