Article

Multiple-instance ensemble for construction of deep heterogeneous committees for high-dimensional low-sample-size data

Journal

NEURAL NETWORKS
Volume 167, Issue -, Pages 380-399

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.08.028

Keywords

Committee learning; Deep learning; HDLS; Attention


Deep ensemble learning, where we combine knowledge learned from multiple individual neural networks, has been widely adopted to improve the performance of neural networks in deep learning. This field can be encompassed by committee learning, which includes the construction of neural network cascades. This study focuses on the high-dimensional low-sample-size (HDLS) domain and introduces multiple instance ensemble (MIE) as a novel stacking method for ensembles and cascades. In this study, our proposed approach reformulates the ensemble learning process as a multiple-instance learning problem. We utilise the multiple-instance learning solution of pooling operations to associate feature representations of base neural networks into joint representations as a method of stacking. This study explores various attention mechanisms and proposes two novel committee learning strategies with MIE. In addition, we utilise the capability of MIE to generate pseudo-base neural networks to provide a proof-of-concept for a "growing" neural network cascade that is unbounded by the number of base neural networks. We have shown that our approach provides (1) a class of alternative ensemble methods that performs comparably with various stacking ensemble methods and (2) a novel method for the generation of high-performing "growing" cascades. The approach has also been verified across multiple HDLS datasets, achieving high performance for binary classification tasks in the low-sample-size regime. (c) 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
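The core stacking step described above — treating the feature representations of the base neural networks as instances in a bag and pooling them into a joint representation — can be illustrated with attention-based MIL pooling. The following numpy sketch is an assumption for illustration only: the function names, dimensions, and the specific (non-gated) attention formulation are not taken from the paper, which may use a different attention mechanism.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w, V):
    """Pool base-network features into a joint representation.

    H: (n_base, d) array; row i is the feature representation of
       base network i, treated as one instance in a MIL bag.
    w: (k,) and V: (k, d) are learnable attention parameters.
    Attention weights: a = softmax(w^T tanh(V h_i)) over instances.
    """
    scores = w @ np.tanh(V @ H.T)   # (n_base,) unnormalised scores
    a = softmax(scores)             # attention weight per base network
    z = a @ H                       # joint "bag" representation, (d,)
    return z, a

# Toy example: three base networks with 8-dimensional embeddings.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 8))
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
z, a = attention_pool(H, w, V)
```

The joint representation `z` would then feed a meta-classifier head, and the weights `a` indicate how much each base network contributes to the committee decision.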

