Article

A selective neural network ensemble classification for incomplete data

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s13042-016-0524-0

Keywords

Incomplete data; Neural network ensemble; Mutual information; Feature subset

Funding

  1. National Natural Science Foundation of China [61175046, 61203290]
  2. Natural Science Foundation of Anhui Province [1408085MF132]

Abstract

A neural network ensemble (NNE) is a simple and effective method for classifying incomplete data. However, as the number of missing values increases, the number of incomplete feature combinations (feature subsets) grows rapidly, which makes the NNE method very time-consuming; its accuracy also needs improvement. In this paper, we propose a selective neural network ensemble (SNNE) classification method for incomplete data. SNNE first obtains all available feature subsets of the incomplete dataset and then applies mutual information to measure the importance (relevance) degree of each subset. Next, an optimization process removes every feature subset that satisfies the following condition: it contains at least one other feature subset, and the difference between their importance degrees is smaller than a given threshold delta. Finally, the remaining feature subsets are used to train a group of neural networks, and the classification of a given sample is decided by weighted majority voting over all available components of the ensemble. Experimental results show that delta = 0.05 is a reasonable choice in our study: it improves the efficiency of the algorithm without loss of accuracy. Experiments also show that SNNE outperforms the compared NNE-based algorithms. In addition, it greatly reduces running time on datasets with larger numbers of missing values.
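The subset-pruning step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `importance` mapping (here supplied directly as a dictionary) would in practice hold the mutual-information scores of each feature subset, and the removal rule is read as "drop a subset if it contains another subset in the pool whose importance differs by less than delta".

```python
def prune_subsets(importance, delta=0.05):
    """Sketch of the SNNE subset-selection rule.

    importance: dict mapping frozenset(feature indices) -> importance score
                (assumed precomputed, e.g. via mutual information).
    A subset S is removed when some other subset T in the pool satisfies
    T ⊂ S and |importance(S) - importance(T)| < delta.
    """
    subsets = sorted(importance, key=len)  # examine smaller subsets first
    kept = []
    for s in subsets:
        redundant = any(
            t < s and abs(importance[s] - importance[t]) < delta
            for t in subsets
            if t != s
        )
        if not redundant:
            kept.append(s)
    return kept


# Toy example: {0, 1} contains {0} and their scores differ by only 0.02,
# so {0, 1} is pruned; {0} and {2} survive.
scores = {
    frozenset({0}): 0.30,
    frozenset({0, 1}): 0.32,
    frozenset({2}): 0.50,
}
print(prune_subsets(scores, delta=0.05))
```

The surviving subsets would then each train one component network, and predictions for a sample would be combined by weighted majority voting over the components whose feature subsets are fully observed in that sample.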
