Article

A General Framework for Class Label Specific Mutual Information Feature Selection Method

Journal

IEEE Transactions on Information Theory
Volume 68, Issue 12, Pages 7996-8014

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TIT.2022.3188708

Keywords

Mutual information; Feature extraction; Redundancy; Entropy; Magnetic resonance imaging; Information filters; Correlation; Feature selection; filter method; information theory; class label specific mutual information; classification


Abstract

Information theory-based feature selection (ITFS) methods select a single subset of features for all classes based on two criteria: 1) minimizing redundancy among the selected features and 2) maximizing the classification information the selected features share with the classes. A critical issue with selecting a single subset of features is that it may not represent a feature space in which individual class labels can be separated exclusively. Existing methods provide no way to select a feature space specific to an individual class label. To this end, we propose a novel feature selection method called class-label specific mutual information (CSMI), which selects a specific set of features for each class label. The proposed method maximizes the information shared between the selected features and the target class label while minimizing the information shared with all classes. We also account for the dynamic change in information between the selected features and the target class label when a candidate feature is added. Finally, we provide a general framework for CSMI that makes it classifier-independent. We perform experiments on sixteen benchmark data sets using four classifiers and find that CSMI outperforms five traditional and two state-of-the-art ITFS methods (multi-class classification), as well as one multi-label classification method.
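To make the selection criterion concrete, the greedy scheme the abstract describes can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the function names, the one-vs-rest encoding of the target class, and the simple pairwise redundancy penalty (standing in for the paper's dynamic-change term) are all assumptions for the sake of a runnable example.

```python
import numpy as np

def mutual_information(x, y):
    # Empirical mutual information (in nats) between two discrete arrays.
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px = np.mean(x == xv)
                py = np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def csmi_select(X, y, target_class, k):
    """Greedy class-label specific feature selection (illustrative sketch).

    Scores each candidate feature by its MI with the one-vs-rest
    indicator of `target_class`, minus its MI with the full label
    vector, minus a pairwise redundancy penalty against the features
    already selected (a crude stand-in for the paper's dynamic term).
    """
    y_target = (y == target_class).astype(int)  # one-vs-rest label
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            relevance = mutual_information(X[:, j], y_target)
            all_class = mutual_information(X[:, j], y)
            redundancy = sum(mutual_information(X[:, j], X[:, s])
                             for s in selected)
            score = relevance - all_class - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, with labels `y = [0, 0, 1, 1, 2, 2]` and `target_class = 1`, a feature that is the exact indicator of class 1 is preferred over a feature that copies the full label vector, because the latter carries more information about the non-target classes.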

Authors


