4.7 Article

Toward Optimal Feature Selection in Naive Bayes for Text Categorization

Journal

IEEE Transactions on Knowledge and Data Engineering

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TKDE.2016.2563436

Keywords

Feature selection; feature reduction; text categorization; Kullback-Leibler divergence; Jeffreys divergence; information gain

Funding

  1. US National Science Foundation [ECCS 1053717, CCF 1439011]
  2. Army Research Office [W911NF-12-1-0378]
  3. Division of Electrical, Communications and Cyber Systems, Directorate for Engineering, National Science Foundation [1053717]

Abstract

Automated feature selection is important for text categorization: it reduces the feature size and speeds up the learning process of classifiers. In this paper, we present a novel and efficient feature selection framework based on information theory, which aims to rank features by their discriminative capacity for classification. We first revisit two information measures, the Kullback-Leibler divergence and the Jeffreys divergence, for binary hypothesis testing, and analyze their asymptotic properties in relation to the type I and type II errors of a Bayesian classifier. We then introduce a new divergence measure, called the Jeffreys-Multi-Hypothesis (JMH) divergence, to measure multi-distribution divergence for multi-class classification. Based on the JMH divergence, we develop two efficient feature selection methods, termed the maximum discrimination (MD) and MD-χ² methods, for text categorization. The promising results of extensive experiments demonstrate the effectiveness of the proposed approaches.
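To make the divergence-based ranking idea concrete, the minimal sketch below scores each term by the Jeffreys divergence J(P, Q) = KL(P || Q) + KL(Q || P) between its class-conditional occurrence distributions and returns terms ordered from most to least discriminative. This is an illustrative example only, assuming a binary-class, Bernoulli-event setting with a term-count matrix X and labels y; the function names are hypothetical, and it does not reproduce the paper's exact MD, MD-χ², or JMH-divergence procedures.

    import numpy as np

    def jeffreys_divergence(p, q, eps=1e-12):
        """Jeffreys divergence J(P, Q) = KL(P || Q) + KL(Q || P) for discrete distributions."""
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        # Summing (p - q) * log(p / q) over outcomes equals KL(p||q) + KL(q||p).
        return float(np.sum((p - q) * np.log(p / q)))

    def rank_terms_by_divergence(X, y):
        """Rank term features by the Jeffreys divergence between their
        class-conditional occurrence distributions (two classes assumed).

        X : (n_docs, n_terms) array of term counts
        y : (n_docs,) array of labels in {0, 1}
        Returns term indices sorted from most to least discriminative.
        """
        X, y = np.asarray(X), np.asarray(y)
        occurs = (X > 0).astype(float)      # Bernoulli event model: term present or absent
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            p1 = occurs[y == 1, j].mean()   # P(term j present | class 1)
            p0 = occurs[y == 0, j].mean()   # P(term j present | class 0)
            scores[j] = jeffreys_divergence([p1, 1.0 - p1], [p0, 1.0 - p0])
        return np.argsort(scores)[::-1]

    # Toy usage with hypothetical data: keep the top-k ranked terms as the selected features.
    # X = np.array([[3, 0, 1], [2, 0, 0], [0, 4, 1], [0, 5, 0]]); y = np.array([1, 1, 0, 0])
    # selected = rank_terms_by_divergence(X, y)[:2]

In this simplified view, a term whose occurrence probability differs sharply between the two classes receives a large divergence score and is kept, which mirrors the abstract's notion of ranking features by their discriminative capacity.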

