Article

Fuzzy KNN Method With Adaptive Nearest Neighbors

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 6, Pages 5380-5393

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.3031610

Keywords

Decision tree; fuzzy k-nearest-neighbor method (FKNN); nearest neighbors; sparse representation/reconstruction

Funding

  1. National Natural Science Foundation of China [61772198, 61972181]
  2. Natural Science Foundation of Jiangsu Province [BK20191331]
  3. National First-Class Discipline Program of Light Industry and Engineering [LITE2018]
  4. University of Macao Talent Program

Abstract

This study proposes a novel FKNN-based classification method, A-FKNN, which learns an optimal k value for each testing sample, along with a faster variant, FA-FKNN. Experimental results show that both A-FKNN and FA-FKNN outperform the compared methods in classification accuracy, with FA-FKNN also running faster.
Owing to its strong performance in handling uncertain and ambiguous data, the fuzzy k-nearest-neighbor method (FKNN) has achieved substantial success in a wide variety of applications. However, its classification performance deteriorates heavily if the number k of nearest neighbors is fixed unsuitably. This study questions the common practice of using one fixed k value for all testing samples and devises a novel FKNN-based classification method, the fuzzy KNN method with adaptive nearest neighbors (A-FKNN), which learns a distinct optimal k value for each testing sample. In the training stage, after applying a sparse representation method to reconstruct all training samples, A-FKNN learns the optimal k value for each training sample and builds a decision tree (the A-FKNN tree) from the training samples relabeled with their learned optimal k values instead of their original class labels, so that each leaf node stores the corresponding optimal k value. In the testing stage, A-FKNN identifies the optimal k value for each testing sample by searching the A-FKNN tree and then runs FKNN with that k value. Moreover, a fast version of A-FKNN, FA-FKNN, is designed by building an FA-FKNN decision tree, which stores in each leaf node the optimal k value together with only a subset of the training samples. Experimental results on 32 UCI datasets demonstrate that both A-FKNN and FA-FKNN outperform the compared methods in terms of classification accuracy and that FA-FKNN has a shorter running time.
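
To make the two-stage procedure concrete, below is a minimal Python sketch of the A-FKNN idea, not the authors' implementation: it substitutes a simple leave-one-out correctness criterion for the paper's sparse-representation-based reconstruction when choosing each training sample's optimal k, and it uses scikit-learn's DecisionTreeClassifier in place of the A-FKNN tree. All function names and parameters here (fknn_predict, k_candidates, max_depth) are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import NearestNeighbors

def fknn_predict(X_train, y_train, x, k, m=2.0):
    # Simplified fuzzy KNN vote: crisp training labels with
    # inverse-distance membership weighting (after Keller et al., 1985).
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, idx = nn.kneighbors(x.reshape(1, -1))
    dist, idx = dist[0], idx[0]
    weights = 1.0 / np.maximum(dist, 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    scores = [weights[y_train[idx] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

def train_a_fknn(X, y, k_candidates=(1, 3, 5, 7, 9)):
    # Training stage (simplified): pick each training sample's optimal k
    # by leave-one-out correctness, then fit a decision tree that maps
    # features to the learned optimal k values (the "A-FKNN tree").
    best_k = np.empty(len(X), dtype=int)
    for i in range(len(X)):
        X_loo = np.delete(X, i, axis=0)
        y_loo = np.delete(y, i)
        best_k[i] = next(
            (k for k in k_candidates
             if fknn_predict(X_loo, y_loo, X[i], k) == y[i]),
            k_candidates[0],  # fallback if no candidate is correct
        )
    return DecisionTreeClassifier(max_depth=5).fit(X, best_k)

def a_fknn_predict(tree, X_train, y_train, X_test):
    # Testing stage: look up each sample's k in the tree, then run FKNN.
    ks = tree.predict(X_test)
    return np.array([fknn_predict(X_train, y_train, x, int(k))
                     for x, k in zip(X_test, ks)])

Under the same assumptions, the FA-FKNN speedup can be sketched by restricting the FKNN vote to the training samples stored in the matched leaf:

def fa_fknn_predict(tree, X_train, y_train, X_test):
    # FA-FKNN sketch: each leaf keeps only the training samples that fall
    # into it, so the neighbor search runs over a smaller subset.
    train_leaves = tree.apply(X_train)
    ks = tree.predict(X_test)
    test_leaves = tree.apply(X_test)
    preds = []
    for x, k, leaf in zip(X_test, ks, test_leaves):
        mask = train_leaves == leaf
        if mask.sum() < int(k):  # leaf too small: fall back to all samples
            mask[:] = True
        preds.append(fknn_predict(X_train[mask], y_train[mask], x, int(k)))
    return np.array(preds)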
