Journal
NEUROCOMPUTING
Volume 195, Pages 143-148
Publisher
ELSEVIER
DOI: 10.1016/j.neucom.2015.08.112
Keywords
Classification; kNN; Big data; Data cluster; Cluster center
Funding
- China 973 Program [2013CB329404]
- National Natural Science Foundation of China [61450001, 61263035, 61573270]
- Guangxi Natural Science Foundation [2012GXNSFGA060004, 2015GXNSFCB139011]
- China Postdoctoral Science Foundation [2015M57570837]
- Guangxi 100 Plan
- Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing
- Guangxi Bagui Scholar Teams for Innovation and Research Project
K nearest neighbors (kNN) is an efficient lazy learning algorithm that has been successfully applied in many real-world applications, and it is natural to scale it to large-scale datasets. In this paper, we propose to first run k-means clustering to partition the whole dataset into several parts, and then perform kNN classification within each cluster. We conduct sets of experiments on big data and on medical imaging data. The experimental results show that the proposed kNN classification performs well in terms of both accuracy and efficiency. (C) 2016 Elsevier B.V. All rights reserved.
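The abstract's two-stage scheme (partition the training set with k-means, then search neighbors only inside the query's cluster) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names, the deterministic farthest-point initialization, and all parameter choices are assumptions made for the sketch.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Simple Lloyd's k-means with deterministic farthest-point initialization.

    Returns the cluster centers and the cluster label of each training point.
    """
    # Farthest-point init: start from X[0], then repeatedly add the point
    # farthest from all chosen centers (an assumption; any init would do).
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def cluster_knn_predict(X, y, centers, labels, x, n_neighbors=3):
    """Assign query x to its nearest cluster center, then run kNN
    (majority vote) only among the training points of that cluster."""
    c = np.argmin(((centers - x) ** 2).sum(-1))
    Xc, yc = X[labels == c], y[labels == c]
    k = min(n_neighbors, len(Xc))
    nearest = np.argsort(((Xc - x) ** 2).sum(-1))[:k]
    vals, counts = np.unique(yc[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```

The efficiency gain comes from the second stage: each query is compared against one cluster rather than the whole dataset, so the per-query cost drops roughly by the number of clusters when the partition is balanced.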