Article

SMOTE for Learning from Imbalanced Data: Progress and Challenges, Marking the 15-year Anniversary

Journal

JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
Volume 61, Issue -, Pages 863-905

Publisher

AI ACCESS FOUNDATION
DOI: 10.1613/jair.1.11192

Keywords

-

Funding

  1. Spanish Ministry of Science and Technology [TIN2014-57251-P, TIN2015-68454-R, TIN2017-89517-P]
  2. Project BigDaP-TOOLS - Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica
  3. National Science Foundation (NSF) [IIS-1447795]

Abstract

The Synthetic Minority Oversampling Technique (SMOTE) preprocessing algorithm is considered the de facto standard in the framework of learning from imbalanced data. This is due to the simplicity of its design, as well as its robustness when applied to different types of problems. Since its publication in 2002, SMOTE has proven successful in a variety of applications across several different domains. SMOTE has inspired several approaches to counter the issue of class imbalance, and has also contributed significantly to new supervised learning paradigms, including multilabel classification, incremental learning, semi-supervised learning, and multi-instance learning, among others. It is a standard benchmark for learning from imbalanced data, and is featured in a number of different software packages, from open source to commercial. In this paper, marking the fifteenth anniversary of SMOTE, we reflect on the SMOTE journey, discuss the current state of affairs with SMOTE and its applications, and identify the next set of challenges to extend SMOTE for Big Data problems.
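The core idea of SMOTE, as described in the 2002 paper, is to generate synthetic minority-class samples by interpolating between a minority sample and one of its k nearest minority-class neighbors. The sketch below illustrates this in plain NumPy; the function name and parameters are our own choices, not from the paper or any particular library.

```python
import numpy as np

def smote(X_min, n_synthetic, k=5, rng=None):
    """Illustrative SMOTE sketch: create n_synthetic samples by linear
    interpolation between minority samples and their k nearest
    minority-class neighbors (names and defaults are assumptions)."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    n = len(X_min)
    # Pairwise Euclidean distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each sample from its own neighbors
    # Indices of the k nearest minority neighbors for each sample.
    nn = np.argsort(d, axis=1)[:, :k]
    out = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                       # pick a random minority sample
        j = nn[i, rng.integers(min(k, n - 1))]    # pick one of its neighbors
        gap = rng.random()                        # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Because each synthetic point lies on the segment between two existing minority samples, the new samples fall inside the minority class's local neighborhoods rather than simply duplicating existing points, which is what distinguishes SMOTE from random oversampling.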
