4.6 Article

Multi-fidelity model based on synthetic minority over-sampling technique

Journal

Publisher

SPRINGER
DOI: 10.1007/s11042-023-16701-2

Keywords

Class imbalance; Oversampling; Synthetic Minority over-sampling TEchnique; Multi-Fidelity


This study proposes a multi-fidelity model called MFSMOTE to better resolve class imbalance problems in classification models. MFSMOTE divides the data into low-fidelity and high-fidelity groups and uses prior information from the former to train the classification model, showing promising performance.
Oversampling is a commonly employed technique for addressing class imbalance: it equalizes the sizes of the data classes by adding synthetic minority class data. Classification accuracy in the presence of class imbalance therefore relies heavily on the quality of the generated minority class data. To ensure the quality of the input data for the classification model and to improve the stability of the oversampling approach, this study proposes a multi-fidelity model, MFSMOTE, based on the synthetic minority over-sampling technique (SMOTE). MFSMOTE partitions the dataset into low-fidelity and high-fidelity data and lets the two interact to yield a more robust classification model: the low-fidelity data provide prior information that guides the high-fidelity data, which in turn are used to train the classification model. MFSMOTE is compared with 12 existing oversampling algorithms across 21 imbalanced datasets, and the experimental results demonstrate that it is a promising new oversampling method.
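The workflow described in the abstract, SMOTE-style interpolation followed by a split of the synthetic samples into low-fidelity and high-fidelity groups, can be illustrated with a minimal sketch. The code below is not the authors' MFSMOTE algorithm: the functions `smote_interpolate` and `fidelity_split`, and the distance-to-nearest-real-sample fidelity criterion, are assumptions made purely for illustration.

```python
# Minimal sketch of the general idea only; NOT the authors' MFSMOTE algorithm.
import numpy as np

def smote_interpolate(X_min, n_samples, k=5, rng=None):
    """Generate n_samples synthetic points by interpolating between a randomly
    chosen minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    # Pairwise distances within the minority class; ignore self-distances.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]

    synthetic = []
    for _ in range(n_samples):
        i = rng.integers(len(X_min))            # random minority sample
        j = neighbours[i, rng.integers(k)]      # one of its k nearest neighbours
        lam = rng.random()                      # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

def fidelity_split(X_min, X_syn, threshold):
    """Hypothetical fidelity criterion (assumption): synthetic points close to a
    real minority sample are treated as high-fidelity, the rest as low-fidelity."""
    d = np.linalg.norm(X_syn[:, None, :] - X_min[None, :, :], axis=-1).min(axis=1)
    return X_syn[d <= threshold], X_syn[d > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_min = rng.normal(loc=2.0, size=(20, 2))   # toy minority class
    X_syn = smote_interpolate(X_min, n_samples=40, k=5, rng=1)
    high, low = fidelity_split(X_min, X_syn, threshold=0.5)
    print(f"high-fidelity: {len(high)}, low-fidelity: {len(low)}")
```

In this sketch the two groups are only separated; how the low-fidelity data would provide prior information to guide the high-fidelity data and the classifier is specific to the paper and is not reproduced here.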

