Article

L1-Norm Robust Regularized Extreme Learning Machine with Asymmetric C-Loss for Regression

Journal

AXIOMS
Volume 12, Issue 2, Pages: -

Publisher

MDPI
DOI: 10.3390/axioms12020204

Keywords

extreme learning machine; asymmetric least square loss; expectile; correntropy; robustness

In this paper, a novel extreme learning machine algorithm called L1-ACELM is proposed to address the overfitting problem. The algorithm benefits from the L1-norm and replaces the square loss function with the AC-loss function, which is non-convex, bounded, and relatively insensitive to noise. Experimental results show that L1-ACELM achieves better generalization performance than other state-of-the-art algorithms, especially in the presence of noise.
Extreme learning machines (ELMs) have recently attracted significant attention due to their fast training speed and good prediction performance. However, ELMs ignore the inherent distribution of the original samples and are prone to overfitting, which prevents them from achieving good generalization performance. In this paper, based on the expectile penalty and correntropy, an asymmetric C-loss function (called AC-loss) is proposed, which is non-convex, bounded, and relatively insensitive to noise. Furthermore, a novel extreme learning machine called the L1-norm robust regularized extreme learning machine with asymmetric C-loss (L1-ACELM) is presented to handle the overfitting problem. The proposed algorithm benefits from the L1-norm and replaces the square loss function with the AC-loss function. The L1-ACELM can generate a more compact network with fewer hidden nodes and reduce the impact of noise. To evaluate the effectiveness of the proposed algorithm on noisy datasets, different levels of noise are added in the numerical experiments. The results on different types of artificial and benchmark datasets demonstrate that L1-ACELM achieves better generalization performance than other state-of-the-art algorithms, especially when noise exists in the datasets.
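The abstract describes the AC-loss as combining an expectile (asymmetric least squares) penalty with a bounded correntropy kernel, and the model as an ELM whose output weights are learned under an L1-norm penalty. The Python sketch below shows one plausible reading of that construction; the exact loss definition and the solver used in the paper are not reproduced here, and `ac_loss`, `fit_l1_acelm`, `tau`, `sigma`, and `lam` are assumed names and parameters introduced only for illustration.

```python
import numpy as np

# Minimal, hypothetical sketch of an asymmetric correntropy-style loss (AC-loss)
# and an L1-regularized ELM fit with it. This is NOT the paper's implementation;
# the loss form and the subgradient solver are illustrative assumptions.

def ac_loss(residual, tau=0.7, sigma=1.0):
    """Bounded, non-convex loss: an expectile-style asymmetric square term
    passed through a correntropy (Gaussian) kernel."""
    w = np.where(residual >= 0, tau, 1.0 - tau)           # asymmetric weight
    return sigma ** 2 * (1.0 - np.exp(-w * residual ** 2 / sigma ** 2))

def elm_hidden(X, W, b):
    """Random-feature hidden layer of an ELM with sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_l1_acelm(X, y, n_hidden=50, lam=1e-2, tau=0.7, sigma=1.0,
                 lr=1e-2, n_iter=2000, seed=0):
    """Toy subgradient solver for min_beta sum_i AC-loss(y_i - h_i beta) + lam*||beta||_1."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))           # random input weights (fixed)
    b = rng.normal(size=n_hidden)                         # random hidden biases (fixed)
    H = elm_hidden(X, W, b)
    beta = np.zeros(n_hidden)
    for _ in range(n_iter):
        r = y - H @ beta
        w = np.where(r >= 0, tau, 1.0 - tau)
        # d(AC-loss)/dr = 2*w*r*exp(-w*r^2/sigma^2); chain rule through r = y - H @ beta
        grad = -H.T @ (2.0 * w * r * np.exp(-w * r ** 2 / sigma ** 2)) + lam * np.sign(beta)
        beta -= lr * grad
    return W, b, beta

# Usage sketch: W, b, beta = fit_l1_acelm(X_train, y_train)
#               y_pred = elm_hidden(X_test, W, b) @ beta
```

Under this reading, the bounded loss saturates for large residuals so that noisy samples cannot dominate the fit, while the L1 penalty drives many output weights to zero, which is consistent with the compact-network and noise-robustness claims in the abstract.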

