4.7 Article

The C-loss function for pattern classification

Journal

PATTERN RECOGNITION
Volume 47, Issue 1, Pages 441-453

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.patcog.2013.07.017

Keywords

Classification; Correntropy; Neural network; Loss function; Backpropagation

Funding

  1. Joan and Lalit Bahl Fellowship
  2. Computational Science and Engineering Fellowship at the University of Illinois at Urbana-Champaign
  3. NSF [ECCS 0856441]

Abstract

This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that are well within the decision boundary and have small errors, and like the L0 or counting norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (the ideal 0-1 loss). We show that the discriminant function obtained by training a neural network with the proposed loss function in the neighborhood of the ideal 0-1 loss is immune to overfitting, more robust to outliers, and has consistent and better generalization performance compared to other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient-based online learning, it is a practical way of improving the performance of neural network classifiers. (C) 2013 Elsevier Ltd. All rights reserved.
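The behavior described in the abstract can be sketched numerically. The following is a minimal sketch of a correntropy-induced loss with a Gaussian kernel, l(e) = beta * (1 - exp(-e^2 / (2 * sigma^2))); the normalization constant `beta` (chosen here so the loss equals 1 at |e| = 1) and the function name are assumptions for illustration, not the paper's exact formulation. For small errors the exponential expands to give a scaled square loss; for large errors the loss saturates at `beta`, mimicking an L0/counting penalty; shrinking `sigma` sharpens the transition toward the 0-1 loss.

```python
import numpy as np

def c_loss(e, sigma=0.5):
    """Correntropy-style loss sketch: beta * (1 - exp(-e^2 / (2 sigma^2))).

    `beta` is a hypothetical normalization making the loss equal 1 at
    |e| = 1; the paper's exact scaling may differ.
    """
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    return beta * (1.0 - np.exp(-np.square(e) / (2.0 * sigma**2)))

# Near e = 0 the loss grows quadratically, like a scaled square loss;
# for large |e| it flattens out at beta, so outliers contribute a
# bounded penalty instead of dominating the gradient.
small = c_loss(0.1)   # approximately quadratic regime
large = c_loss(10.0)  # saturated regime, close to beta
```

Because the loss is smooth in `e`, it remains compatible with ordinary backpropagation, which is the practical point the abstract makes about gradient-based online learning.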
