Article

Learning from imprecise and fuzzy observations: Data disambiguation through generalized loss minimization

Journal

INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
Volume 55, Issue 7, Pages 1519-1534

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ijar.2013.09.003

Keywords

Imprecise data; Fuzzy sets; Machine learning; Extension principle; Data disambiguation; Loss function

Abstract

Methods for analyzing or learning from fuzzy data have attracted increasing attention in recent years. In many cases, however, existing methods (for precise, non-fuzzy data) are extended to the fuzzy case in an ad hoc manner, and without carefully considering the interpretation of a fuzzy set when it is used for modeling data. Distinguishing between an ontic and an epistemic interpretation of fuzzy set-valued data, and focusing on the latter, we argue that a fuzzification of learning algorithms based on an application of the generic extension principle is not appropriate. In fact, the extension principle fails to properly exploit the inductive bias underlying statistical and machine learning methods, although this bias, at least in principle, offers a means for disambiguating the fuzzy data. Alternatively, we therefore propose a method which is based on the generalization of loss functions in empirical risk minimization, and which performs model identification and data disambiguation simultaneously. Elaborating on the fuzzification of specific types of losses, we establish connections to well-known loss functions in regression and classification. We compare our approach with related methods and illustrate its use in logistic regression for binary classification. © 2013 Elsevier Inc. All rights reserved.
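The generalized-loss idea can be made concrete in its simplest, set-valued (crisp) special case. The sketch below is a hypothetical illustration, not the authors' code: it replaces the logistic loss L(y, f(x)) by the optimistic extension L*(Y, f(x)) = min_{y ∈ Y} L(y, f(x)) over a candidate label set Y ⊆ {−1, +1}, so that empirical risk minimization identifies the model and disambiguates the imprecise labels at the same time; genuinely fuzzy labels would be handled analogously via level-cuts. All names (fit, Y_sets, etc.) are illustrative assumptions.

```python
import numpy as np

def logistic_loss(y, score):
    """Standard logistic loss for a precise label y in {-1, +1}."""
    return np.log1p(np.exp(-y * score))

def generalized_loss(candidates, score):
    """Optimistic extension of the loss to a set-valued label:
    the model is charged only for the best-fitting candidate,
    which implicitly disambiguates the imprecise observation."""
    return min(logistic_loss(y, score) for y in candidates)

def fit(X, Y_sets, lr=0.1, epochs=500):
    """Plain (sub)gradient descent on the generalized empirical risk."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = np.zeros_like(w)
        for x, cands in zip(X, Y_sets):
            score = x @ w
            # Disambiguation step: pick the currently best-fitting label...
            y_star = min(cands, key=lambda y: logistic_loss(y, score))
            # ...and follow the gradient of the logistic loss for that label.
            sigma = 1.0 / (1.0 + np.exp(-y_star * score))
            grad += -(1.0 - sigma) * y_star * x
        w -= lr * grad / len(X)
    return w

# Toy usage: two precise observations and one fully imprecise one.
X = np.array([[1.0, 0.5], [1.0, -0.5], [1.0, 0.1]])
Y_sets = [(+1,), (-1,), (-1, +1)]
w = fit(X, Y_sets)
```

Note the design choice this encodes: under the optimistic (minimin) strategy, a completely imprecise label {−1, +1} imposes no constraint and is effectively resolved in favor of whichever label the current model fits best, which is precisely how the inductive bias of the learner, rather than the extension principle, drives the disambiguation.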

