Article

Stochastic separation theorems

Journal

NEURAL NETWORKS
Volume 94, Pages 255-259

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2017.07.014

Keywords

Fisher's discriminant; Random set; Measure concentration; Linear separability; Machine learning; Extreme point

Funding

  1. Innovate UK [KTP009890, KTP010522]


The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises in all Artificial Intelligence applications in the real world. Its solution requires robust separation of samples with errors from samples where the system works properly. We demonstrate that in (moderately) high dimension this separation could be achieved with probability close to one by linear discriminants. Based on fundamental properties of measure concentration, we show that for M < a exp(bn), random M-element sets in R^n are linearly separable with probability p > 1 − υ, where 0 < υ < 1 is a given small constant. Exact values of a, b > 0 depend on the probability distribution that determines how the random M-element sets are drawn, and on the constant υ. These stochastic separation theorems provide a new instrument for the development, analysis, and assessment of machine learning methods and algorithms in high dimension. Theoretical statements are illustrated with numerical examples.
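The flavor of these numerical illustrations can be reproduced with a short simulation. The sketch below (an assumption about the setup, not the paper's own code) draws M points uniformly from the unit ball in R^n and checks, for each point x, the Fisher-type separability condition ⟨x, y⟩ < ⟨x, x⟩ for every other point y, i.e. that the hyperplane with normal x separates x from the rest of the sample. The function names are hypothetical.

```python
import numpy as np

def sample_ball(m, n, rng):
    # Draw m points uniformly from the unit ball in R^n:
    # uniform direction times radius U^(1/n).
    x = rng.standard_normal((m, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    r = rng.random(m) ** (1.0 / n)
    return x * r[:, None]

def fisher_separable_fraction(m, n, seed=0):
    # Fraction of points x with <x, y> < <x, x> for all other y,
    # i.e. linearly separable from the rest by the discriminant
    # with normal vector x.
    rng = np.random.default_rng(seed)
    pts = sample_ball(m, n, rng)
    gram = pts @ pts.T                     # all pairwise inner products
    sq = (pts * pts).sum(axis=1)           # <x, x> for each point
    np.fill_diagonal(gram, -np.inf)        # exclude the point itself
    return float(np.mean(gram.max(axis=1) < sq))

# In low dimension most points fail the test; in moderately high
# dimension almost all of a large random sample passes it.
low = fisher_separable_fraction(1000, 3)
high = fisher_separable_fraction(1000, 100)
```

Running this with M = 1000 shows the concentration effect the theorems describe: for n = 3 only a small fraction of points are separable this way, while for n = 100 the fraction is close to one.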

