Proceedings Paper

On Post Selection Using Test Sets (PSUTS) in AI


This theory paper discusses the unethical practice of Post Selection Using Test Sets (PSUTS) in Artificial Intelligence (AI) and its implications for error-backpropagation methods in deep learning. It categorizes AI methods into two schools, connectionist and symbolic, and distinguishes between machine PSUTS and human PSUTS practices. The paper also proposes a new standard for AI metrology to improve transparency in future publications.
This is a theory paper. It first raises a rarely reported but unethical practice in Artificial Intelligence (AI) called Post Selection Using Test Sets (PSUTS). As a consequence of this practice, the popular error-backprop methodology in deep learning lacks acceptable generalization power. All AI methods fall into two broad schools, connectionist and symbolic. PSUTS comes in two kinds: machine PSUTS and human PSUTS. The connectionist school has been criticized for its scruffiness, owing to its huge number of network parameters and now to machine PSUTS; the seemingly clean symbolic school, however, appears more brittle than commonly known because of human PSUTS. This paper formally defines what PSUTS is, analyzes why error-backprop methods with random initial weights suffer from severe local minima, why PSUTS violates well-established research ethics, and why every paper that used PSUTS should have at least transparently reported its PSUTS data. For improved transparency in future publications, this paper proposes a new standard for AI metrology: reporting the developmental errors of all networks trained in a project, on which the selection of the luckiest network depends, along with the Three Conditions: (1) system restrictions, (2) training experience, and (3) computational resources.
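The contrast between machine PSUTS and the transparent reporting the abstract proposes can be sketched in a few lines. The sketch below is illustrative only: it does not train real networks, and the error-rate range and number of runs are hypothetical stand-ins for the outcomes of training many networks from different random initial weights.

```python
import random
import statistics

def train_and_evaluate(seed):
    """Stand-in for training one network from random initial weights.

    Returns a simulated test-set error rate; in a real project this
    would be the trained network's error on the held-out test set.
    The [0.05, 0.30] range is a hypothetical assumption.
    """
    rng = random.Random(seed)
    return rng.uniform(0.05, 0.30)

# Machine PSUTS: train many networks and report only the luckiest one,
# i.e., select the network using the test set itself.
errors = [train_and_evaluate(seed) for seed in range(50)]
luckiest = min(errors)

# Transparent reporting, in the spirit of "developmental errors for all
# networks trained in a project": disclose the whole distribution that
# the selection of the luckiest network depends on.
print(f"luckiest network error : {luckiest:.3f}")
print(f"mean error over 50 runs: {statistics.mean(errors):.3f}")
print(f"worst error            : {max(errors):.3f}")
```

Reporting only the first number is what the paper calls machine PSUTS; reporting all three (together with the Three Conditions) is the kind of disclosure the proposed metrology standard asks for.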
