Article

Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution?

Journal

MINDS AND MACHINES
Volume 33, Issue 3, Pages 507-524

Publisher

SPRINGER
DOI: 10.1007/s11023-023-09642-0

Keywords

Deep learning; Concepts; Object identity; Objective representation; Semantic segmentation; Similarity semantics; Content identity; Language of thought; Phenomenology

Summary: This paper examines the representations of deep convolutional neural networks and argues that supplementation by Quine's apparatus of identity and quantification is necessary for them to achieve concepts and represent objects. It also proposes a Fodorian hybrid model, incorporating statistical learning, to overcome regress and circularity and achieve objective representation.
The representations of deep convolutional neural networks (CNNs) are formed from generalizing similarities and abstracting from differences in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339-5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine's apparatus of identity and quantification in order to (1) achieve concepts, and (2) represent objects, as opposed to half-entities corresponding to similarity amalgams (Quine, Quintessence, Cambridge, 2004, p. 107). Similarity amalgams are also called approximate meaning[s] (Marcus & Davis, Rebooting AI, Pantheon, 2019, p. 132). Although Husserl inferred the complete abandonment of the empiricist theory of abstraction (a fortiori deep CNNs) due to the infinite regress and circularity arguments examined in this paper, I argue that the statistical learning of deep CNNs may be incorporated into a Fodorian hybrid account that supports Quine's sortal predicates, negation, plurals, identity, pronouns, and quantifiers which are representationally necessary to overcome the regress/circularity in content constitution and achieve objective (as opposed to similarity-subjective) representation (Burge, Origins of Objectivity. Oxford, 2010, p. 238). I base myself initially on Yoshimi's (New Frontiers in Psychology, 2011) attempt to explain Husserlian phenomenology with neural networks but depart from him due to the arguments and consequently propose a two-system view which converges with Weiskopf's proposal (Observational Concepts. The Conceptual Mind. MIT, 2015. 223-248).
