Article

Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution?

Journal

MINDS AND MACHINES
Volume 33, Issue 3, Pages 507-524

Publisher

SPRINGER
DOI: 10.1007/s11023-023-09642-0

Keywords

Deep learning; Concepts; Object identity; Objective representation; Semantic segmentation; Similarity semantics; Content identity; Language of thought; Phenomenology


This paper discusses the representations of deep convolutional neural networks and argues that supplementation by Quine's apparatus of identity and quantification is necessary for them to achieve concepts and represent objects. It also proposes a Fodorian hybrid model, based on statistical learning, to overcome regress and circularity and achieve objective representation.
The representations of deep convolutional neural networks (CNNs) are formed by generalizing over similarities and abstracting from differences, in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339-5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues that these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine's apparatus of identity and quantification in order to (1) achieve concepts, and (2) represent objects, as opposed to half-entities corresponding to similarity amalgams (Quine, Quintessence, Cambridge, 2004, p. 107). Similarity amalgams are also called approximate meaning[s] (Marcus & Davis, Rebooting AI, Pantheon, 2019, p. 132). Although Husserl concluded from the infinite regress and circularity arguments examined in this paper that the empiricist theory of abstraction (and a fortiori deep CNNs) must be abandoned entirely, I argue that the statistical learning of deep CNNs may be incorporated into a Fodorian hybrid account that supports Quine's sortal predicates, negation, plurals, identity, pronouns, and quantifiers, which are representationally necessary to overcome the regress/circularity in content constitution and achieve objective (as opposed to similarity-subjective) representation (Burge, Origins of Objectivity. Oxford, 2010, p. 238). I take as my starting point Yoshimi's (New Frontiers in Psychology, 2011) attempt to explain Husserlian phenomenology with neural networks, but depart from it in light of these arguments and consequently propose a two-system view, which converges with Weiskopf's proposal (Observational Concepts. The Conceptual Mind. MIT, 2015. 223-248).
