Article

Beyond Human: Deep Learning, Explainability and Representation

Journal

Theory, Culture & Society
Volume 38, Issue 7-8, Pages 55-77

Publisher

SAGE Publications Ltd
DOI: 10.1177/0263276420966386

Keywords

algorithmic thought; deep neural networks; explanation; incommensurability; interpretability; philosophy; XAI


This article discusses the opacity of deep learning and artificial intelligence technologies, examines the challenges posed by their abstractive operations, and reconsiders the explainability of these technologies. The author analyses how technoscience and technoculture re-present algorithmic procedures and addresses explainability through philosophical concepts.
This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of 'algorithmic thought'. Research in deep learning serves as its case study. This artificial intelligence (AI) technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of 're-presenting' the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability (originally developed in the philosophy of science) to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic 'thinking' operations.

Authors

M. Beatrice Fazi
