Article

Explaining face representation in the primate brain using different computational models

Journal

CURRENT BIOLOGY
Volume 31, Issue 13, Pages 2785-+

Publisher

CELL PRESS
DOI: 10.1016/j.cub.2021.04.014

Keywords

-

Funding

  1. NIH [EY03065001]
  2. Howard Hughes Medical Institute
  3. Chen Center for Systems Neuroscience at Caltech

Abstract

Understanding how the brain represents the identity of complex objects is a central challenge of visual neuroscience. The principles governing object processing have been extensively studied in the macaque face patch system, a sub-network of inferotemporal (IT) cortex specialized for face processing. A previous study reported that single face patch neurons encode axes of a generative model called the active appearance model, which transforms 50D feature vectors separately representing facial shape and facial texture into facial images. However, a systematic investigation comparing this model to other computational models, especially convolutional neural network models that have shown success in explaining neural responses in the ventral visual stream, has been lacking. Here, we recorded responses of cells in the most anterior face patch, anterior medial (AM), to a large set of real face images and compared a large number of models for explaining neural responses. We found that the active appearance model better explained responses than any other model except CORnet-Z, a feedforward deep neural network trained on general object classification to classify non-face images, whose performance it tied on some face image sets and exceeded on others. Surprisingly, deep neural networks trained specifically on facial identification did not explain neural responses well. A major reason is that units in the network, unlike neurons, are less modulated by face-related factors unrelated to facial identification, such as illumination.
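The comparison described in the abstract rests on linear encoding models: each candidate model yields a feature vector per face image, and the question is how well a linear readout of those features predicts each neuron's response on held-out images. Below is a minimal, hypothetical sketch of that analysis using synthetic data (the dimensions, neuron counts, and ridge penalty are illustrative assumptions, not the paper's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: responses of n_neurons AM cells to n_images faces,
# plus a 50-D feature vector per image from a candidate model (e.g. the
# active appearance model's shape + texture axes). Synthetic data stand
# in for real recordings here.
n_images, n_features, n_neurons = 200, 50, 30
features = rng.normal(size=(n_images, n_features))
true_axes = rng.normal(size=(n_features, n_neurons))
responses = features @ true_axes + 0.1 * rng.normal(size=(n_images, n_neurons))

def encoding_r2(X, Y, train_frac=0.75, alpha=1.0):
    """Fit a ridge linear encoding model on a training split of images
    and return the mean held-out R^2 across neurons."""
    n_train = int(len(X) * train_frac)
    Xtr, Xte = X[:n_train], X[n_train:]
    Ytr, Yte = Y[:n_train], Y[n_train:]
    # Ridge solution: W = (X'X + alpha*I)^{-1} X'Y
    W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ Ytr)
    pred = Xte @ W
    ss_res = ((Yte - pred) ** 2).sum(axis=0)
    ss_tot = ((Yte - Yte.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))

print(f"held-out R^2: {encoding_r2(features, responses):.2f}")
```

Running the same cross-validated fit with features from each model (active appearance model, CORnet-Z, face-identification networks, etc.) and ranking the models by held-out explained variance is the style of comparison the abstract summarizes.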

