Article

Decoding self-motion from visual image sequence predicts distinctive features of reflexive motor responses to visual motion

Journal

NEURAL NETWORKS
Volume 162, Issue -, Pages 516-530

Publisher

PERGAMON-ELSEVIER SCIENCE LTD
DOI: 10.1016/j.neunet.2023.03.020

Keywords

Visual motion coding; Manual following response; Ocular following response; Visuomotor response; Spatiotemporal frequency tuning; Convolutional neural network

Visual motion analysis is essential for humans to detect moving objects and self-motion. A neural network trained on image motion can decode self-motion during human movements and exhibits spatiotemporal frequency tuning similar to that of the reflexive ocular and manual responses induced by visual motion.
Visual motion analysis is crucial for humans to detect external moving objects and self-motion, both of which inform the planning and execution of actions for interacting with the environment. Here we show that an image motion analysis, implemented as a convolutional neural network trained to decode self-motion during natural human movements, exhibits specificities similar to those of the reflexive ocular and manual responses induced by large-field visual motion, in terms of stimulus spatiotemporal frequency tuning. The spatiotemporal frequency tuning of the decoder peaked at high temporal and low spatial frequencies, as observed in the reflexive ocular and manual responses, but differed significantly from the frequency power of the visual images themselves and from the density distribution of self-motion. Furthermore, artificial manipulations of the training data sets predicted marked changes in the specificity of the spatiotemporal tuning. Interestingly, although the spatiotemporal frequency tunings for full-field visual stimuli were similar between the vertical-axis rotational direction and the transversal direction, the tunings for center-masked stimuli differed between these directions, and this difference is qualitatively similar to the discrepancy between the ocular and manual responses. In addition, representational analysis demonstrated that head-axis rotation was decoded by relatively simple spatial accumulation over the visual field, whereas transversal motion was decoded through more complex spatial interactions of visual information. These synthetic model examinations support the idea that the visual motion analyses eliciting reflexive motor responses, which are critical for interacting with the external world, are acquired for decoding self-motion.

© 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
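To make the methodology concrete, the sketch below illustrates the general approach described in the abstract: a small 3D convolutional network that maps a short image sequence to a self-motion estimate, which is then probed with drifting sinusoidal gratings to map its spatiotemporal frequency tuning. This is a minimal illustration under stated assumptions; the architecture (`SelfMotionDecoder`), the 6-D motion output, the frame rate, pixel scale, and frequency grids are hypothetical stand-ins, not the authors' actual model, training data, or stimuli.

```python
import numpy as np
import torch
import torch.nn as nn


class SelfMotionDecoder(nn.Module):
    """Toy 3D-CNN mapping a grayscale frame sequence to a 6-D self-motion vector
    (e.g., 3 rotation + 3 translation components). Hypothetical architecture."""

    def __init__(self, out_dim: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AvgPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # pool over time and visual field
        )
        self.readout = nn.Linear(32, out_dim)

    def forward(self, x):  # x: (batch, 1, T, H, W)
        z = self.features(x).flatten(1)
        return self.readout(z)


def drifting_grating(sf_cpd, tf_hz, n_frames=16, size=64, fps=60.0, deg_per_px=0.1):
    """Full-field drifting sinusoidal grating; sf in cycles/deg, tf in Hz.
    Frame rate and pixel scale are illustrative assumptions."""
    t = np.arange(n_frames) / fps
    x = np.arange(size) * deg_per_px
    phase = 2 * np.pi * (sf_cpd * x[None, None, :] - tf_hz * t[:, None, None])
    return np.broadcast_to(np.sin(phase), (n_frames, size, size)).astype(np.float32)


def tuning_map(model, sfs, tfs):
    """Decoded-response magnitude for each (spatial, temporal) frequency pair."""
    model.eval()
    out = np.zeros((len(sfs), len(tfs)))
    with torch.no_grad():
        for i, sf in enumerate(sfs):
            for j, tf in enumerate(tfs):
                stim = torch.from_numpy(drifting_grating(sf, tf))[None, None]  # (1,1,T,H,W)
                out[i, j] = model(stim).norm().item()
    return out


if __name__ == "__main__":
    model = SelfMotionDecoder()         # untrained here; see note below
    sfs = [0.02, 0.05, 0.1, 0.2, 0.4]   # cycles/deg
    tfs = [0.5, 1, 2, 4, 8, 16]         # Hz
    print(tuning_map(model, sfs, tfs).round(3))
```

In practice the decoder would first be fit on image sequences recorded during natural movements, paired with the measured self-motion; probing an untrained network here only demonstrates how a tuning map over spatial and temporal frequencies can be read out from the decoded responses.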
