Article

Toward the explainability, transparency, and universality of machine learning for behavioral classification in neuroscience

Journal

CURRENT OPINION IN NEUROBIOLOGY
Volume 73, Article 102544

Publisher

CURRENT BIOLOGY LTD
DOI: 10.1016/j.conb.2022.102544

Keywords

-

Funding

  1. NIDA [R00DA045662, P30DA048736, T32NS099578-04]
  2. NARSAD Young Investigator Award [27082]
  3. NIH [F31MH125587-01]

Machine learning approaches to ethological observation have the potential to transform behavioral neuroscience by improving our understanding of brain function and behavior. This review examines how such approaches drive standardization, specialization, and explainability within the field.
The use of rigorous ethological observation via machine learning techniques to understand brain function (computational neuroethology) is a rapidly growing approach that is poised to significantly change how behavioral neuroscience is commonly performed. With the development of open-source platforms for automated tracking and behavioral recognition, these approaches are now accessible to a wide array of neuroscientists despite variations in budget and computational experience. Importantly, this adoption has moved the field toward a common understanding of behavior and brain function through the removal of manual bias and the identification of previously unknown behavioral repertoires. Although less apparent, another consequence of this movement is the introduction of analytical tools that increase the explainability, transparency, and universality of machine-based behavioral classifications both within and between research groups. Here, we focus on three main applications of such machine-model explainability tools and metrics in the drive toward behavioral (i) standardization, (ii) specialization, and (iii) explainability. We provide a perspective on the use of explainability tools in computational neuroethology and detail why they are a necessary next step in the expansion of the field. Specifically, as a possible solution in behavioral neuroscience, we propose the use of Shapley values via SHapley Additive exPlanations (SHAP) as a diagnostic resource for explaining human annotation, as well as supervised and unsupervised behavioral machine learning analyses.
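
To make the proposal concrete, the following is a minimal sketch of how SHAP can audit a supervised behavioral classifier. This is not the authors' pipeline: the pose-derived feature names, the two behavior labels, and the random-forest model are hypothetical placeholders, and the sketch assumes the open-source shap Python package alongside scikit-learn.

    # Minimal sketch (assumed setup, not the paper's pipeline): Shapley-value
    # explanation of a supervised behavioral classifier via the shap package.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical pose-derived features standing in for output from an
    # automated tracking platform.
    feature_names = ["speed", "body_length", "head_angle", "paw_distance"]
    X = rng.normal(size=(1000, len(feature_names)))
    # Hypothetical ground truth: fast, elongated postures -> "locomotion" (1),
    # otherwise "grooming" (0); added noise keeps the task nontrivial.
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # TreeExplainer computes exact Shapley values for tree ensembles.
    sv = shap.TreeExplainer(clf).shap_values(X_test)
    if isinstance(sv, list):        # older shap versions: one array per class
        sv = sv[1]
    elif np.asarray(sv).ndim == 3:  # newer versions: (samples, features, classes)
        sv = sv[:, :, 1]

    # Global explanation: mean |SHAP| per feature reveals what the classifier
    # actually relies on when it labels a frame "locomotion" -- the kind of
    # diagnostic transparency the abstract argues for.
    for name, importance in zip(feature_names, np.abs(sv).mean(axis=0)):
        print(f"{name:>14s}: {importance:.3f}")

The same per-feature attributions can be compared across labs, or against the criteria human annotators report using, which is the sense in which SHAP can support behavioral standardization and transparency.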
