Article

A Speech-Level-Based Segmented Model to Decode the Dynamic Auditory Attention States in the Competing Speaker Scenes

Journal

FRONTIERS IN NEUROSCIENCE
Volume 15, Issue -, Pages -

Publisher

FRONTIERS MEDIA SA
DOI: 10.3389/fnins.2021.760611

Keywords

auditory attention decoding; speech-RMS-level segments; auditory attention switching; temporal response function; EEG signals

Funding

  1. National Natural Science Foundation of China [61971212]
  2. Shenzhen Sustainable Support Program for High-level University [20200925154002001, G02236002]
  3. Southern University of Science and Technology

Abstract

This study investigated the effect of RMS-level-based speech segmentation on auditory attention decoding (AAD) performance in competing-speaker auditory scenes. The segmented AAD model improved decoding performance under both sustained and switched auditory attention, and the TRF weight and AAD accuracy served as effective indicators of changes in auditory attention.
In competing-speaker environments, human listeners need to focus or switch their auditory attention according to their dynamic intentions. Reliable cortical tracking of the speech envelope is an effective feature for decoding the target speech from neural signals. Moreover, previous studies revealed that root mean square (RMS)-level-based speech segmentation contributes substantially to target speech perception under sustained auditory attention. This study further investigated the effect of RMS-level-based speech segmentation on auditory attention decoding (AAD) performance with both sustained and switched attention in competing-speaker auditory scenes. Objective biomarkers derived from cortical activity were also developed to index the dynamic auditory attention states. In the current study, subjects were asked to concentrate on, or switch their attention between, two competing speaker streams. The neural responses to the higher- and lower-RMS-level speech segments were analyzed via the linear temporal response function (TRF) before and after attention switched from one speaker stream to the other. Furthermore, the AAD performance of a unified TRF decoding model was compared to that of a speech-RMS-level-based segmented decoding model as the auditory attention states changed dynamically. The results showed that the weight of the typical TRF component at approximately the 100-ms time lag was sensitive to switches of auditory attention. Compared with the unified AAD model, the segmented AAD model improved attention decoding performance under both sustained and switched auditory attention across a wide range of signal-to-masker ratios (SMRs). In competing-speaker scenes, the TRF weight and AAD accuracy could thus serve as effective indicators of changes in auditory attention. In addition, across a wide range of SMRs (i.e., from 6 to -6 dB in this study), the segmented AAD model showed robust decoding performance even with short decision window lengths, suggesting that this speech-RMS-level-based model has the potential to decode dynamic attention states in realistic auditory scenarios.
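
As a rough illustration of the pipeline the abstract describes, the minimal Python sketch below (a) labels envelope frames as higher- or lower-RMS-level relative to the overall RMS, and (b) performs correlation-based attention decoding within a short decision window using a linear backward decoder. All function names, the frame length, the ridge parameter, and the lag-free decoder are illustrative assumptions, not the authors' exact implementation; a full backward TRF would also span a range of temporal lags.

import numpy as np

def rms_level_segments(envelope, frame_len):
    """Flag each frame as higher-RMS-level (True) or lower-RMS-level (False)
    relative to the overall RMS of the envelope. The global-RMS threshold is
    an assumed segmentation criterion for this sketch."""
    n_frames = len(envelope) // frame_len
    frames = envelope[:n_frames * frame_len].reshape(n_frames, frame_len)
    frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
    global_rms = np.sqrt(np.mean(envelope ** 2))
    return frame_rms >= global_rms

def train_decoder(eeg, env, lam=1e2):
    """Ridge-regularized least-squares estimate of a spatial backward
    decoder; temporal lags are omitted here for brevity."""
    # eeg: (time x channels), env: (time,)
    return np.linalg.solve(eeg.T @ eeg + lam * np.eye(eeg.shape[1]),
                           eeg.T @ env)

def decode_attention(eeg_win, env_attended, env_ignored, decoder):
    """Reconstruct the envelope from one EEG decision window and pick the
    speaker whose envelope correlates more with the reconstruction."""
    recon = eeg_win @ decoder  # (time x channels) @ (channels,) -> (time,)
    r_att = np.corrcoef(recon, env_attended)[0, 1]
    r_ign = np.corrcoef(recon, env_ignored)[0, 1]
    return r_att > r_ign  # True: attended speaker correctly decoded

Under these assumptions, the segmented model would train separate decoders on the frames flagged as higher- and lower-RMS-level by rms_level_segments and combine their decisions within each decision window, whereas the unified model fits a single decoder over all frames.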

