4.6 Article

Multimodal Vigilance Estimation Using Deep Learning

Journal

IEEE TRANSACTIONS ON CYBERNETICS
Volume 52, Issue 5, Pages 3097-3110

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TCYB.2020.3022647

Keywords

Deep learning; dimension reduction; electroencephalography (EEG); electrooculography (EOG); multimodal vigilance estimation

Funding

  1. National Natural Science Foundation of China [U1813205, 61971071, 61673266, 61976135]
  2. Independent Research Project of State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body [71765003]
  3. Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing Open Foundation [2017TP1011, IRT2018009]
  4. Natural Science and Engineering Research Council (NSERC) of Canada
  5. NSERC
  6. NSERC CREATE TrustCAV
  7. National Key Research and Development Program of China [2018YFB1308200, 2017YFB1002501]
  8. Changsha Science and Technology Project [kq1907087]
  9. Hunan Key Project of Research and Development Plan [2018GK2022]
  10. Special Funding for the construction of Innovative Provinces in Hunan [2020SK3007]
  11. Fundamental Research Funds for the Central Universities
  12. 111 Project
  13. China Scholarship Council [201706130071]

Abstract

This article addresses the increase in accidents caused by reduced vigilance and proposes a multimodal regression network with feature fusion to improve the accuracy and efficiency of vigilance estimation.
Accidents caused by reduced vigilance are increasing, and accurate vigilance estimation will play a significant role in public transportation safety. We propose a multimodal regression network consisting of multichannel deep autoencoders with subnetwork neurons (MCDAE(sn)). Using two thresholds of 0.35 and 0.70 on the percentage of eye closure (PERCLOS), output values in the ranges 0-0.35, 0.36-0.70, and 0.71-1 represent the awake, tired, and drowsy states, respectively. To verify the efficiency of our strategy, we first applied the proposed approach to a single modality. For the multimodal case, owing to the complementary information between forehead electrooculography (EOG) and electroencephalography (EEG) features, the performance of the proposed approach with feature fusion improved significantly, demonstrating the effectiveness and efficiency of our method.
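
A minimal illustrative sketch of the threshold mapping and feature-fusion idea described in the abstract is given below. It assumes hypothetical EEG/EOG feature dimensions and a simple two-encoder PyTorch module (FusionRegressor) standing in for the MCDAE(sn); only the 0.35 and 0.70 PERCLOS thresholds are taken from the abstract.

    # Illustrative sketch only: a simple two-encoder fusion regressor standing in
    # for the paper's MCDAE(sn). Feature dimensions and layer sizes are assumptions;
    # only the 0.35 / 0.70 PERCLOS thresholds come from the abstract.
    import torch
    import torch.nn as nn

    class FusionRegressor(nn.Module):
        def __init__(self, eeg_dim=310, eog_dim=36, latent_dim=32):
            super().__init__()
            # One encoder per modality (stand-ins for the multichannel autoencoders).
            self.eeg_enc = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU(),
                                         nn.Linear(128, latent_dim), nn.ReLU())
            self.eog_enc = nn.Sequential(nn.Linear(eog_dim, 64), nn.ReLU(),
                                         nn.Linear(64, latent_dim), nn.ReLU())
            # Regress the fused features to a PERCLOS-like value in [0, 1].
            self.head = nn.Sequential(nn.Linear(2 * latent_dim, 32), nn.ReLU(),
                                      nn.Linear(32, 1), nn.Sigmoid())

        def forward(self, eeg, eog):
            fused = torch.cat([self.eeg_enc(eeg), self.eog_enc(eog)], dim=-1)
            return self.head(fused).squeeze(-1)

    def vigilance_state(perclos: float) -> str:
        """Map a predicted PERCLOS value to the three states named in the abstract."""
        if perclos <= 0.35:
            return "awake"
        if perclos <= 0.70:
            return "tired"
        return "drowsy"

    # Usage with one dummy sample per modality (dimensions are assumptions).
    model = FusionRegressor()
    eeg_features = torch.randn(1, 310)  # e.g., EEG feature vector
    eog_features = torch.randn(1, 36)   # e.g., forehead EOG feature vector
    print(vigilance_state(model(eeg_features, eog_features).item()))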

