Article

Deep Multi-Modal Network Based Automated Depression Severity Estimation

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 14, Issue 3, Pages 2153-2167

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TAFFC.2022.3179478

Keywords

Depression; Feature extraction; Three-dimensional displays; Convolutional neural networks; Optical flow; Long short term memory; Encoding; spatio-temporal networks; volume local directional structural pattern; temporal attentive pooling; multi-modal factorized bilinear pooling

Abstract

This paper proposes a novel deep multi-modal framework that effectively combines facial and verbal cues for automated depression assessment. By segmenting audio and video recordings, extracting attention-weighted spatio-temporal and facial-dynamics features, and fusing the two modalities with factorized bilinear pooling, the method estimates depression severity more accurately than existing approaches.
Depression is a severe mental illness that impairs a person's capacity to function normally in personal and professional life. The assessment of depression usually requires a comprehensive examination by an expert professional. Recently, machine learning-based automatic depression assessment has received considerable attention as a route to reliable and efficient diagnosis. Various techniques for automated depression detection have been developed; however, several concerns remain to be addressed. In this work, we propose a novel deep multi-modal framework that effectively utilizes facial and verbal cues for automated depression assessment. Specifically, we first partition the audio and video data into fixed-length segments. These segments are then fed into spatio-temporal networks, which capture both spatial and temporal features and assign higher weights to the features that contribute most. In addition, a Volume Local Directional Structural Pattern (VLDSP)-based dynamic feature descriptor is introduced to extract facial dynamics by encoding their structural aspects. Afterwards, we employ Temporal Attentive Pooling (TAP) to summarize the segment-level features for the audio and video data. Finally, a multi-modal factorized bilinear pooling (MFB) strategy is applied to fuse the multi-modal features effectively. An extensive experimental study reveals that the proposed method outperforms state-of-the-art approaches.
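
The abstract omits implementation details, but the temporal summarization step it names can be illustrated concretely. The sketch below shows one common form of temporal attentive pooling in PyTorch: a small scoring network assigns a weight to each segment-level feature vector, the weights are normalized over time with a softmax, and the weighted sum yields a single recording-level representation. The class name, scorer architecture, and hidden size are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class TemporalAttentivePooling(nn.Module):
        # Scores each segment-level feature vector, normalizes the scores
        # over the time axis with a softmax, and returns the attention-
        # weighted sum. The two-layer scorer and hidden_dim are
        # illustrative choices, not the paper's configuration.
        def __init__(self, feat_dim, hidden_dim=128):
            super().__init__()
            self.scorer = nn.Sequential(
                nn.Linear(feat_dim, hidden_dim),
                nn.Tanh(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, segments):
            # segments: (batch, num_segments, feat_dim)
            alpha = torch.softmax(self.scorer(segments), dim=1)  # (batch, num_segments, 1)
            return (alpha * segments).sum(dim=1)                 # (batch, feat_dim)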

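The fusion step can be sketched in the same spirit. Multi-modal factorized bilinear pooling is commonly implemented by projecting each modality into a shared (out_dim x factor)-dimensional space, multiplying element-wise, sum-pooling over the factor dimension, and then applying power and L2 normalization. The dimensions below are placeholder values, not the authors' configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MFBFusion(nn.Module):
        # Factorized bilinear pooling of two modality vectors.
        # out_dim (o) and factor (k) are placeholder hyper-parameters.
        def __init__(self, audio_dim, video_dim, out_dim=256, factor=5):
            super().__init__()
            self.out_dim, self.factor = out_dim, factor
            self.proj_a = nn.Linear(audio_dim, out_dim * factor)
            self.proj_v = nn.Linear(video_dim, out_dim * factor)

        def forward(self, a, v):
            # a: (batch, audio_dim), v: (batch, video_dim)
            joint = self.proj_a(a) * self.proj_v(v)                       # element-wise product
            joint = joint.view(-1, self.out_dim, self.factor).sum(dim=2)  # sum-pool over k
            joint = torch.sign(joint) * torch.sqrt(torch.abs(joint) + 1e-12)  # power normalization
            return F.normalize(joint, dim=1)                              # L2 normalization

In a full pipeline, a regression head (for example, a single linear layer) would presumably map the fused vector to a depression severity score; the abstract does not specify this detail.
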