Article

Integrating Deep and Shallow Models for Multi-Modal Depression Analysis-Hybrid Architectures

Journal

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Volume 12, Issue 1, Pages 239-253

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TAFFC.2018.2870398

Keywords

Depression estimation; depression classification; deep convolutional neural network-deep neural network (DCNN-DNN); paragraph vector-support vector machine (PV-SVM); random forest; histogram of displacement range (HDR)

Funding

  1. Shaanxi Provincial International Science and Technology Collaboration Project [2017KW-ZD-14]
  2. National Natural Science Foundation of China [61273265]
  3. VUB Interdisciplinary Research Program through the EMO-App project


This paper emphasizes the importance of text-based features, in addition to audio and video features, for automatic depression assessment systems. It proposes a hybrid framework that combines deep and shallow models to analyze depression-related indicators from audio, video and text, and introduces new text and video features for depression estimation and classification. Experiments on the AVEC2016 depression dataset show that the proposed framework improves accuracy and that the new features outperform existing ones in depression recognition.
Although great progress has been made in automatic depression assessment, most recent works consider only the audio and video paralinguistic information, rather than the linguistic information carried by the spoken content. In this work, we argue that, besides good audio and video features, reliable depression detection systems also require text-based content features to analyse depression-related textual indicators. Furthermore, improving the performance of automatic depression assessment systems requires powerful models capable of capturing the characteristics of depression embedded in the audio, visual and text descriptors. This paper proposes new text and video features and hybridizes deep and shallow models for depression estimation and classification from audio, video and text descriptors. The proposed hybrid framework consists of three main parts: 1) a Deep Convolutional Neural Network (DCNN) and Deep Neural Network (DNN) based audio-visual multi-modal depression recognition model for estimating the Patient Health Questionnaire depression scale (PHQ-8); 2) a Paragraph Vector (PV) and Support Vector Machine (SVM) based model for inferring the physical and mental condition of the individual from the interview transcripts; and 3) a Random Forest (RF) model for depression classification from the estimated PHQ-8 score and the inferred conditions of the individual. In the PV-SVM model, PV embedding is used to obtain fixed-length feature vectors from the transcripts of answers to questions associated with psychoanalytic aspects of depression; these vectors are then fed into SVM classifiers that detect the presence or absence of the considered psychoanalytic symptoms. To the best of our knowledge, this is the first attempt to apply PV to depression analysis. In addition, we propose a new visual descriptor, the Histogram of Displacement Range (HDR), which characterizes the displacement and velocity of the facial landmarks in a video segment. Experiments carried out on the Audio Visual Emotion Challenge (AVEC2016) depression dataset demonstrate that: 1) the proposed hybrid framework effectively improves the accuracy of both depression estimation and depression classification, with an average F1 measure of up to 0.746, higher than the best result (0.724) of the AVEC2016 depression sub-challenge; and 2) HDR obtains better depression recognition performance than the Bag-of-Words (BoW) and Motion History Histogram (MHH) features.
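As a concrete illustration of the text branch, the following is a minimal sketch assuming gensim's Doc2Vec as the Paragraph Vector implementation and scikit-learn's SVC for the per-symptom classifiers; the toy answers, labels and hyper-parameters are placeholders for illustration only, not the AVEC2016 data or the settings used in the paper.

# PV-SVM sketch: Paragraph Vector embeddings of interview answers, then a
# binary SVM per psychoanalytic symptom (here, a single toy "sleep" symptom).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

# Placeholder transcripts of answers to one symptom-related question.
answers = [
    "i sleep maybe three hours a night and wake up tired",
    "i usually sleep fine eight hours most nights",
    "i keep waking up during the night and cannot fall back asleep",
    "no trouble sleeping i feel rested in the morning",
]
labels = [1, 0, 1, 0]  # 1 = symptom present, 0 = absent (placeholder labels)

# Train the Paragraph Vector model on the tokenised answers.
tagged = [TaggedDocument(words=a.split(), tags=[i]) for i, a in enumerate(answers)]
pv = Doc2Vec(tagged, vector_size=50, window=3, min_count=1, epochs=100)

# Fixed-length document embeddings feed a per-symptom SVM classifier.
X = [pv.infer_vector(a.split()) for a in answers]
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)

# Inferred condition for a new answer; in the full framework such flags, together
# with the DCNN-DNN PHQ-8 estimate, are what the Random Forest classifier consumes.
print(clf.predict([pv.infer_vector("i barely sleep at all these days".split())]))

In the paper, one such classifier is used per considered symptom, and the resulting condition flags are combined with the estimated PHQ-8 score by the Random Forest for the final depression classification.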
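A rough sketch of the HDR-style video descriptor is given below; it reflects one plausible reading under stated assumptions (frame-to-frame landmark displacement magnitudes, per-landmark histograms with hand-picked bin edges, simple count normalisation) and is not the exact HDR formulation from the paper.

# Sketch of a displacement-range style descriptor over a facial-landmark sequence.
import numpy as np

def hdr_like_descriptor(landmarks, bins=(0.0, 0.5, 1.0, 2.0, 4.0, np.inf)):
    """landmarks: array of shape (T, L, 2) - T frames, L landmarks, (x, y)."""
    # Frame-to-frame displacement magnitude of every landmark (a velocity proxy).
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)  # shape (T-1, L)
    # Histogram each landmark's displacements over the segment and concatenate
    # the per-landmark histograms into a single video-level feature vector.
    hists = [np.histogram(disp[:, l], bins=bins)[0] for l in range(disp.shape[1])]
    feat = np.concatenate(hists).astype(float)
    return feat / max(feat.sum(), 1.0)  # normalise by the total count

# Example: 100 frames of 68 landmarks undergoing small random motion.
rng = np.random.default_rng(0)
segment = np.cumsum(rng.normal(scale=0.3, size=(100, 68, 2)), axis=0)
print(hdr_like_descriptor(segment).shape)  # (68 landmarks * 5 bins,) = (340,)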

