Proceedings Paper

Alzheimer's Dementia Recognition Using Acoustic, Lexical, Disfluency and Speech Pause Features Robust to Noisy Inputs

Proceedings

INTERSPEECH 2021
Pages 3820-3824

Publisher

ISCA (International Speech Communication Association)
DOI: 10.21437/Interspeech.2021-1633

Keywords

Cognitive decline detection; Alzheimer's dementia; disfluency; lexical predictability

Funding

  1. EPSRC [EP/S033564/1]
  2. European Union [769661, 825153]


We present two multimodal fusion-based deep learning models that consume ASR-transcribed speech and acoustic data simultaneously to classify whether a speaker in a structured diagnostic task has Alzheimer's Disease and to what degree, evaluated on the ADReSSo 2021 challenge data. Our best model, a BiLSTM with highway layers using words, word probabilities, disfluency features, pause information, and a variety of acoustic features, achieves an accuracy of 84% and an RMSE of 4.26 when predicting MMSE cognitive scores. While predicting cognitive decline is more challenging, our models improve over word-only models by using the multimodal approach together with word probabilities, disfluency, and pause information. We show considerable gains for AD classification using multimodal fusion and gating, which can deal effectively with noisy inputs from acoustic features and ASR hypotheses.
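The abstract describes a BiLSTM with highway layers and gated multimodal fusion of lexical features (words, word probabilities, disfluency and pause information) with acoustic features. The following is a minimal PyTorch-style sketch of that kind of gated fusion for AD classification; the layer sizes, feature dimensions, and the exact gating formulation are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch of gated multimodal fusion with a BiLSTM and a highway layer.
    # Dimensions, feature names, and the gating formulation are illustrative
    # assumptions, not the authors' code.
    import torch
    import torch.nn as nn


    class HighwayLayer(nn.Module):
        """Standard highway layer: y = T(x) * H(x) + (1 - T(x)) * x."""

        def __init__(self, dim):
            super().__init__()
            self.transform = nn.Linear(dim, dim)
            self.gate = nn.Linear(dim, dim)

        def forward(self, x):
            h = torch.relu(self.transform(x))
            t = torch.sigmoid(self.gate(x))
            return t * h + (1 - t) * x


    class GatedFusionADClassifier(nn.Module):
        """BiLSTM over lexical/disfluency/pause embeddings, gated with
        utterance-level acoustic features, followed by a highway layer."""

        def __init__(self, lex_dim=300, acoustic_dim=88, hidden=128, n_classes=2):
            super().__init__()
            self.bilstm = nn.LSTM(lex_dim, hidden, batch_first=True,
                                  bidirectional=True)
            self.acoustic_proj = nn.Linear(acoustic_dim, 2 * hidden)
            # The gate decides, per dimension, how much acoustic evidence to
            # let in, which is one way to down-weight noisy acoustic features
            # or ASR hypotheses.
            self.fusion_gate = nn.Linear(4 * hidden, 2 * hidden)
            self.highway = HighwayLayer(2 * hidden)
            self.classifier = nn.Linear(2 * hidden, n_classes)

        def forward(self, lex_seq, acoustic_vec):
            # lex_seq: (batch, time, lex_dim); acoustic_vec: (batch, acoustic_dim)
            _, (h_n, _) = self.bilstm(lex_seq)
            text_repr = torch.cat([h_n[0], h_n[1]], dim=-1)   # (batch, 2*hidden)
            acoustic_repr = torch.tanh(self.acoustic_proj(acoustic_vec))
            gate = torch.sigmoid(
                self.fusion_gate(torch.cat([text_repr, acoustic_repr], dim=-1)))
            fused = text_repr + gate * acoustic_repr
            return self.classifier(self.highway(fused))


    # Example forward pass with random tensors standing in for real features.
    model = GatedFusionADClassifier()
    logits = model(torch.randn(4, 50, 300), torch.randn(4, 88))
    print(logits.shape)  # torch.Size([4, 2])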
