Journal
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
Volume 30, Pages 2003-2011
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNSRE.2022.3192431
Keywords
Electroencephalography; Feature extraction; Behavioral sciences; Brain modeling; Variable speed drives; Pediatrics; Neuroimaging; Autism spectrum disorders (ASD); multimodal fusion; electroencephalogram (EEG); eye-tracking (ET); stacked denoising autoencoders; classification
Funding
- National Natural Science Foundation of China [62003228, 61761166003]
- Science and Technology Development Project of Beijing Municipal Education Commission of China [KM202010028019]
- National Key Research and Development Program of China
This paper proposes a multimodal diagnosis framework for identifying autism spectrum disorder (ASD) in children by combining electroencephalogram (EEG) and eye-tracking (ET) data. The proposed method utilizes deep learning algorithms to learn and fuse features from both modalities, achieving superior performance compared to unimodal methods for accurate ASD diagnosis.
Identification of autism spectrum disorder (ASD) in children is challenging due to the complexity and heterogeneity of ASD. Most existing methods rely on a single modality with limited information and often cannot achieve satisfactory performance. To address this issue, this paper investigates internal neurophysiological and external behavioral perspectives simultaneously and proposes a new multimodal diagnosis framework for identifying ASD in children by fusing electroencephalogram (EEG) and eye-tracking (ET) data. Specifically, we designed a two-step multimodal feature learning and fusion model based on a typical deep learning algorithm, the stacked denoising autoencoder (SDAE). In the first step, two SDAE models learn features for the EEG and ET modalities, respectively. In the second step, a third SDAE model performs multimodal fusion on the concatenated EEG and ET features. Our multimodal identification model can automatically capture correlations and complementarity between the behavioral and neurophysiological modalities in a latent feature space, and generate informative feature representations with better discriminability and generalization for enhanced identification performance. We collected a multimodal dataset containing 40 ASD children and 50 typically developing (TD) children to evaluate the proposed method. Experimental results showed that our method achieved superior performance compared with two unimodal methods and a simple feature-level fusion method, demonstrating promising potential to provide an objective and accurate diagnosis that assists clinicians.
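The two-step scheme described above can be sketched in plain numpy. This is an illustrative toy, not the authors' implementation: feature dimensions, noise level, and learning rate are hypothetical, and a single-layer denoising autoencoder stands in for each stacked model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """Minimal single-layer denoising autoencoder with tied weights (illustrative)."""
    def __init__(self, n_in, n_hidden, noise=0.3, lr=0.1):
        self.W = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias
        self.noise = noise
        self.lr = lr

    def encode(self, x):
        return sigmoid(x @ self.W + self.b)

    def fit(self, X, epochs=20):
        for _ in range(epochs):
            # corrupt input with masking noise, then reconstruct the clean input
            X_tilde = X * (rng.random(X.shape) > self.noise)
            h = self.encode(X_tilde)
            X_hat = sigmoid(h @ self.W.T + self.c)
            # squared-error gradients through the tied weights
            d_out = (X_hat - X) * X_hat * (1 - X_hat)
            d_hid = (d_out @ self.W) * h * (1 - h)
            self.W -= self.lr * (X_tilde.T @ d_hid + d_out.T @ h) / len(X)
            self.b -= self.lr * d_hid.mean(axis=0)
            self.c -= self.lr * d_out.mean(axis=0)
        return self

# hypothetical feature matrices: 90 subjects, 64 EEG / 32 ET features each
X_eeg = rng.random((90, 64))
X_et = rng.random((90, 32))

# Step 1: unimodal feature learning, one autoencoder per modality
dae_eeg = DenoisingAutoencoder(64, 16).fit(X_eeg)
dae_et = DenoisingAutoencoder(32, 16).fit(X_et)

# Step 2: concatenate the learned latent features and fuse with a third autoencoder
H = np.hstack([dae_eeg.encode(X_eeg), dae_et.encode(X_et)])  # shape (90, 32)
dae_fuse = DenoisingAutoencoder(32, 8).fit(H)
fused = dae_fuse.encode(H)  # joint representation for a downstream classifier
print(fused.shape)  # (90, 8)
```

In the paper's full pipeline the fused representation would feed a classifier separating ASD from TD children; here the output is simply the joint latent code.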