Article

Fortifying Brain Signals for Robust Interpretation

Journal

IEEE Transactions on Network Science and Engineering

Publisher

IEEE Computer Society
DOI: 10.1109/TNSE.2022.3222362

Keywords

Electroencephalography; Training; Brain modeling; Image reconstruction; Generators; Deep learning; Testing; Brain-Media; EEG; Multimodality learning; Signal disruption

Abstract

Brain-Media is the discipline of decoding sophisticated human brain activity such as imagination, memories, colors, textures, and patterns. Existing efforts either classify brain signals or map them to an image of the same class; the second technique has been investigated in only a few papers, using electroencephalography (EEG) datasets based on ImageNet and MNIST images. The robustness of deep neural networks (DNNs) against malicious noise must also be studied, since existing frameworks ignore the existence of such disruptive noise. The current research provides a multimodality time-series and spatial-domain hybrid framework and a unique ResilientNet Generator to classify signals robustly. In the first stage, two teachers are trained, one on a time-series dataset and one on an image dataset that shares its class labels. The second phase trains a ResilientNet Generator with a new penalized reconstruction loss function in addition to the adversarial loss. Finally, the trained ResilientNet Generator is used as a pre-processing module when training a DNN classifier in the third phase. Representing time-series data in the spatial domain significantly improves accuracy compared with existing approaches, and an ablation study on the resilience of the trained classifiers against attacked test samples shows that DNNs can be fortified using the proposed framework.
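
The abstract gives no implementation details, but the three-phase pipeline it describes can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption: the toy architectures, the data shapes, the weight lam, and the plain MSE term standing in for the paper's penalized reconstruction loss. None of it reflects the authors' actual ResilientNet design.

```python
# Hypothetical sketch of the three-phase pipeline described in the abstract.
# All architectures, shapes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10             # assumed, e.g. for an MNIST-based EEG dataset
EEG_LEN, IMG_SIZE = 128, 28  # assumed signal length and image side

# Phase 1: two teachers trained on datasets that share a label space,
# one on EEG time series and one on images.
eeg_teacher = nn.Sequential(nn.Linear(EEG_LEN, 64), nn.ReLU(),
                            nn.Linear(64, NUM_CLASSES))
img_teacher = nn.Sequential(nn.Linear(IMG_SIZE * IMG_SIZE, 64), nn.ReLU(),
                            nn.Linear(64, NUM_CLASSES))

# Phase 2: a generator maps an EEG trace to a spatial-domain image; a
# discriminator provides the adversarial loss. (The abstract does not say
# how the teachers supervise this phase, so that link is omitted here.)
generator = nn.Sequential(nn.Linear(EEG_LEN, 256), nn.ReLU(),
                          nn.Linear(256, IMG_SIZE * IMG_SIZE), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG_SIZE * IMG_SIZE, 64), nn.ReLU(),
                              nn.Linear(64, 1))

def generator_loss(eeg, real_img, lam=10.0):
    """Adversarial term plus a reconstruction term; `lam` and the plain MSE
    stand in for the paper's penalized reconstruction loss."""
    fake = generator(eeg)
    adv = F.binary_cross_entropy_with_logits(
        discriminator(fake), torch.ones(eeg.size(0), 1))
    rec = F.mse_loss(fake, real_img)
    return adv + lam * rec

# Phase 3: the frozen generator pre-processes EEG for a downstream classifier.
classifier = nn.Sequential(nn.Linear(IMG_SIZE * IMG_SIZE, 64), nn.ReLU(),
                           nn.Linear(64, NUM_CLASSES))

def classify(eeg):
    with torch.no_grad():    # the generator is frozen after phase 2
        spatial = generator(eeg)
    return classifier(spatial)

# Smoke test with random stand-in data.
eeg_batch = torch.randn(4, EEG_LEN)
img_batch = torch.randn(4, IMG_SIZE * IMG_SIZE).tanh()
labels = torch.randint(0, NUM_CLASSES, (4,))
teacher_loss = (F.cross_entropy(eeg_teacher(eeg_batch), labels)
                + F.cross_entropy(img_teacher(img_batch), labels))
print(teacher_loss.item(), generator_loss(eeg_batch, img_batch).item())
print(classify(eeg_batch).shape)  # torch.Size([4, 10])
```

The structural point this sketch captures is phase 3: the generator is frozen and acts purely as a denoising pre-processor, so the downstream classifier only ever sees reconstructed spatial-domain representations rather than raw, possibly attacked, EEG input.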

