Article

Multi-View Multi-Label Fine-Grained Emotion Decoding From Human Brain Activity

Publisher

IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/TNNLS.2022.3217767

Keywords

Decoding; Brain modeling; Functional magnetic resonance imaging; Predictive models; Emotion recognition; Dimensionality reduction; Pattern recognition; Fine-grained emotion decoding; multi-label learning; multi-view learning; product of experts (PoEs); variational autoencoder

Funding

  1. National Key Research and Development Program of China [2021ZD0201503]
  2. National Natural Science Foundation of China [62206284, 61976209, 61906188]
  3. Beijing Natural Science Foundation [J210010, 7222311]
  4. Strategic Priority Research Program of CAS [XDB32040200]


In this article, a novel multi-view multi-label hybrid model is proposed for fine-grained emotion decoding; it accurately predicts multiple simultaneous emotional states and addresses the limitations of existing methods in analyzing emotional expression.
Decoding emotional states from human brain activity plays an important role in brain-computer interfaces. Existing emotion decoding methods still have two main limitations: first, they decode only a single emotion category from a brain activity pattern, and the decoded categories are coarse-grained, which is inconsistent with the complexity of human emotional expression; second, they ignore the discrepancy in emotion expression between the left and right hemispheres of the human brain. In this article, we propose a novel multi-view multi-label hybrid model for fine-grained emotion decoding (up to 80 emotion categories) that can learn expressive neural representations and predict multiple emotional states simultaneously. Specifically, the generative component of our hybrid model is parameterized by a multi-view variational autoencoder, in which we regard the brain activity of the left and right hemispheres and their difference as three distinct views and use a product-of-experts (PoE) mechanism in its inference network. The discriminative component of our hybrid model is implemented by a multi-label classification network with an asymmetric focal loss. For more accurate emotion decoding, we first adopt a label-aware module to learn emotion-specific neural representations and then model the dependency among emotional states with a masked self-attention mechanism. Extensive experiments on two visually evoked emotional datasets demonstrate the superiority of our method.
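As a concrete illustration of the product-of-experts inference described in the abstract, the sketch below fuses the Gaussian posteriors produced by three view-specific encoders (left hemisphere, right hemisphere, and their difference) into a single latent posterior. This is a minimal PyTorch sketch of the standard PoE rule for Gaussian experts, not the authors' released code; the encoder names and tensor shapes in the usage comments are assumptions.

```python
import torch

def poe_gaussian(mus, logvars, eps=1e-8):
    """Product-of-experts fusion of Gaussian experts N(mu_i, var_i).

    For Gaussians, the normalized product is again Gaussian:
    precisions add, and the fused mean is the precision-weighted
    average of the expert means.

    mus, logvars: (num_experts, batch, latent_dim)
    """
    precision = torch.exp(-logvars)                 # 1 / var_i per expert
    fused_var = 1.0 / (precision.sum(dim=0) + eps)  # (batch, latent_dim)
    fused_mu = fused_var * (mus * precision).sum(dim=0)
    return fused_mu, torch.log(fused_var)

# Hypothetical usage with three view-specific encoders and a standard-normal
# prior expert, as is common in multi-view VAEs:
# mu_l, lv_l = enc_left(x_left)             # left-hemisphere view
# mu_r, lv_r = enc_right(x_right)           # right-hemisphere view
# mu_d, lv_d = enc_diff(x_left - x_right)   # difference view
# prior_mu, prior_lv = torch.zeros_like(mu_l), torch.zeros_like(lv_l)
# mu, logvar = poe_gaussian(torch.stack([prior_mu, mu_l, mu_r, mu_d]),
#                           torch.stack([prior_lv, lv_l, lv_r, lv_d]))
# z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
```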
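The discriminative head is trained with an asymmetric focal loss for multi-label classification. The paper's exact formulation and hyperparameters are not given on this page; the sketch below shows one common asymmetric variant, with separate focusing exponents for positive and negative labels and probability shifting for easy negatives, to convey the idea.

```python
import torch

def asymmetric_focal_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05):
    """Multi-label asymmetric focal loss over independent sigmoid outputs.

    logits, targets: (batch, num_labels), targets in {0, 1}.
    gamma_pos / gamma_neg decouple the focusing strength for positive and
    negative labels; `clip` shifts easy-negative probabilities toward zero.
    """
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)  # probability shifting for negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * torch.log((1 - p_neg).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()
```

With gamma_neg > gamma_pos, gradients from the many easy negative labels (most of the 80 emotion categories are absent in any given sample) are down-weighted relative to the rare positives, which is the usual motivation for asymmetry in multi-label settings.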
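Finally, dependencies among emotional states are modeled with masked self-attention over emotion-specific representations. Below is a minimal sketch, assuming one feature vector per emotion label and a boolean mask that restricts which label pairs may attend to each other; how the paper constructs its mask (e.g., from label co-occurrence) is not specified on this page.

```python
import torch
import torch.nn as nn

class LabelDependencyAttention(nn.Module):
    """Masked self-attention over per-label representations (hypothetical
    sketch). Each of the L emotion labels carries its own feature vector,
    and a boolean mask controls which label pairs may interact.
    """
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, label_feats, allowed):
        # label_feats: (batch, L, dim); allowed: (L, L) bool, True = may attend.
        # nn.MultiheadAttention masks out positions where attn_mask is True,
        # so invert the "allowed" matrix.
        out, _ = self.attn(label_feats, label_feats, label_feats,
                           attn_mask=~allowed)
        return out
```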
