4.7 Article

Towards Contrastive Context-Aware Conversational Emotion Recognition

Related References

Article Computer Science, Artificial Intelligence

A multi-view network for real-time emotion recognition in conversations

Hui Ma et al.

Summary: This paper addresses real-time emotion recognition in conversations, which identifies the emotion of a query utterance from its historical context. Existing methods focus mainly on individual utterances and use utterance-level features to model the query's emotion representation, overlooking word-level dependencies among different utterances. The paper therefore proposes a multi-view network (MVN) that models the emotion representation of a query from both word-level and utterance-level perspectives.

KNOWLEDGE-BASED SYSTEMS (2022)
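As an illustration of the two views the MVN summary describes, here is a minimal NumPy sketch: an utterance-level view pools the query's word vectors, while a word-level view attends from query words to context words. All shapes, inputs, and the fusion by concatenation are hypothetical simplifications, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical toy inputs: 4 context words and 3 query words, dim 8.
context_words = rng.standard_normal((4, 8))
query_words = rng.standard_normal((3, 8))

# Utterance-level view: pool the query utterance into a single vector.
utterance_view = query_words.mean(axis=0)

# Word-level view: attend from each query word to the context words,
# capturing word dependencies across utterances.
scores = query_words @ context_words.T           # (3, 4) similarities
attn = softmax(scores, axis=-1)                  # rows sum to 1
word_view = (attn @ context_words).mean(axis=0)  # aggregated context

# Fuse both views into one emotion representation for the query.
emotion_repr = np.concatenate([utterance_view, word_view])
print(emotion_repr.shape)  # (16,)
```

A real model would learn projection weights for the attention and feed the fused vector to a classifier; the sketch only shows how the two perspectives complement each other.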

Article Computer Science, Artificial Intelligence

BiERU: Bidirectional emotional recurrent unit for conversational sentiment analysis

Wei Li et al.

Summary: This paper presents a fast, compact, and parameter-efficient framework for conversational sentiment analysis, which outperforms the state of the art in most cases according to extensive experiments on three standard datasets.

NEUROCOMPUTING (2022)

Article Computer Science, Cybernetics

Emotional Conversation Generation With Bilingual Interactive Decoding

Jiamin Wang et al.

Summary: This article introduces a bilingual-aided interactive decoding approach for generating bilingual emotional replies to monolingual posts. In qualitative and quantitative experiments on NLPCC2017, it outperforms several state-of-the-art approaches in both the content and the emotion of the replies.

IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS (2022)

Article Computer Science, Artificial Intelligence

Adapted Dynamic Memory Network for Emotion Recognition in Conversation

Songlong Xing et al.

Summary: This article proposes an Adapted Dynamic Memory Network (A-DMN) for Emotion Recognition in Conversation (ERC), which effectively synthesizes self and inter-speaker influences and obtains refined representations through multiple iterations. Additionally, the study explores cross-modal fusion in multimodal ERC and presents a convolution-based method that is efficient in extracting local interactions.

IEEE TRANSACTIONS ON AFFECTIVE COMPUTING (2022)

Article Acoustics

CTNet: Conversational Transformer Network for Emotion Recognition

Zheng Lian et al.

Summary: The study proposes a multimodal learning framework for conversational emotion recognition, named conversational transformer network (CTNet). CTNet models intra-modal and cross-modal interactions, captures temporal information from lexical and acoustic features, and uses a bidirectional GRU component with speaker embeddings to model context-sensitive and speaker-sensitive dependencies. Experiments demonstrate the effectiveness of the method, with a 2.1% to 6.2% improvement in weighted average F1 over state-of-the-art strategies.

IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING (2021)
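CTNet's reported gains are in weighted average F1, the standard metric for class-imbalanced emotion datasets: per-class F1 averaged with each class weighted by its support. A minimal pure-Python sketch (the emotion labels below are made up for illustration):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted-average F1: per-class F1 weighted by class support."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / n
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        total += n * f1
    return total / len(y_true)

# Hypothetical emotion labels for a toy conversation.
y_true = ["happy", "sad", "sad", "neutral", "happy", "neutral"]
y_pred = ["happy", "sad", "neutral", "neutral", "sad", "neutral"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.656
```

Weighting by support means frequent emotions (often "neutral" in conversation corpora) dominate the score, which is why papers in this area report weighted rather than macro F1.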

Article Computer Science, Information Systems

Emotion Recognition in Conversation: Research Challenges, Datasets, and Recent Advances

Soujanya Poria et al.

IEEE ACCESS (2019)