Proceedings Paper

Position Bias Mitigation: A Knowledge-Aware Graph Model for Emotion Cause Extraction

Publisher

Association for Computational Linguistics (ACL)

Keywords

-

Funding

  1. EPSRC [EP/T017112/1, EP/V048597/1]
  2. University of Warwick
  3. Chinese Scholarship Council
  4. Turing AI Fellowship - UK Research and Innovation [EP/V020579/1]


This study examines dataset bias in the Emotion Cause Extraction (ECE) task and proposes a new strategy for reducing models' dependence on the relative positions of clauses by generating adversarial examples. By introducing a graph-based method that explicitly models emotion-triggering paths, the proposed model strengthens semantic dependencies between clauses, making it more robust against adversarial attacks than existing models.
The Emotion Cause Extraction (ECE) task aims to identify clauses which contain emotion-evoking information for a particular emotion expressed in text. We observe that a widely-used ECE dataset exhibits a bias that the majority of annotated cause clauses are either directly before their associated emotion clauses or are the emotion clauses themselves. Existing models for ECE tend to explore such relative position information and suffer from the dataset bias. To investigate the degree of reliance of existing ECE models on clause relative positions, we propose a novel strategy to generate adversarial examples in which the relative position information is no longer the indicative feature of cause clauses. We test the performance of existing models on such adversarial examples and observe a significant performance drop. To address the dataset bias, we propose a novel graph-based method to explicitly model the emotion triggering paths by leveraging the commonsense knowledge to enhance the semantic dependencies between a candidate clause and an emotion clause. Experimental results show that our proposed approach performs on par with the existing state-of-the-art methods on the original ECE dataset, and is more robust against adversarial attacks compared to existing models.
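The adversarial strategy described in the abstract can be illustrated with a small sketch: given a document split into clauses, relocate the annotated cause clause so that it is neither the emotion clause itself nor the clause directly before it, the two positions that dominate the original dataset. The function name, the relocation heuristic, and the assumption that cause and emotion clauses are distinct are all illustrative choices, not the paper's exact procedure.

```python
import random

def make_adversarial(clauses, emotion_idx, cause_idx, rng=None):
    """Move the cause clause so it is neither the emotion clause nor
    the clause directly before it -- the two positions that dominate
    the original ECE dataset.  Illustrative sketch only; the paper's
    actual generation strategy may differ.  Assumes the cause and
    emotion clauses are distinct."""
    assert cause_idx != emotion_idx, "sketch handles distinct clauses only"
    rng = rng or random.Random(0)
    cause = clauses[cause_idx]
    # Remove the cause clause; track where the emotion clause lands.
    rest = [c for i, c in enumerate(clauses) if i != cause_idx]
    emo_in_rest = emotion_idx - (1 if cause_idx < emotion_idx else 0)
    # Inserting at slot s <= emo_in_rest shifts the emotion clause right,
    # so s == emo_in_rest would put the cause directly before it: skip it.
    slots = [s for s in range(len(rest) + 1) if s != emo_in_rest]
    s = rng.choice(slots)
    new_clauses = rest[:s] + [cause] + rest[s:]
    new_emotion = emo_in_rest + 1 if s <= emo_in_rest else emo_in_rest
    return new_clauses, new_emotion, s
```

After relocation, the cause clause's relative position carries no signal, so any model that scores candidates mainly by distance to the emotion clause should degrade, which is the probe the paper uses to measure position reliance.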

Authors


Reviews

Primary Rating

3.8
Not enough ratings

Secondary Ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommended

No Data Available