Journal
KNOWLEDGE-BASED SYSTEMS
Volume 278
Publisher
ELSEVIER
DOI: 10.1016/j.knosys.2023.110863
Keywords
Interpretable sentiment analysis; First order logic; Contrastive learning; Knowledge reasoning
This paper proposes a novel framework called Contrasting Logical Knowledge Learning (CLK) that addresses the challenge of balancing accuracy and interpretability in deep learning models for sentiment analysis. Empirical results demonstrate that CLK effectively achieves high accuracy and provides human-understandable explanations.
Although interpretable methods for deep learning models have become popular in sentiment analysis in recent years, existing methods still struggle to provide predictions with both high accuracy and user-friendly explanations. To address this problem, we propose a novel framework called Contrasting Logical Knowledge Learning (CLK) that combines contrastive learning, label knowledge, and logical rule learning. Logical rule learning provides human-understandable explanations, while label knowledge and contrastive learning deliver high performance on both pretrained models and ordinary DNNs. To ensure model interpretability, we design a novel knowledge reasoning strategy based on the learned logical rules and the trained models. Empirical results on binary and fine-grained sentiment analysis tasks show that CLK effectively balances accuracy and interpretability. Additionally, we conduct two case studies to demonstrate the process of explanation generation and knowledge reasoning, showing that our method's explanations are causally consistent with the model's implicit decision logic. © 2023 Elsevier B.V. All rights reserved.
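The abstract does not spell out CLK's training objective, but as background, the kind of supervised contrastive term commonly paired with label knowledge (pulling same-label sentiment embeddings together and pushing different-label ones apart) can be sketched as follows. This is an illustrative sketch, not the paper's actual loss; the function names, the cosine similarity measure, and the temperature value are assumptions for the example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss: for each anchor, average
    -log(exp(sim(anchor, positive)/t) / sum over all non-anchor exp(sim/t))
    over its same-label positives. Lower loss means same-label embeddings
    are closer together than different-label ones."""
    n = len(embeddings)
    total, anchors = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchor with no positive pair contributes nothing
        denom = sum(math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                    for j in range(n) if j != i)
        loss_i = sum(-math.log(
            math.exp(cosine(embeddings[i], embeddings[j]) / temperature) / denom)
            for j in positives) / len(positives)
        total += loss_i
        anchors += 1
    return total / anchors

# Well-separated sentiment clusters yield a lower loss than mixed ones.
clustered = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
mixed = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
labels = [0, 0, 1, 1]
low = supervised_contrastive_loss(clustered, labels)
high = supervised_contrastive_loss(mixed, labels)
```

In a real setup the embeddings would come from the sentence encoder, and this term would be one component of a combined objective alongside the classification and rule-learning losses the abstract mentions.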