Journal
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers
Pages 81-91
Publisher
Association for Computational Linguistics (ACL)
Funding
- National Natural Science Foundation of China [61876053, 62006062, 62176076, 62006060]
- UK Engineering and Physical Sciences Research Council [EP/V048597/1, EP/T017112/1]
- Natural Science Foundation of Guangdong Province of China [2019A1515011705]
- Shenzhen Foundational Research Funding [JCYJ20200109113441941, JCYJ20210324115614039]
- Shenzhen Science and Technology Innovation Program [KQTD20190929172835662]
- Lab of HITSZ
- China Merchants Securities
- UK Research and Innovation (UKRI) Turing AI Fellowship [EP/V020579/1]
This paper proposes a joint contrastive learning framework for zero-shot stance detection, which achieves state-of-the-art performance by generalizing stance features and reasoning ability for unseen targets through stance contrastive learning and target-aware prototypical graph contrastive learning.
Zero-shot stance detection (ZSSD) aims to detect the stance for an unseen target during the inference stage. In this paper, we propose a joint contrastive learning (JointCL) framework, which consists of stance contrastive learning and target-aware prototypical graph contrastive learning. Specifically, a stance contrastive learning strategy is employed to better generalize stance features for unseen targets. Further, we build a prototypical graph for each instance to learn the target-based representation, in which the prototypes are deployed as a bridge to share the graph structures between the known targets and the unseen ones. Then a novel target-aware prototypical graph contrastive learning strategy is devised to generalize the reasoning ability of target-based stance representations to the unseen targets. Extensive experiments on three benchmark datasets show that the proposed approach achieves state-of-the-art performance in the ZSSD task.
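To make the stance contrastive learning idea concrete, below is a minimal NumPy sketch of a supervised contrastive objective over stance labels: instances sharing a stance label are pulled together and others pushed apart, which is the mechanism the abstract credits for generalizing stance features to unseen targets. The function name, temperature value, and toy embeddings are illustrative assumptions, not the authors' implementation (JointCL also involves the prototypical graph component, which is omitted here).

```python
import numpy as np

def stance_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over stance labels (SupCon-style sketch).

    For each anchor, positives are the other instances with the same stance
    label; the loss is the mean negative log-softmax of anchor-positive
    similarity against all non-anchor instances.
    """
    features = np.asarray(features, dtype=np.float64)
    labels = np.asarray(labels)
    # L2-normalize so dot products are cosine similarities
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # positives: other instances sharing the anchor's stance label
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # softmax denominator over all instances except the anchor itself
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.exp(sim[i, others]).sum())
        for j in positives:
            loss += -(sim[i, j] - log_denom)
            count += 1
    return loss / count

# Toy check: embeddings clustered by stance label give a lower loss
# than embeddings where the labels are mixed across clusters.
clustered = stance_contrastive_loss(
    [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]], [0, 0, 1, 1])
mixed = stance_contrastive_loss(
    [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]], [0, 0, 1, 1])
```

Here `clustered < mixed`, reflecting that the objective rewards representations in which instances of the same stance fall near each other regardless of target.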