Journal
PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22)
Volume -, Issue -, Pages 2738-2747
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3485447.3511994
Keywords
zero-shot stance detection; contrastive learning; pretext task
Funding
- National Natural Science Foundation of China [61876053, 62006062, 62176076]
- UK Engineering and Physical Sciences Research Council [EP/V048597/1, EP/T017112/1]
- Natural Science Foundation of Guangdong Province of China [2019A1515011705]
- Shenzhen Foundational Research Funding [JCYJ20180507183527919, JCYJ20200109113441941, JCYJ20210324115614039]
- Shenzhen Science and Technology Innovation Program [KQTD20190929172835662]
- Turing AI Fellowship - UK Research and Innovation (UKRI) [EP/V020579/1]
- Joint Lab of HITSZ
- China Merchants Securities
This paper proposes a framework for zero-shot stance detection that effectively distinguishes the types of stance features and learns transferable features. By treating stance feature type identification as a pretext task and using a hierarchical contrastive learning strategy to capture correlations and differences, the model is able to better represent the stance of previously unseen targets.
Zero-shot stance detection (ZSSD) is challenging as it requires detecting the stance of previously unseen targets during the inference stage. Being able to detect the target-related transferable stance features from the training data is arguably an important step in ZSSD. Generally speaking, stance features can be grouped into target-invariant and target-specific categories. Target-invariant stance features carry the same stance regardless of the targets they are associated with. In contrast, target-specific stance features only co-occur with certain targets. As such, it is important to distinguish these two types of stance features when learning stance features of unseen targets. To this end, in this paper, we revisit ZSSD from a novel perspective by developing an effective approach to distinguish the types (target-invariant/-specific) of stance features, so as to better learn transferable stance features. To be specific, inspired by self-supervised learning, we frame stance-feature-type identification as a pretext task in ZSSD. Furthermore, we devise a novel hierarchical contrastive learning strategy to capture the correlation and difference between target-invariant and -specific features, and further among different stance labels. This essentially allows the model to exploit transferable stance features more effectively for representing the stance of previously unseen targets. Extensive experiments on three benchmark datasets show that the proposed framework achieves state-of-the-art performance in ZSSD.
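The hierarchical strategy described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a standard supervised contrastive loss (same-label examples in a batch are treated as positives) applied at two levels, once over hypothetical feature-type labels (target-invariant vs. target-specific, i.e. the pretext task) and once over stance labels, combined with an assumed weighting factor `alpha`.

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    z:      (n, d) array of embeddings
    labels: (n,) integer labels; same-label pairs are positives
    tau:    temperature
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # pairwise similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)            # exclude self-pairs
    # log-softmax over each anchor's similarities to all other examples
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    has_pos = pos.sum(axis=1) > 0                      # anchors with >=1 positive
    # mean log-probability over each anchor's positives
    pos_log_prob = np.where(pos, log_prob, 0.0).sum(axis=1)
    loss = -pos_log_prob[has_pos] / pos.sum(axis=1)[has_pos]
    return loss.mean()

def hierarchical_contrastive_loss(z, type_labels, stance_labels, alpha=0.5):
    """Two-level combination (an assumed form, not the paper's exact loss):
    level 1 groups by feature type (pretext task),
    level 2 groups by stance label."""
    return (alpha * sup_contrastive_loss(z, type_labels)
            + (1 - alpha) * sup_contrastive_loss(z, stance_labels))
```

The key property the sketch captures is that embeddings sharing a label (at either level) are pulled together while all others are pushed apart, which is how transferable, target-invariant features would be encouraged to cluster independently of the target.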