Article

Improving Transformer-based Sequential Recommenders through Preference Editing

Journal

ACM Transactions on Information Systems
Volume 41, Issue 3

Publisher

Association for Computing Machinery
DOI: 10.1145/3564282

Keywords

Transformer-based sequential recommendation; self-supervised learning; user preference extraction and representation

Abstract

One of the key challenges in sequential recommendation is how to extract and represent user preferences. Traditional methods rely solely on predicting the next item, but user behavior may be driven by complex preferences. As a result, these methods cannot make accurate recommendations when the available information about user behavior is limited. To explore multiple user preferences, we propose a transformer-based sequential recommendation model, named MrTransformer (Multi-preference Transformer). To train MrTransformer, we devise a preference-editing-based self-supervised learning (SSL) mechanism that explores extra supervision signals based on relations with other sequences. The idea is to force the sequential recommendation model to discriminate between common and unique preferences in different sequences of interactions. By doing so, the model learns to disentangle user preferences into multiple independent preference representations, which improves user preference extraction and representation. We carry out extensive experiments on five benchmark datasets. MrTransformer with preference editing significantly outperforms state-of-the-art sequential recommendation methods in terms of Recall, MRR, and NDCG. We find that long sequences of interactions, from which user preferences are harder to extract and represent, benefit most from preference editing.
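
As a concrete illustration of the preference-editing idea described in the abstract, the sketch below shows one way a multi-preference encoder and a preference-swapping self-supervised loss could be combined in PyTorch. This is not the authors' implementation: the module names, the number of preference vectors, the attention-based preference extraction, and the swap-and-stay-consistent objective are all illustrative assumptions that only approximate the mechanism described above.

```python
# Illustrative sketch only: a toy multi-preference encoder plus a
# preference-editing style SSL loss. Names, dimensions, and the exact
# swap/consistency objective are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiPreferenceEncoder(nn.Module):
    """Encodes an item-ID sequence into K preference vectors (illustrative)."""

    def __init__(self, num_items: int, d_model: int = 64, num_prefs: int = 4):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Learnable "preference queries" that attend over the encoded sequence.
        self.pref_queries = nn.Parameter(torch.randn(num_prefs, d_model))

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.item_emb(item_ids))                  # (B, L, d)
        attn = torch.softmax(self.pref_queries @ h.transpose(1, 2), dim=-1)  # (B, K, L)
        return attn @ h                                            # (B, K, d)


def preference_editing_loss(enc: MultiPreferenceEncoder,
                            seq_a: torch.Tensor,
                            seq_b: torch.Tensor) -> torch.Tensor:
    """Swap the most similar ("common") preference vectors of two sequences and
    require the edited representations to stay consistent with the originals."""
    prefs_a, prefs_b = enc(seq_a), enc(seq_b)                      # (B, K, d) each
    # Pairwise cosine similarity between the K preferences of the two sequences.
    sim = F.cosine_similarity(prefs_a.unsqueeze(2), prefs_b.unsqueeze(1), dim=-1)
    edited_a, edited_b = prefs_a.clone(), prefs_b.clone()
    for b in range(prefs_a.size(0)):
        # Treat the most similar pair as the "common" preference and swap it.
        i, j = divmod(sim[b].argmax().item(), sim.size(-1))
        edited_a[b, i], edited_b[b, j] = prefs_b[b, j], prefs_a[b, i]
    # Swapping a common preference should leave the overall user representation
    # (here: the mean over preference vectors) roughly unchanged.
    return (F.mse_loss(edited_a.mean(dim=1), prefs_a.mean(dim=1).detach()) +
            F.mse_loss(edited_b.mean(dim=1), prefs_b.mean(dim=1).detach()))


if __name__ == "__main__":
    enc = MultiPreferenceEncoder(num_items=1000)
    seq_a = torch.randint(1, 1000, (8, 20))   # batch of 8 sequences, length 20
    seq_b = torch.randint(1, 1000, (8, 20))
    loss = preference_editing_loss(enc, seq_a, seq_b)
    loss.backward()
    print(f"preference-editing SSL loss: {loss.item():.4f}")
```

In this toy version, the most similar pair of preference vectors across two sequences plays the role of the "common" preference; swapping it and penalizing drift in the pooled user representation gives the model an incentive to keep common and unique preferences in separate, independent vectors, which is the intuition behind preference editing.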
