Article

Dual constraints and adversarial learning for fair recommenders

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 239, Issue -, Pages -

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.108058

Keywords

Fair recommendation; Graph neural network; Recommender systems; Adversarial learning

Funding

  1. National Natural Science Foundation of China [61772103, 62076046, 61976036, 62006034]
  2. Ministry of Education Humanities and Social Science Project [19YJCZH199]

Abstract

Recommender systems have a profound impact on people's lifestyles, but fairness problems have been identified. The presence of sensitive information in user behavior data leads to unfairness. To address this, a fairness-aware recommender model with dual fairness constraints is proposed, utilizing an adversarial graph neural network and fairness constraints to improve the fairness of recommendations.
Recommender systems, which are built on common artificial intelligence technology, have a profound impact on people's lifestyles. However, recent studies have demonstrated that recommender systems suffer from fairness problems, meaning that people with certain attributes are treated unfairly. A fair recommender is one in which users with different attributes achieve the same recommendation accuracy. In particular, recommender systems rely entirely on users' behavior data for preference learning, which makes unfairness highly likely because the behavior data usually contains users' sensitive information. Unfortunately, only a few studies have explored the unfairness problem in recommender systems. To alleviate this problem, we present a novel fairness-aware recommender with dual fairness constraints (FRFC) to improve fairness in recommendations and protect users' sensitive information from being exposed. This model has several advantages: first, an adversarial-based graph neural network (GNN) is proposed to prevent the target user from being contaminated by the sensitive features of neighboring users; second, two fairness constraints are proposed to address adversarial classifier failures on the whole dataset and unfair ranking losses. With this design, the FRFC model can effectively filter out users' sensitive information and give users with different attributes the same training opportunities, which helps to produce fair recommendations. Finally, extensive experiments demonstrate that the proposed model can significantly improve the fairness of recommendation results. (c) 2021 Elsevier B.V. All rights reserved.
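For readers unfamiliar with the adversarial-filtering idea the abstract describes, below is a minimal sketch of the general technique: an adversary tries to predict a sensitive attribute from user embeddings, while a gradient reversal layer pushes the embedding model to remove that signal. All class and function names here are hypothetical illustrations, not the paper's actual FRFC implementation; the GNN architecture and the two fairness constraints are specified only in the full text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the embedding model is trained to fool the adversary."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

class SensitiveAdversary(nn.Module):
    """Classifier that tries to recover a sensitive attribute (e.g. gender)
    from the user embeddings produced by the recommender's encoder."""
    def __init__(self, emb_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, n_classes),
        )

    def forward(self, user_emb: torch.Tensor) -> torch.Tensor:
        # The gradient reversal layer sits between encoder and adversary.
        return self.net(GradReverse.apply(user_emb))

def joint_loss(rec_loss, adv_logits, sensitive_labels, lam=0.5):
    # Hypothetical joint objective: recommendation loss plus an adversarial
    # term. Minimizing it trains the adversary to predict the attribute,
    # while the reversed gradients train the encoder to strip that attribute
    # from the embeddings.
    return rec_loss + lam * F.cross_entropy(adv_logits, sensitive_labels)
```

In this min-max setup the trade-off weight (here `lam`) controls how aggressively sensitive information is filtered relative to recommendation accuracy; the paper's dual fairness constraints additionally address cases where such an adversary fails on the whole dataset and where ranking losses are distributed unfairly across user groups.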
