Paper Title

Fairness-Aware Explainable Recommendation over Knowledge Graphs

Paper Authors

Zuohui Fu, Yikun Xian, Ruoyuan Gao, Jieyu Zhao, Qiaoying Huang, Yingqiang Ge, Shuyuan Xu, Shijie Geng, Chirag Shah, Yongfeng Zhang, Gerard de Melo

Paper Abstract

There has been growing attention on fairness considerations recently, especially in the context of intelligent decision making systems. Explainable recommendation systems, in particular, may suffer from both explanation bias and performance disparity. In this paper, we analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups. We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users, and that their recommendations may be biased by the training records of more active users, due to the nature of collaborative filtering, which leads to an unfair treatment by the system. We propose a fairness constrained approach via heuristic re-ranking to mitigate this unfairness problem in the context of explainable recommendation over knowledge graphs. We experiment on several real-world datasets with state-of-the-art knowledge graph-based explainable recommendation algorithms. The promising results show that our algorithm is not only able to provide high-quality explainable recommendations, but also reduces the recommendation unfairness in several respects.
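
The abstract describes grouping users by activity level and measuring the recommendation-performance gap between groups before applying the fairness-constrained re-ranking. The sketch below illustrates only that grouping-and-disparity step; it is a minimal illustration under assumed inputs, and the function names, the 5% activity cutoff, and the choice of per-user metric are hypothetical rather than taken from the paper.

```python
from collections import Counter
from statistics import mean

def split_users_by_activity(interactions, top_fraction=0.05):
    """Split users into an 'active' group (the most-interacting fraction)
    and an 'inactive' group (everyone else).
    `interactions` is an iterable of (user_id, item_id) pairs.
    The 5% cutoff is an illustrative default, not the paper's setting."""
    counts = Counter(user for user, _ in interactions)
    ranked_users = [user for user, _ in counts.most_common()]
    n_active = max(1, int(len(ranked_users) * top_fraction))
    return set(ranked_users[:n_active]), set(ranked_users[n_active:])

def performance_disparity(per_user_metric, active_users, inactive_users):
    """Gap in average per-user recommendation quality (e.g. NDCG@k)
    between the active and inactive groups; a positive value means
    active users are better served."""
    avg_active = mean(per_user_metric[u] for u in active_users if u in per_user_metric)
    avg_inactive = mean(per_user_metric[u] for u in inactive_users if u in per_user_metric)
    return avg_active - avg_inactive
```

A re-ranking step such as the one proposed in the paper would then adjust the recommendation lists of the disadvantaged group to shrink this gap while preserving explanation quality; its exact formulation is not specified in the abstract and is therefore not sketched here.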
