Paper Title
Two-Sided Fairness in Non-Personalised Recommendations
Paper Authors
Paper Abstract
Recommender systems are among the most widely used services on online platforms, suggesting potential items to end-users. These services often rely on various machine learning techniques for which fairness is a concern, especially when the downstream services can have social ramifications. Thus, focusing on non-personalised (global) recommendations on news media platforms (e.g., top-k trending topics on Twitter, top-k news on a news platform, etc.), we discuss two specific fairness concerns together (traditionally studied separately): user fairness and organisational fairness. While user fairness captures the idea of representing the choices of all the individual users in the case of global recommendations, organisational fairness tries to ensure politically/ideologically balanced recommendation sets. This makes user fairness a user-side requirement and organisational fairness a platform-side requirement. For user fairness, we test methods from social choice theory, i.e., various voting rules known to represent user choices well in their outcomes. When we apply these voting rules to the recommendation setup, we indeed observe high user satisfaction scores. For organisational fairness, we propose a bias metric that measures the aggregate ideological bias of a recommended set of items (articles). Analysing the results obtained from voting rule-based recommendations, we find that while the well-known voting rules are better from the user side, they show high bias values and are clearly not suitable for the organisational requirements of the platforms. Thus, there is a need to build an encompassing mechanism that cohesively bridges the ideas of user fairness and organisational fairness. In this abstract, we intend to frame the elementary ideas along with the motivation behind the requirement for such a mechanism.
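To make the two requirements concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the kind of pipeline the abstract describes: a classical voting rule (Borda count) producing a global top-k recommendation from per-user rankings, and a simple aggregate bias score over the selected items. The item names, user rankings, and the bias scale in [-1, +1] are illustrative assumptions, and the mean-leaning score shown is a stand-in for, not the definition of, the bias metric proposed in the paper.

```python
# Hypothetical sketch: Borda-based global top-k selection plus a simple
# aggregate ideological bias score for the selected set.
from collections import defaultdict

def borda_top_k(rankings, k):
    """rankings: list of per-user ranked item lists (most preferred first).
    Returns the k items with the highest total Borda score."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - 1 - position  # top item gets n-1 points, last gets 0
    return sorted(scores, key=scores.get, reverse=True)[:k]

def set_bias(selected, leaning):
    """Aggregate ideological bias of a recommended set.
    leaning: item -> score in [-1, +1] (e.g., left to right).
    Here simply the mean leaning; a value near 0 indicates a balanced set."""
    return sum(leaning[item] for item in selected) / len(selected)

# Hypothetical data: 3 users ranking 4 articles, with per-article leaning scores.
rankings = [["a1", "a2", "a3", "a4"],
            ["a2", "a1", "a4", "a3"],
            ["a1", "a3", "a2", "a4"]]
leaning = {"a1": -0.8, "a2": -0.6, "a3": 0.7, "a4": 0.5}

top2 = borda_top_k(rankings, k=2)
print(top2, set_bias(top2, leaning))
# ['a1', 'a2'] -0.7 : the set users prefer most can still be ideologically skewed.
```

The toy output illustrates the tension the abstract points to: the voting rule picks the items that best represent user choices, yet the resulting set scores poorly on the balance (bias) measure, motivating a mechanism that handles both requirements together.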