Paper Title

Interventions for Ranking in the Presence of Implicit Bias

Authors

L. Elisa Celis, Anay Mehrotra, Nisheeth K. Vishnoi

Abstract

Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group (e.g., one defined by gender or race). Studies on implicit bias have shown that these unconscious stereotypes can lead to adverse outcomes in various social contexts, such as job screening, teaching, or policing. Recently, Kleinberg and Raghavan (2018) considered a mathematical model for implicit bias and showed the effectiveness of the Rooney Rule as a constraint to improve the utility of the outcome for certain cases of the subset selection problem. Here we study the problem of designing interventions for the generalization of subset selection -- ranking -- which requires outputting an ordered set and is a central primitive in various social and computational contexts. We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias for a generalization of the model studied by Kleinberg and Raghavan (2018). Subsequently, we prove that under natural distributional assumptions on the utilities of items, simple Rooney Rule-like constraints can, surprisingly, also recover almost all of the utility lost due to implicit bias. Finally, we augment our theoretical results with empirical findings on real-world distributions from the IIT-JEE (2009) dataset and the Semantic Scholar Research corpus.
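
The abstract describes two ingredients that a short, self-contained sketch can make concrete: the multiplicative implicit-bias model of Kleinberg and Raghavan (2018), in which an evaluator observes the utility of candidates from one group scaled down by a factor beta >= 1, and a Rooney-Rule-like prefix constraint that lower-bounds the representation of that group in every top-k position. The Python sketch below is purely illustrative and is not the authors' implementation; the constraint schedule (half of every prefix), the bias factor beta = 3, the uniform utility distribution, and the logarithmic position discount are assumptions made for this example only.

```python
import math
import random

# A minimal, hypothetical sketch (not the authors' code) of the setting above.
# It assumes the multiplicative implicit-bias model of Kleinberg and Raghavan (2018):
# the evaluator sees the true utility of every group-B candidate divided by beta >= 1.

def biased_order(candidates, beta):
    """Order candidates by the bias-distorted utility the evaluator observes."""
    return sorted(candidates,
                  key=lambda c: c["utility"] / (beta if c["group"] == "B" else 1.0),
                  reverse=True)

def constrained_ranking(candidates, beta, lower_bounds):
    """Greedily fill positions 1..n from the biased order, subject to a
    Rooney-Rule-like prefix constraint: at least lower_bounds[k] group-B
    candidates must appear among the top k (a hypothetical schedule)."""
    remaining = biased_order(candidates, beta)
    ranking = []
    for k in range(1, len(candidates) + 1):
        placed_b = sum(c["group"] == "B" for c in ranking)
        if lower_bounds.get(k, 0) > placed_b:
            pick = next(c for c in remaining if c["group"] == "B")
        else:
            pick = remaining[0]
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

def discounted_utility(ranking):
    """True (latent) utility of a ranking with a logarithmic position discount."""
    return sum(c["utility"] / math.log2(i + 2) for i, c in enumerate(ranking))

if __name__ == "__main__":
    random.seed(0)
    # Two groups with identically distributed true utilities.
    candidates = [{"group": g, "utility": random.random()}
                  for g in ("A", "B") for _ in range(50)]
    beta = 3.0                                   # implicit bias against group B
    bounds = {k: k // 2 for k in range(1, 101)}  # e.g., half of every prefix from group B

    print("biased, unconstrained :", round(discounted_utility(biased_order(candidates, beta)), 3))
    print("biased, constrained   :", round(discounted_utility(constrained_ranking(candidates, beta, bounds)), 3))
    print("unbiased optimum      :", round(discounted_utility(
        sorted(candidates, key=lambda c: c["utility"], reverse=True)), 3))
```

Under these toy assumptions the constrained ranking should land much closer to the unbiased optimum than the unconstrained biased ranking, which is the qualitative behavior the paper establishes formally in a far more general setting.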
