Paper Title

On Disentangled and Locally Fair Representations

Authors

Yaron Gurovich, Sagie Benaim, Lior Wolf

Abstract

We study the problem of performing classification in a manner that is fair with respect to sensitive groups, such as race and gender. This problem is tackled through the lens of disentangled and locally fair representations. We learn a locally fair representation such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute. For instance, when a decision is made to hire an individual, we ensure that the $K$ most similar hired individuals are racially balanced. Crucially, we ensure that similar individuals are found based on attributes not correlated with their race. To this end, we disentangle the embedding space into two representations: the first is correlated with the sensitive attribute, while the second is not. We apply our local fairness objective only to the second, uncorrelated, representation. Through a set of experiments, we demonstrate the necessity of both disentanglement and local fairness for obtaining fair and accurate representations. We evaluate our method in real-world settings such as predicting income and re-incarceration rate, and demonstrate its advantage.
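
To make the local fairness idea concrete, below is a minimal, hypothetical PyTorch-style sketch of such an objective, not the authors' implementation. The function name `local_fairness_loss`, the soft-neighborhood weighting (a differentiable stand-in for hard $K$-nearest-neighbor selection), and the 50/50 balance target for a binary attribute are all assumptions made for illustration.

```python
import torch

def local_fairness_loss(z_uncorr: torch.Tensor,
                        sensitive: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch: penalize neighborhoods in the uncorrelated
    embedding that are imbalanced w.r.t. a binary sensitive attribute.

    z_uncorr:  (N, d) embeddings from the branch disentangled from the
               sensitive attribute
    sensitive: (N,)   binary sensitive attribute in {0, 1}
    """
    # Pairwise squared distances in the uncorrelated representation.
    dist = torch.cdist(z_uncorr, z_uncorr) ** 2              # (N, N)
    # Exclude each sample from its own neighborhood.
    eye = torch.eye(z_uncorr.size(0), dtype=torch.bool,
                    device=z_uncorr.device)
    dist = dist.masked_fill(eye, float("inf"))
    # Soft neighborhood weights: closer samples count more. This is a
    # differentiable stand-in for hard K-nearest-neighbor selection.
    weights = torch.softmax(-dist / temperature, dim=1)      # (N, N)
    # Expected fraction of group-1 neighbors around each sample.
    neigh_frac = weights @ sensitive.float()                 # (N,)
    # Assumes "balanced" means roughly half of each group per neighborhood.
    return ((neigh_frac - 0.5) ** 2).mean()

# Example usage with random data:
# z = torch.randn(128, 16, requires_grad=True)
# s = torch.randint(0, 2, (128,))
# local_fairness_loss(z, s).backward()
```

In this sketch, the `temperature` parameter controls how sharply the soft neighborhood concentrates on the nearest samples: lower values approximate hard $K$-nearest-neighbor balance more closely, at the cost of sparser gradients.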
