Paper Title
A Distributionally Robust Approach to Fair Classification
Paper Authors
Abstract
We propose a distributionally robust logistic regression model with an unfairness penalty that prevents discrimination with respect to sensitive attributes such as gender or ethnicity. This model is equivalent to a tractable convex optimization problem if a Wasserstein ball centered at the empirical distribution on the training data is used to model distributional uncertainty and if a new convex unfairness measure is used to incentivize equalized opportunities. We demonstrate that the resulting classifier improves fairness at a marginal loss of predictive accuracy on both synthetic and real datasets. We also derive linear programming-based confidence bounds on the level of unfairness of any pre-trained classifier by leveraging techniques from optimal uncertainty quantification over Wasserstein balls.
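To make the idea of training a classifier with an unfairness penalty concrete, the following is a minimal illustrative sketch. It fits plain logistic regression by gradient descent and adds a simple convex dependence proxy, the absolute covariance between the sensitive attribute and the decision scores. This proxy and the training loop are assumptions for illustration only: they are not the paper's unfairness measure, and no Wasserstein ambiguity set or distributional robustness is modeled here.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fair_logistic_regression(X, y, s, fairness_weight=0.0, lr=0.1, epochs=500):
    """Logistic regression with an illustrative unfairness penalty.

    Minimizes  avg. logistic loss + fairness_weight * |cov(s, X @ w)|.
    The covariance term is a common convex proxy for dependence between
    the sensitive attribute ``s`` and the decision scores; it stands in
    for (and is NOT) the paper's convex unfairness measure.
    """
    n, d = X.shape
    w = np.zeros(d)
    s_centered = s - s.mean()
    for _ in range(epochs):
        scores = X @ w
        p = sigmoid(scores)
        # Gradient of the average logistic loss.
        grad_loss = X.T @ (p - y) / n
        # Subgradient of |mean(s_centered * (X @ w))| w.r.t. w.
        cov = s_centered @ scores / n
        grad_fair = np.sign(cov) * (X.T @ s_centered) / n
        w -= lr * (grad_loss + fairness_weight * grad_fair)
    return w
```

On synthetic data in which one feature is shifted by a binary sensitive attribute, increasing `fairness_weight` shrinks the covariance between the attribute and the scores, at some cost in fit, mirroring the accuracy-fairness trade-off described in the abstract.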