Paper Title
Minimax AUC Fairness: Efficient Algorithm with Provable Convergence
Paper Authors
Paper Abstract
The use of machine learning models in consequential decision making often exacerbates societal inequity, in particular yielding disparate impact on members of marginalized groups defined by race and gender. The area under the ROC curve (AUC) is widely used to evaluate the performance of scoring functions in machine learning, but has received less attention in algorithmic fairness than other performance metrics. Due to the pairwise nature of the AUC, an AUC-based group fairness metric is inherently pairwise and may involve both \emph{intra-group} and \emph{inter-group} AUCs. Importantly, considering only one category of AUCs is not sufficient to mitigate unfairness in AUC optimization. In this paper, we propose a minimax learning and bias mitigation framework that incorporates both intra-group and inter-group AUCs while maintaining utility. Based on this Rawlsian framework, we design an efficient stochastic optimization algorithm and prove its convergence to the minimum group-level AUC. We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm.
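To make the pairwise structure concrete, the sketch below (not the authors' implementation; all function names are illustrative) computes empirical group-level AUCs: comparing scores of positives from group a against negatives from group b gives an intra-group AUC when a = b and an inter-group AUC when a ≠ b, and the Rawlsian minimax objective would then maximize the worst (minimum) such AUC.

```python
# A minimal sketch, assuming hard labels in {0, 1} and a discrete group attribute.
import numpy as np

def pairwise_auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly, ties counted as 0.5."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    return np.mean((pos > neg) + 0.5 * (pos == neg))

def group_level_aucs(scores, labels, groups):
    """All intra-group (a == b) and inter-group (a != b) AUCs, keyed by (positive group, negative group)."""
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    aucs = {}
    for a in np.unique(groups):
        for b in np.unique(groups):
            pos = scores[(groups == a) & (labels == 1)]
            neg = scores[(groups == b) & (labels == 0)]
            if len(pos) and len(neg):
                aucs[(a, b)] = pairwise_auc(pos, neg)
    return aucs

# Worst-group AUC, i.e. the quantity a minimax learner would push up:
# scores = model(X); worst_auc = min(group_level_aucs(scores, y, g).values())
```

In practice the paper optimizes this minimax objective with a stochastic algorithm rather than by enumerating all pairs as above; the sketch only illustrates why both intra-group and inter-group AUCs enter the group-level fairness metric.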