Paper Title

Robust Fair Clustering: A Novel Fairness Attack and Defense Framework

Paper Authors

Anshuman Chhabra, Peizhao Li, Prasant Mohapatra, Hongfu Liu

Abstract

Clustering algorithms are widely used in many societal resource allocation applications, such as loan approvals and candidate recruitment, among others, and hence, biased or unfair model outputs can adversely impact individuals that rely on these applications. To this end, many fair clustering approaches have been recently proposed to counteract this issue. Due to the potential for significant harm, it is essential to ensure that fair clustering algorithms provide consistently fair outputs even under adversarial influence. However, fair clustering algorithms have not been studied from an adversarial attack perspective. In contrast to previous research, we seek to bridge this gap and conduct a robustness analysis against fair clustering by proposing a novel black-box fairness attack. Through comprehensive experiments, we find that state-of-the-art models are highly susceptible to our attack as it can reduce their fairness performance significantly. Finally, we propose Consensus Fair Clustering (CFC), the first robust fair clustering approach that transforms consensus clustering into a fair graph partitioning problem, and iteratively learns to generate fair cluster outputs. Experimentally, we observe that CFC is highly robust to the proposed attack and is thus a truly robust fair clustering alternative.
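The abstract refers to the "fairness performance" of a clustering. A common way to quantify this in the fair clustering literature is the balance metric (due to Chierichetti et al.): each cluster's fairness is the minimum ratio between the sizes of any two protected groups within it, and the clustering's balance is the worst case over clusters. The sketch below is illustrative only; the function name and exact formulation are assumptions, not necessarily the metric used in this paper.

```python
from collections import Counter

def cluster_balance(cluster_labels, group_labels):
    """Balance of a clustering over protected groups.

    For each cluster, compute the ratio of the smallest to the largest
    protected-group count inside it; the overall balance is the minimum
    over clusters. Ranges in [0, 1]; higher is fairer, and 1.0 means
    every cluster contains all groups in equal proportion.
    """
    groups = set(group_labels)
    balance = 1.0
    for c in set(cluster_labels):
        counts = Counter(g for cl, g in zip(cluster_labels, group_labels)
                         if cl == c)
        sizes = [counts.get(g, 0) for g in groups]
        if min(sizes) == 0:
            return 0.0  # a cluster is missing a group entirely
        balance = min(balance, min(sizes) / max(sizes))
    return balance

# Two clusters, each with one member of group "a" and one of "b":
print(cluster_balance([0, 0, 1, 1], ["a", "b", "a", "b"]))  # → 1.0
```

A fairness attack in this setting would perturb the input so that the resulting clusters become group-homogeneous, driving a score like this toward zero even when clustering utility (e.g. k-means cost) is barely affected.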
