Paper Title
FairOD: Fairness-aware Outlier Detection
Paper Authors
Paper Abstract
Fairness and Outlier Detection (OD) are closely related, as it is exactly the goal of OD to spot rare, minority samples in a given population. However, when being a minority (as defined by protected variables, such as race/ethnicity/sex/age) does not reflect positive-class membership (such as criminal/fraud), OD produces unjust outcomes. Surprisingly, fairness-aware OD has been almost untouched in prior work, as the fair machine learning literature mainly focuses on supervised settings. Our work aims to bridge this gap. Specifically, we develop desiderata capturing well-motivated fairness criteria for OD, and systematically formalize the fair OD problem. Further, guided by our desiderata, we propose FairOD, a fairness-aware outlier detector that has the following desirable properties: FairOD (1) exhibits treatment parity at test time, (2) aims to flag equal proportions of samples from all groups (i.e., obtaining group fairness via statistical parity), and (3) strives to flag truly high-risk samples within each group. Extensive experiments on a diverse set of synthetic and real-world datasets show that FairOD produces outcomes that are fair with respect to protected variables, while performing comparably to (and in some cases, even better than) fairness-agnostic detectors in terms of detection performance.
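As a minimal illustration of the statistical-parity criterion mentioned in the abstract, the sketch below computes per-group flag rates and their gap; a gap of zero means the detector flags equal proportions of samples from every group. The function names are illustrative, not from the paper.

```python
import numpy as np

def flag_rates(flags, groups):
    """Fraction of flagged samples within each protected group."""
    return {g: float(flags[groups == g].mean()) for g in np.unique(groups)}

def statistical_parity_gap(flags, groups):
    """Max difference in flag rates across groups; 0.0 is perfect statistical parity."""
    rates = flag_rates(flags, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: a detector that flags 10% of each group satisfies statistical parity.
flags = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                  1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
groups = np.array(["a"] * 10 + ["b"] * 10)
print(flag_rates(flags, groups))            # {'a': 0.1, 'b': 0.1}
print(statistical_parity_gap(flags, groups))  # 0.0
```

In contrast, a fairness-agnostic detector that over-flags one group would show a positive gap, which is the kind of disparity FairOD is designed to reduce.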