Paper Title

Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Paper Authors

Weinberg, Lindsay

Paper Abstract

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions into machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
