Paper Title
Gender Artifacts in Visual Datasets
Paper Authors
Paper Abstract
Gender biases are known to exist within large-scale visual datasets and can be reflected or even amplified in downstream models. Many prior works have proposed methods for mitigating gender biases, often by attempting to remove gender expression information from images. To understand the feasibility and practicality of these approaches, we investigate what $\textit{gender artifacts}$ exist within large-scale visual datasets. We define a $\textit{gender artifact}$ as a visual cue that is correlated with gender, focusing specifically on those cues that are learnable by a modern image classifier and have an interpretable human corollary. Through our analyses, we find that gender artifacts are ubiquitous in the COCO and OpenImages datasets, occurring everywhere from low-level information (e.g., the mean value of the color channels) to the higher-level composition of the image (e.g., pose and location of people). Given the prevalence of gender artifacts, we claim that attempts to remove gender artifacts from such datasets are largely infeasible. Instead, the responsibility lies with researchers and practitioners to be aware that the distribution of images within datasets is highly gendered, and hence to develop methods that are robust to these distributional shifts across groups.
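To make the "low-level information" claim concrete, here is a minimal sketch (not the paper's exact experimental setup) of how one could probe whether a trivial feature such as the per-image mean of each color channel already carries gender-correlated signal. The variables `images` (a list of HxWx3 RGB arrays) and `labels` (matching binary gender annotations) are assumed inputs for illustration; test-set AUC above 0.5 would indicate that even this crude feature acts as a gender artifact.

```python
# Illustrative probe: predict gender annotations from per-image mean RGB values.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def mean_rgb_features(images):
    # One 3-dimensional feature vector per image: mean of the R, G, B channels.
    return np.stack([img.reshape(-1, 3).mean(axis=0) for img in images])

# `images` and `labels` are hypothetical inputs, e.g. loaded from COCO person crops.
X = mean_rgb_features(images)
y = np.asarray(labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC of mean-RGB probe: {auc:.3f}")  # noticeably above 0.5 => gender artifact
```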