Title

Improved Group Robustness via Classifier Retraining on Independent Splits

Authors

Thien Hang Nguyen, Hongyang R. Zhang, Huy Le Nguyen

Abstract

Deep neural networks trained by minimizing the average risk can achieve strong average performance. Still, their performance for a subgroup may degrade if the subgroup is underrepresented in the overall data population. Group distributionally robust optimization (Sagawa et al., 2020a), or group DRO in short, is a widely used baseline for learning models with strong worst-group performance. We note that this method requires group labels for every example at training time and can overfit to small groups, requiring strong regularization. Given a limited amount of group labels at training time, Just Train Twice (Liu et al., 2021), or JTT in short, is a two-stage method that infers a pseudo group label for every unlabeled example first, then applies group DRO based on the inferred group labels. The inference process is also sensitive to overfitting, sometimes involving additional hyperparameters. This paper designs a simple method based on the idea of classifier retraining on independent splits of the training data. We find that using a novel sample-splitting procedure achieves robust worst-group performance in the fine-tuning step. When evaluated on benchmark image and text classification tasks, our approach consistently performs favorably to group DRO, JTT, and other strong baselines when either group labels are available during training or are only given in validation sets. Importantly, our method only relies on a single hyperparameter, which adjusts the fraction of labels used for training feature extractors vs. training classification layers. We justify the rationale of our splitting scheme with a generalization-bound analysis of the worst-group loss.
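
The abstract describes a two-stage procedure: split the labeled training data once, train a feature extractor on one split, then retrain only the final classification layer on the held-out split, with the split fraction as the single hyperparameter. The sketch below illustrates that idea in PyTorch under stated assumptions; it is not the authors' implementation. The names (`split_frac`, the `.fc` head, the loader settings) are illustrative, and any group-robust reweighting used in the paper's retraining step is omitted here.

```python
# Minimal sketch of classifier retraining on independent splits.
# Assumptions: a PyTorch model whose final layer is `model.fc`
# (e.g., a torchvision ResNet), and a labeled `dataset`.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split


def train(model, loader, params, epochs=1, lr=1e-3):
    """Standard ERM loop that updates only `params`."""
    opt = torch.optim.SGD(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


def retrain_on_independent_splits(model, dataset, split_frac=0.8):
    # `split_frac` plays the role of the single hyperparameter from the
    # abstract: the fraction of labels used for the feature extractor,
    # with the remainder held out for the classification layer.
    n = len(dataset)
    n_feat = int(split_frac * n)
    feat_split, cls_split = random_split(dataset, [n_feat, n - n_feat])

    # Stage 1: train the full network (features + head) on split 1.
    train(model, DataLoader(feat_split, batch_size=64, shuffle=True),
          model.parameters(), epochs=10)

    # Stage 2: freeze the features, then re-initialize and retrain only
    # the final classification layer on the independent split 2.
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, model.fc.out_features)
    train(model, DataLoader(cls_split, batch_size=64, shuffle=True),
          model.fc.parameters(), epochs=10)
    return model
```

Because the classifier is fit on data that the feature extractor never saw, the retraining step cannot exploit split-specific overfitting in the features, which is the intuition behind the paper's generalization-bound analysis of the worst-group loss.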
