Paper Title

Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization

Paper Authors

Ziquan Liu, Antoni B. Chan

Paper Abstract

The adversarial vulnerability of deep neural networks (DNNs) has been actively investigated in the past several years. This paper investigates the scale-variant property of cross-entropy loss, which is the most commonly used loss function in classification tasks, and its impact on the effective margin and adversarial robustness of deep neural networks. Since the loss function is not invariant to logit scaling, increasing the effective weight norm will make the loss approach zero and its gradient vanish while the effective margin is not adequately maximized. On typical DNNs, we demonstrate that, if not properly regularized, the standard training does not learn large effective margins and leads to adversarial vulnerability. To maximize the effective margins and learn a robust DNN, we propose to regularize the effective weight norm during training. Our empirical study on feedforward DNNs demonstrates that the proposed effective margin regularization (EMR) learns large effective margins and boosts the adversarial robustness in both standard and adversarial training. On large-scale models, we show that EMR outperforms basic adversarial training, TRADES and two regularization baselines with substantial improvement. Moreover, when combined with several strong adversarial defense methods (MART and MAIL), our EMR further boosts the robustness.
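The abstract describes EMR as penalizing the effective weight norm so that the cross-entropy loss cannot be driven toward zero by merely scaling the logits, and can only decrease by actually enlarging the effective margin. Below is a minimal, hypothetical PyTorch sketch of this idea, assuming that for a (piecewise-linear) network the "effective weight" of the logit margin can be approximated by its input gradient; the function name emr_style_loss, the margin definition, and the reg_weight coefficient are illustrative assumptions, not the paper's official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def emr_style_loss(model, x, y, reg_weight=0.1):
    """Cross-entropy plus a penalty on the input-gradient norm of the logit margin.

    Hypothetical approximation of regularizing the "effective weight norm";
    the paper's exact EMR objective may differ.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)                                    # (B, C)
    ce = F.cross_entropy(logits, y)

    # Margin between the true-class logit and the largest competing logit.
    true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.scatter(1, y.unsqueeze(1), float("-inf"))
    margin = true_logit - others.max(dim=1).values       # (B,)

    # Input gradient of the margin serves as a proxy for the effective weight.
    grad = torch.autograd.grad(margin.sum(), x, create_graph=True)[0]
    eff_weight_norm = grad.flatten(1).norm(dim=1)        # (B,)

    # Penalizing this norm discourages loss reduction via logit scaling alone.
    return ce + reg_weight * eff_weight_norm.mean()


if __name__ == "__main__":
    # Toy usage: a small MLP on random MNIST-shaped data.
    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                          nn.Linear(128, 10))
    x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
    loss = emr_style_loss(model, x, y)
    loss.backward()
    print(float(loss))
```

In standard training this penalty would simply be added to the cross-entropy objective; per the abstract, the same regularization can also be applied on top of adversarial training or defenses such as TRADES, MART, and MAIL by computing the loss on adversarial examples.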
