Paper Title
Squeeze Training for Adversarial Robustness
Paper Authors
Paper Abstract
The vulnerability of deep neural networks (DNNs) to adversarial examples has attracted great attention in the machine learning community. The problem is related to the non-flatness and non-smoothness of normally obtained loss landscapes, and training augmented with adversarial examples (a.k.a. adversarial training) is considered an effective remedy. In this paper, we highlight that some collaborative examples, which are nearly perceptually indistinguishable from both adversarial and benign examples yet exhibit extremely low prediction loss, can be utilized to enhance adversarial training. A novel method is therefore proposed that achieves a new state of the art in adversarial robustness. Code: https://github.com/qizhangli/ST-AT.
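To make the notion of a collaborative example concrete, the sketch below searches an L∞ ball around an input for a perturbation that *minimizes* the prediction loss, i.e., the sign-gradient *descent* counterpart of a PGD attack. This is only an illustrative toy on binary logistic regression (the model, step sizes, and function names here are our own assumptions for demonstration, not the paper's actual DNN setup or training procedure):

```python
import numpy as np

def logistic_loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic model p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def collaborative_example(x, w, b, y, eps=0.1, alpha=0.02, steps=10):
    """Find a perturbation within the L-infinity eps-ball around x that
    DECREASES the loss (the opposite direction of a PGD adversarial attack),
    giving a toy 'collaborative example' in the sense of the abstract."""
    x_col = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_col + b)))
        grad = (p - y) * w          # gradient of the loss w.r.t. the input
        x_col = x_col - alpha * np.sign(grad)   # descend, not ascend
        x_col = np.clip(x_col, x - eps, x + eps)  # project back into the ball
    return x_col

# Toy usage: the collaborative example stays eps-close to x but has lower loss.
rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x, y = rng.normal(size=5), 1.0
x_col = collaborative_example(x, w, b, y)
```

A PGD attack would take the same steps with `+ alpha * np.sign(grad)`; the paper's method (per the abstract) uses such low-loss neighbors to augment adversarial training rather than as an end in themselves.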