Paper Title


SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher

Paper Authors

Thai Le, Noseong Park, Dongwon Lee

Paper Abstract


Even though several methods have been proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch. This leads to a lack of generalization in practice and redundant computation. In particular, state-of-the-art transformer models (e.g., BERT, RoBERTa) require significant time and computational resources. By borrowing an idea from software engineering, in order to address these limitations, we propose a novel algorithm, SHIELD, which modifies and re-trains only the last layer of a textual NN, thereby "patching" and "transforming" the NN into a stochastic weighted ensemble of multi-expert prediction heads. Considering that most current black-box attacks rely on iterative search mechanisms to optimize their adversarial perturbations, SHIELD confuses the attackers by automatically utilizing different weighted ensembles of predictors depending on the input. In other words, SHIELD breaks a fundamental assumption of the attack: that the victim NN model remains constant during an attack. Through comprehensive experiments, we demonstrate that CNN-, RNN-, BERT-, and RoBERTa-based textual NNs, once patched by SHIELD, exhibit a relative improvement of 15%--70% in accuracy on average against 14 different black-box attacks, outperforming 6 defensive baselines across 3 public datasets. All code is to be released.
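The abstract's core idea is replacing a model's single prediction head with a stochastic weighted ensemble of expert heads, so that repeated attacker queries on the same input may be answered by different expert mixtures. Below is a minimal, dependency-free sketch of that idea over a frozen feature vector. The class name, the Gaussian-initialized linear experts, and the random-softmax gating are illustrative assumptions for this sketch only; the paper's actual gating is learned during the patching step, not drawn from uniform noise.

```python
import math
import random


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


class StochasticMultiExpertHead:
    """Sketch of a stochastic ensemble of linear prediction heads.

    Each "expert" is an independent linear classifier over the frozen
    encoder's feature vector. Ensemble weights are re-sampled on every
    forward pass, so the effective predictor changes between queries
    (a hypothetical simplification of SHIELD's mechanism).
    """

    def __init__(self, num_experts, feat_dim, num_classes, seed=None):
        self.rng = random.Random(seed)
        # One weight matrix per expert: num_classes x feat_dim.
        self.experts = [
            [[self.rng.gauss(0.0, 0.1) for _ in range(feat_dim)]
             for _ in range(num_classes)]
            for _ in range(num_experts)
        ]

    def forward(self, features):
        # Per-expert logits via dot products with the feature vector.
        expert_logits = [
            [sum(w * f for w, f in zip(row, features)) for row in expert]
            for expert in self.experts
        ]
        # Sample fresh stochastic ensemble weights for this query.
        alphas = softmax([self.rng.gauss(0.0, 1.0) for _ in self.experts])
        # Return the weighted sum of expert logits per class.
        num_classes = len(expert_logits[0])
        return [
            sum(a * logits[c] for a, logits in zip(alphas, expert_logits))
            for c in range(num_classes)
        ]


head = StochasticMultiExpertHead(num_experts=3, feat_dim=4, num_classes=2, seed=0)
out1 = head.forward([1.0, 0.5, -0.3, 2.0])
out2 = head.forward([1.0, 0.5, -0.3, 2.0])  # same input, different mixture
```

Because the gating weights are re-sampled per query, an iterative black-box attacker optimizing perturbations against one response is, in effect, attacking a moving target.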
