Paper Title
Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks
Paper Authors
Paper Abstract
Natural language processing (NLP) models are known to be vulnerable to backdoor attacks, which pose a newly arisen threat to such models. Prior online backdoor defense methods for NLP models focus only on anomalies at either the input or output level, and thus still suffer from fragility to adaptive attacks and high computational cost. In this work, we take the first step toward investigating the unconcealment of textual poisoned samples at the intermediate-feature level and propose a feature-based efficient online defense method. Through extensive experiments on existing attack methods, we find that poisoned samples lie far away from clean samples in the intermediate feature space of a poisoned NLP model. Motivated by this observation, we devise a distance-based anomaly score (DAN) to distinguish poisoned samples from clean samples at the feature level. Experiments on sentiment analysis and offense detection tasks demonstrate the superiority of DAN: it substantially surpasses existing online defense methods in defense performance while enjoying a lower inference cost. Moreover, we show that DAN is also resistant to adaptive attacks based on feature-level regularization. Our code is available at https://github.com/lancopku/DAN.
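To make the abstract's key idea concrete, below is a minimal Python sketch of a distance-based anomaly score of the kind it describes: clean-sample statistics are estimated in the model's intermediate feature space, and test samples far from every clean-class cluster are flagged as poisoned. The Mahalanobis-style distance, the single-layer setup, the 5% threshold, and all function names are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
# Hypothetical sketch of a distance-based anomaly score in the spirit of DAN.
# Assumptions: features are fixed-size vectors from one intermediate layer of
# the (possibly poisoned) model, and clean statistics come from a small
# held-out clean set. Names and the thresholding rule are illustrative.
import numpy as np

def fit_clean_statistics(features, labels):
    """Estimate per-class means and a shared precision matrix from clean features.

    features: (n, d) array of intermediate-layer representations of clean samples
    labels:   (n,) array of class ids
    """
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.concatenate([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return means, precision

def anomaly_score(x, means, precision):
    """Negative Mahalanobis distance to the nearest clean-class centroid.

    A low score means the sample sits far from every clean-class cluster in
    feature space, which is the signature of poisoned inputs the abstract reports.
    """
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return -min(dists)

# Usage: reject test inputs whose score falls below a threshold chosen on
# clean validation scores (here, their 5th percentile).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean_feats = rng.normal(size=(200, 8))          # stand-in for clean features
    clean_labels = rng.integers(0, 2, size=200)
    means, precision = fit_clean_statistics(clean_feats, clean_labels)

    scores = np.array([anomaly_score(f, means, precision) for f in clean_feats])
    threshold = np.percentile(scores, 5)             # tolerate ~5% clean rejections
    suspicious = rng.normal(loc=6.0, size=8)         # a far-away "poisoned-like" point
    print(anomaly_score(suspicious, means, precision) < threshold)  # True -> flag it
```

Because scoring a sample reduces to a handful of matrix-vector products against precomputed statistics, such a detector adds little overhead at inference time, consistent with the lower inference cost the abstract claims.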