Paper Title

Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks

Authors

Moshe Kravchik, Asaf Shabtai

Abstract

In recent years, a variety of effective neural network-based methods for anomaly and cyber attack detection in industrial control systems (ICSs) have been demonstrated in the literature. Given their successful implementation and widespread use, there is a need to study adversarial attacks on such detection methods to better protect the systems that depend upon them. The extensive research performed on adversarial attacks on image and malware classification has little relevance to the physical system state prediction domain, to which most ICS attack detection systems belong. Moreover, such detection systems are typically retrained using new data collected from the monitored system, so the threat of adversarial data poisoning is significant; however, this threat has not yet been addressed by the research community. In this paper, we present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors. We propose two algorithms for generating poison samples, an interpolation-based algorithm and a back-gradient optimization-based algorithm, which we evaluate on both synthetic and real-world ICS data. We demonstrate that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector; however, the ability to poison the detector is limited to a small set of attack types and magnitudes. When the poison-generating algorithms are applied to the popular SWaT dataset, we show that the autoencoder detector trained on the physical system state data is resilient to poisoning in the face of all ten of the relevant attacks in the dataset. This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than those in other problem domains, such as malware detection and image processing.
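To illustrate the "boiling the frog" idea behind the interpolation-based approach, the sketch below generates one poison sample per retraining round by linearly interpolating between a benign record and the target attack record, so each round's sample drifts only slightly from data the detector already accepts. This is a minimal illustrative sketch, not the authors' exact algorithm: the function name, the linear schedule, and the one-sample-per-round assumption are all hypothetical simplifications.

```python
import numpy as np

def interpolation_poison_schedule(benign, attack, n_rounds):
    """Hypothetical sketch of interpolation-based poisoning:
    shift gradually from a benign sensor record toward the
    target attack record over n_rounds retraining rounds."""
    samples = []
    for k in range(1, n_rounds + 1):
        alpha = k / n_rounds  # fraction of the way toward the attack
        # Convex combination: round k's poison sample is only a small
        # step beyond round k-1's, so the online-trained autoencoder's
        # reconstruction error may stay below its detection threshold.
        samples.append((1 - alpha) * benign + alpha * attack)
    return samples

# Example: 3 sensor readings, poisoned over 4 retraining rounds.
benign = np.zeros(3)
attack = np.array([1.0, 2.0, 3.0])
poison = interpolation_poison_schedule(benign, attack, 4)
# The final round's sample coincides with the attack record itself.
```

The paper's finding that such poisoning succeeds only for a narrow set of attack types and magnitudes suggests that, in practice, intermediate samples along this path often still raise the detector's reconstruction error above threshold before the final attack record is reached.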
