Paper Title
Defending Observation Attacks in Deep Reinforcement Learning via Detection and Denoising
Paper Authors
Paper Abstract
Neural network policies trained using Deep Reinforcement Learning (DRL) are well-known to be susceptible to adversarial attacks. In this paper, we consider attacks manifesting as perturbations in the observation space managed by the external environment. These attacks have been shown to downgrade policy performance significantly. We focus our attention on well-trained deterministic and stochastic neural network policies in the context of continuous control benchmarks subject to four well-studied observation space adversarial attacks. To defend against these attacks, we propose a novel defense strategy using a detect-and-denoise schema. Unlike previous adversarial training approaches that sample data in adversarial scenarios, our solution does not require sampling data in an environment under attack, thereby greatly reducing risk during training. Detailed experimental results show that our technique is comparable with state-of-the-art adversarial training approaches.
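The abstract describes the detect-and-denoise schema only at a high level. The sketch below is a minimal illustration of one plausible reading of it, assuming an autoencoder-based denoiser trained on attack-free rollouts whose reconstruction error also serves as the detection signal; the class names, the `detector_threshold` parameter, and the thresholding rule are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn


class DenoisingAutoencoder(nn.Module):
    """Hypothetical denoiser: maps a (possibly perturbed) observation back
    toward the clean-observation manifold learned from attack-free rollouts."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(obs))


class DetectAndDenoisePolicy:
    """Wraps an already-trained policy: observations flagged as adversarial
    by the detector are denoised before the policy acts on them."""

    def __init__(self, policy, denoiser: DenoisingAutoencoder, detector_threshold: float):
        self.policy = policy                  # trained DRL policy (deterministic or stochastic)
        self.denoiser = denoiser              # trained only on clean data, no sampling under attack
        self.threshold = detector_threshold   # reconstruction-error threshold tuned on clean rollouts

    def act(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            recon = self.denoiser(obs)
            # Detection: a large reconstruction error suggests the observation was perturbed.
            if torch.norm(obs - recon) > self.threshold:
                obs = recon                   # Denoising: replace the observation with its reconstruction.
            return self.policy(obs)


# Example usage with a toy continuous-control policy (dimensions chosen arbitrarily).
obs_dim, act_dim = 17, 6
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
agent = DetectAndDenoisePolicy(policy, DenoisingAutoencoder(obs_dim), detector_threshold=0.5)
action = agent.act(torch.randn(obs_dim))
```

The key property this sketch tries to reflect is the one stated in the abstract: both the detector and the denoiser can be fit from attack-free data alone, so no environment interaction under attack is needed, unlike adversarial-training baselines.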