Paper title
Secure Planning Against Stealthy Attacks via Model-Free Reinforcement Learning
Paper authors
Paper abstract
We consider the problem of security-aware planning in an unknown stochastic environment, in the presence of attacks on control signals (i.e., actuators) of the robot. We model the attacker as an agent who has full knowledge of the controller as well as the employed intrusion-detection system and who wants to prevent the controller from performing tasks while staying stealthy. We formulate the problem as a stochastic game between the attacker and the controller and present an approach to express the objectives of such an agent and the controller as a combined linear temporal logic (LTL) formula. We then show that the planning problem, described formally as the problem of satisfying an LTL formula in a stochastic game, can be solved via model-free reinforcement learning when the environment is completely unknown. Finally, we illustrate and evaluate our methods on two robotic planning case studies.
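To make the abstract's setting concrete, the following is a minimal toy sketch, not the paper's actual construction: a turn-based zero-sum stochastic game on a one-dimensional corridor, solved by tabular minimax Q-learning. The controller proposes a move, the attacker may flip it, and the attacker has a single stealthy attack; attacking again triggers the intrusion-detection system, which counts as a win for the controller. Reaching the goal is rewarded as a proxy for an LTL reachability objective ("eventually goal"). All names, the corridor layout, and the one-attack detection rule are illustrative assumptions.

```python
import random

# Toy sketch (illustrative assumptions, not the paper's model): corridor
# positions 0..4, goal at 4. Joint state is (position, attack budget).
GOAL = 4

def step(pos, budget, ctrl, atk):
    """One game round; returns (next_pos, next_budget, reward, done)."""
    if atk == 1 and budget == 0:
        return pos, budget, 1.0, True   # attack detected: controller wins
    move = -ctrl if atk == 1 else ctrl  # a stealthy attack flips the move
    budget -= atk
    pos = max(0, min(GOAL, pos + move))
    return pos, budget, (1.0 if pos == GOAL else 0.0), pos == GOAL

def minimax_value(Q, s):
    # Controller maximizes assuming a worst-case attacker reply.
    return max(min(Q[s][(c, a)] for a in (0, 1)) for c in (-1, 1))

def train(episodes=8000, alpha=0.2, gamma=0.95, eps=0.2, seed=1):
    rng = random.Random(seed)
    states = [(p, b) for p in range(GOAL + 1) for b in (0, 1)]
    # Joint Q-table over (controller move, attacker action) pairs.
    Q = {s: {(c, a): 0.0 for c in (-1, 1) for a in (0, 1)} for s in states}
    for _ in range(episodes):
        pos, budget = 0, 1
        for _ in range(30):
            s = (pos, budget)
            # Epsilon-greedy play for both players.
            c = (rng.choice((-1, 1)) if rng.random() < eps else
                 max((-1, 1), key=lambda ca: min(Q[s][(ca, a)] for a in (0, 1))))
            a = (rng.choice((0, 1)) if rng.random() < eps else
                 min((0, 1), key=lambda aa: Q[s][(c, aa)]))
            pos, budget, r, done = step(pos, budget, c, a)
            target = r if done else r + gamma * minimax_value(Q, (pos, budget))
            Q[s][(c, a)] += alpha * (target - Q[s][(c, a)])
            if done:
                break
    return Q

def controller_policy(Q, s):
    """Greedy controller move that is best against the worst-case attack."""
    return max((-1, 1), key=lambda ca: min(Q[s][(ca, a)] for a in (0, 1)))
```

Even in this tiny game the learned controller heads toward the goal despite the pending attack, because the attacker can delay it by at most one flipped move before any further interference is detected; the paper's method replaces this hand-made reward with one derived from the joint LTL formula.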