Paper Title
Exploiting Trust for Resilient Hypothesis Testing with Malicious Robots
Paper Authors
Paper Abstract
We develop a resilient binary hypothesis testing framework for decision making in adversarial multi-robot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized Fusion Center (FC) even when i) there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and ii) the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the Two Stage Approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. Here, the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the Adversarial Generalized Likelihood Ratio Test (A-GLRT) that uses both the reported robot measurements and trust observations to estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis simultaneously. We exploit special problem structure to show that this approach remains computationally tractable despite several unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions on a mock-up road network similar in spirit to Google Maps, subject to a Sybil attack. We extract the trust observations for each robot from actual communication signals which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT respectively.
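To make the decision rule concrete, below is a minimal Python sketch in the spirit of the Two Stage Approach described above: threshold trust observations to estimate which robots are legitimate, then run a likelihood ratio test over the trusted reports. The trust threshold, the Bernoulli reporting model, and the sensor error rate are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def two_stage_decision(reports, trust_obs, trust_threshold=0.5, sensor_error=0.2):
    """Illustrative two-stage decision at the fusion center (FC).

    Stage 1: estimate robot legitimacy by thresholding trust observations
    (a simplification of the paper's trust-based legitimacy estimate; in the
    hardware experiment these scores come from physical communication signals).
    Stage 2: log-likelihood ratio test over the trusted robots' one-shot
    binary reports, assuming each legitimate robot reports the true
    hypothesis with probability 1 - sensor_error (assumed model).

    reports:   length-N array of binary measurements in {0, 1}
    trust_obs: length-N array of trust scores in [0, 1], higher = more trusted
    Returns the decided hypothesis, 0 or 1.
    """
    reports = np.asarray(reports)
    trust_obs = np.asarray(trust_obs)

    # Stage 1: keep only robots whose trust score clears the threshold.
    trusted = trust_obs >= trust_threshold
    if not trusted.any():  # degenerate case: no robot is trusted
        return int(reports.mean() >= 0.5)

    y = reports[trusted]
    p = 1.0 - sensor_error  # P(report = h | true hypothesis is h)

    # Stage 2: under H1 each trusted report is Bernoulli(p);
    # under H0 it is Bernoulli(1 - p). Decide H1 if the LLR is nonnegative.
    llr = np.sum(y * np.log(p / (1 - p)) + (1 - y) * np.log((1 - p) / p))
    return int(llr >= 0.0)

# Usage: 4 legitimate robots noisily reporting H1, 6 malicious robots
# (a majority, as in the Sybil-attack experiment) all reporting 0.
rng = np.random.default_rng(0)
reports = np.concatenate([rng.random(4) < 0.8, np.zeros(6)]).astype(int)
trust = np.concatenate([rng.uniform(0.6, 0.9, 4), rng.uniform(0.1, 0.5, 6)])
print(two_stage_decision(reports, trust))  # decides 1 despite the malicious majority
```

The key design point this sketch illustrates is that trust observations, not majority vote, determine which reports enter the test, which is why the FC can still decide correctly when malicious robots outnumber legitimate ones.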