Paper Title

K-nearest Multi-agent Deep Reinforcement Learning for Collaborative Tasks with a Variable Number of Agents

Authors

Hamed Khorasgani, Haiyan Wang, Hsiu-Khuern Tang, Chetan Gupta

Abstract

Traditionally, the performance of multi-agent deep reinforcement learning algorithms is demonstrated and validated in gaming environments, where the number of agents is often fixed. In many industrial applications, the number of available agents can change on any given day, and even when the number of agents is known ahead of time, it is common for an agent to break during operation and become unavailable for a period of time. In this paper, we propose a new deep reinforcement learning algorithm for multi-agent collaborative tasks with a variable number of agents. We demonstrate the application of our algorithm using a fleet management simulator developed by Hitachi to generate realistic scenarios at a production site.
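The title suggests the core idea: each agent attends only to its K nearest neighbors, so the policy input stays fixed-size even as agents join or drop out. The abstract does not specify the observation encoding, so the sketch below is a minimal illustration under assumed conventions: agents are described by 2-D positions, the function name `k_nearest_observation` is hypothetical, and zero-padding is one plausible way to handle fewer than K available neighbors.

```python
import numpy as np

def k_nearest_observation(agent_pos, other_positions, k=3, feat_dim=2):
    """Build a fixed-size observation from the k nearest neighboring agents.

    Hypothetical sketch: pads with zeros when fewer than k other agents are
    available, so the policy network's input size stays constant as the
    number of agents varies.
    """
    obs = np.zeros((k, feat_dim))
    if len(other_positions) > 0:
        agent = np.asarray(agent_pos, dtype=float)
        others = np.asarray(other_positions, dtype=float)
        # Distance from this agent to every other agent.
        dists = np.linalg.norm(others - agent, axis=1)
        # Keep the k closest, encoded as positions relative to this agent.
        nearest = others[np.argsort(dists)[:k]]
        obs[: len(nearest)] = nearest - agent
    return obs.flatten()

# Two neighbors, k=3: the third slot is zero-padded.
obs = k_nearest_observation([0.0, 0.0], [[1.0, 0.0], [3.0, 4.0]], k=3)
```

A fixed-size vector like this can feed a standard policy network; the paper's actual architecture and feature set may differ.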
