Paper Title
Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm Networks
Paper Authors
Paper Abstract
The fifth and sixth generations of wireless communication networks are enabling tools such as Internet of Things (IoT) devices, unmanned aerial vehicles (UAVs), and artificial intelligence to improve the agricultural landscape, using a network of devices to automatically monitor farmlands. Surveying a large area requires performing many image classification tasks within a specific period of time in order to prevent damage to the farm in case of an incident, such as a fire or flood. UAVs have limited energy and computing power, and may not be able to perform all of the computationally intensive image classification tasks locally and within an appropriate amount of time. Hence, the UAVs are assumed to be able to partially offload their workload to nearby multi-access edge computing (MEC) devices. The UAVs need a decision-making algorithm that decides where each task will be performed, while also considering the time constraints and energy levels of the other UAVs in the network. In this paper, we introduce a Deep Q-Learning (DQL) approach to solve this multi-objective problem. The proposed method is compared with Q-Learning and three heuristic baselines, and the simulation results show that our DQL-based method achieves results comparable to Q-Learning in terms of the UAVs' remaining battery levels and the percentage of deadline violations. In addition, our method reaches convergence 13 times faster than Q-Learning.
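To make the offloading decision setup concrete, below is a minimal Deep Q-Learning sketch in Python/PyTorch. It is illustrative only, not the paper's formulation: the state layout (task size, time-to-deadline, UAV battery level, edge-server load), the two-action space (execute locally vs. offload to the MEC device), and all hyperparameters are assumptions, and the standard DQN target network is omitted for brevity.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 4   # (task size, time-to-deadline, battery level, edge load) -- assumed
N_ACTIONS = 2   # 0: execute locally on the UAV, 1: offload to the MEC device
GAMMA, EPSILON, BATCH_SIZE = 0.95, 0.1, 32


class QNet(nn.Module):
    """Small MLP that maps an offloading state to a Q-value per action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


qnet = QNet()
optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s') tuples


def act(state):
    """Epsilon-greedy offloading decision for one task."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return qnet(torch.tensor(state, dtype=torch.float32)).argmax().item()


def train_step():
    """One gradient step on a random minibatch sampled from the replay buffer."""
    if len(replay) < BATCH_SIZE:
        return
    batch = random.sample(replay, BATCH_SIZE)
    states, actions, rewards, next_states = zip(*batch)
    s = torch.tensor(states, dtype=torch.float32)
    a = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(rewards, dtype=torch.float32)
    s2 = torch.tensor(next_states, dtype=torch.float32)
    q = qnet(s).gather(1, a).squeeze(1)   # Q(s, a) for the actions actually taken
    with torch.no_grad():                 # bootstrapped one-step TD target
        target = r + GAMMA * qnet(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's multi-objective setting, the reward would jointly penalize energy consumption and deadline violations; here the environment loop is left to the caller, who would record each transition with, e.g., `replay.append((s, a, r, s2))` and interleave calls to `act` and `train_step`.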