Paper Title
Federated Deep Reinforcement Learning for Resource Allocation in O-RAN Slicing
Paper Authors
Paper Abstract
Recently, the open radio access network (O-RAN) has emerged as a promising technology that provides an open environment for network vendors and operators. Coordinating x-applications (xApps) is critical for increasing flexibility and guaranteeing high overall network performance in O-RAN. Meanwhile, federated reinforcement learning has been proposed as a promising technique to enhance collaboration among distributed reinforcement learning agents and improve learning efficiency. In this paper, we propose a federated deep reinforcement learning algorithm to coordinate multiple independent xApps in O-RAN for network slicing. We design two xApps, namely a power control xApp and a slice-based resource allocation xApp, and we use a federated learning model to coordinate the two xApp agents, enhancing learning efficiency and improving network performance. Compared with conventional deep reinforcement learning, our proposed algorithm achieves 11% higher throughput for enhanced mobile broadband (eMBB) slices and 33% lower delay for ultra-reliable low-latency communication (URLLC) slices.
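The abstract describes a federated learning model that coordinates the two xApp agents. A common way to realize such coordination is periodic federated averaging (FedAvg) of the agents' model weights. The sketch below is a minimal illustration of that averaging step only; the function name, the flat-list weight representation, and the toy values are assumptions for illustration, not the paper's actual architecture.

```python
# Hedged sketch: FedAvg-style coordination of two xApp agents.
# Each agent (power control xApp, slice-based resource allocation xApp)
# trains local model weights; a coordinator averages them element-wise
# to form a shared global model that is sent back to both agents.
# Weights are represented as flat Python lists purely for illustration.

def fedavg(weight_sets):
    """Element-wise average of each agent's flat weight vector."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

# Hypothetical local weights after one training round (toy values,
# chosen to be exactly representable in binary floating point).
power_ctrl_w = [0.25, -0.5, 1.0]   # power control xApp agent
slice_alloc_w = [0.75, 0.0, 0.5]   # slice-based resource allocation xApp agent

global_w = fedavg([power_ctrl_w, slice_alloc_w])
print(global_w)  # [0.5, -0.25, 0.75]
```

In a full system, each agent would resume local deep reinforcement learning from `global_w`, so experience gathered by one xApp indirectly benefits the other, which is the learning-efficiency gain the abstract refers to.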