Paper Title


Task-Oriented Communication for Edge Video Analytics

Authors

Jiawei Shao, Xinjie Zhang, Jun Zhang

Abstract


With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge. Edge inference is a promising solution to move the computation-intensive workloads from low-end devices to a powerful edge server for video analytics, but the device-server communications will remain a bottleneck due to the limited bandwidth. This paper proposes a task-oriented communication framework for edge video analytics, where multiple devices collect the visual sensory data and transmit the informative features to an edge server for processing. To enable low-latency inference, this framework removes video redundancy in spatial and temporal domains and transmits minimal information that is essential for the downstream task, rather than reconstructing the videos at the edge server. Specifically, it extracts compact task-relevant features based on the deterministic information bottleneck (IB) principle, which characterizes a tradeoff between the informativeness of the features and the communication cost. As the features of consecutive frames are temporally correlated, we propose a temporal entropy model (TEM) to reduce the bitrate by taking the previous features as side information in feature encoding. To further improve the inference performance, we build a spatial-temporal fusion module at the server to integrate features of the current and previous frames for joint inference. Extensive experiments on video analytics tasks evidence that the proposed framework effectively encodes task-relevant information of video data and achieves a better rate-performance tradeoff than existing methods.
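The deterministic information bottleneck (IB) tradeoff that the abstract refers to can be sketched as follows. This is the standard deterministic IB formulation (Strouse and Schwab); the symbols Z (the transmitted feature), Y (the inference target), and β (the tradeoff weight) are assumptions for illustration, not notation taken from the abstract itself.

```latex
% Deterministic IB objective (sketch): minimize the entropy of the
% transmitted feature Z -- a proxy for the communication cost (bitrate) --
% while keeping Z informative about the task variable Y.
% beta controls the rate-performance tradeoff.
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{DIB}}
  \;=\; H(Z) \;-\; \beta\, I(Z; Y)
```

Intuitively, a larger β favors retaining more task-relevant information at the price of a higher bitrate, while a smaller β yields more compact features; the temporal entropy model described above further lowers H(Z) in practice by conditioning the feature code on the previous frame's features as side information.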
