Paper Title
HighlightNet: Highlighting Low-Light Potential Features for Real-Time UAV Tracking
Paper Authors
Paper Abstract
Low-light environments have posed a formidable challenge for robust unmanned aerial vehicle (UAV) tracking, even with state-of-the-art (SOTA) trackers, since the potential image features are hard to extract under adverse light conditions. Besides, due to the low visibility, accurate online selection of the object also becomes extremely difficult for human monitors initializing UAV tracking in ground control stations. To solve these problems, this work proposes a novel enhancer, i.e., HighlightNet, to light up potential objects for both human operators and UAV trackers. By employing a Transformer, HighlightNet can adjust enhancement parameters according to global features and is thus adaptive to illumination variation. A pixel-level range mask is introduced to make HighlightNet focus more on enhancing the tracking object and regions without light sources. Furthermore, a soft truncation mechanism is built to prevent background noise from being mistaken for crucial features. Evaluations on image enhancement benchmarks demonstrate that HighlightNet has advantages in facilitating human perception. Experiments on the public UAVDark135 benchmark show that HighlightNet is more suitable for UAV tracking tasks than other SOTA low-light enhancers. In addition, real-world tests on a typical UAV platform verify HighlightNet's practicability and efficiency in nighttime aerial tracking-related applications. The code and demo videos are available at https://github.com/vision4robotics/HighlightNet.
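The sketch below is a minimal, hypothetical PyTorch illustration of the three ingredients the abstract names: a Transformer that predicts enhancement parameters from global features, a pixel-level range mask that concentrates enhancement on dark regions, and a soft truncation at the output. The module name ToyLowLightEnhancer and every layer choice are assumptions made for illustration only, not the authors' architecture; refer to the linked repository for the actual implementation.

# Hypothetical sketch of the pipeline described in the abstract; not the
# authors' code. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ToyLowLightEnhancer(nn.Module):
    def __init__(self, dim=32, num_heads=4, num_iters=4):
        super().__init__()
        self.num_iters = num_iters
        # Shallow convolutional stem turns the image into patch tokens.
        self.stem = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        # Transformer encoder layer aggregates global illumination context.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, dim_feedforward=2 * dim,
            batch_first=True)
        # Predict one curve-adjustment parameter per iteration.
        self.param_head = nn.Linear(dim, num_iters)

    def forward(self, x):
        # x: (B, 3, H, W) low-light frame with values in [0, 1].
        tokens = self.stem(x).flatten(2).transpose(1, 2)          # (B, N, dim)
        tokens = self.encoder(tokens)
        alphas = torch.tanh(self.param_head(tokens.mean(dim=1)))  # (B, num_iters)

        # Pixel-level range mask: dark pixels get stronger enhancement,
        # already-bright regions (e.g., light sources) are left mostly alone.
        luminance = x.mean(dim=1, keepdim=True)                   # (B, 1, H, W)
        mask = (1.0 - luminance).clamp(0.0, 1.0)

        # Iterative curve adjustment modulated by the mask.
        out = x
        for i in range(self.num_iters):
            a = alphas[:, i].view(-1, 1, 1, 1)
            out = out + a * mask * out * (1.0 - out)

        # Soft truncation: squash values smoothly instead of hard clipping,
        # so amplified background noise is not mistaken for crucial features.
        return torch.sigmoid(4.0 * (out - 0.5))


if __name__ == "__main__":
    model = ToyLowLightEnhancer()
    dummy = torch.rand(1, 3, 256, 256) * 0.2   # simulated low-light input
    enhanced = model(dummy)
    print(enhanced.shape)                      # torch.Size([1, 3, 256, 256])

In this toy version the Transformer output controls global enhancement strength while the mask supplies per-pixel spatial selectivity, which mirrors the division of labor the abstract describes; the real network's parameterization and truncation function may differ substantially.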