Paper Title

PointTrack++ for Effective Online Multi-Object Tracking and Segmentation

Authors

Zhenbo Xu, Wei Zhang, Xiao Tan, Wei Yang, Xiangbo Su, Yuchen Yuan, Hongwu Zhang, Shilei Wen, Errui Ding, Liusheng Huang

Abstract


Multiple-object tracking and segmentation (MOTS) is a novel computer vision task that aims to jointly perform multiple object tracking (MOT) and instance segmentation. In this work, we present PointTrack++, an effective online framework for MOTS that significantly extends our recently proposed PointTrack framework. To begin with, PointTrack adopts an efficient one-stage framework for instance segmentation, and learns instance embeddings by converting compact image representations into unordered 2D point clouds. Compared with PointTrack, our proposed PointTrack++ offers three major improvements. First, in the instance segmentation stage, we adopt a semantic segmentation decoder trained with focal loss to improve the quality of instance selection. Second, to further boost segmentation performance, we propose a data augmentation strategy that copies and pastes instances into training images. Finally, we introduce a better training strategy in the instance association stage to improve the distinguishability of the learned instance embeddings. The resulting framework achieves state-of-the-art performance on the 5th BMTT MOTChallenge.
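To illustrate the idea of converting a compact image region into an unordered 2D point cloud, the sketch below samples foreground pixels from a binary instance mask and normalizes them to the mask's bounding box. This is a simplified illustration, not the authors' implementation; the function name, sample count, and normalization scheme are assumptions, and the real PointTrack pipeline also attaches color, offset, and category features to each point.

```python
import numpy as np

def mask_to_point_cloud(mask, n_points=128, rng=None):
    """Sample an unordered 2D point cloud from a binary instance mask.

    A sketch of how a PointTrack-style method can turn a compact image
    representation (an instance mask) into point-cloud input for an
    embedding network. Points are normalized to [0, 1] within the
    bounding box of the sampled pixels.
    """
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)  # coordinates of foreground pixels
    # sample with replacement only if the mask has too few pixels
    idx = rng.choice(len(ys), size=n_points, replace=len(ys) < n_points)
    pts = np.stack([xs[idx], ys[idx]], axis=1).astype(float)
    mins = pts.min(axis=0)
    span = np.maximum(pts.max(axis=0) - mins, 1e-6)  # avoid divide-by-zero
    return (pts - mins) / span  # shape (n_points, 2), unordered
```

Because the output is an unordered set, the downstream embedding network must be permutation-invariant (e.g., pointwise MLPs followed by max pooling, as in PointNet-style architectures).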
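The first improvement trains the semantic segmentation decoder with focal loss, which down-weights well-classified pixels so training focuses on hard examples. A minimal NumPy sketch of the standard binary focal loss (Lin et al., 2017) is shown below; the abstract does not specify the hyperparameters, so the common defaults gamma=2 and alpha=0.25 are assumptions.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class, per pixel.
    y: ground-truth label in {0, 1}.
    The (1 - p_t)^gamma factor shrinks the loss of easy, confidently
    correct predictions, concentrating gradient on hard pixels.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)          # numerical stability
    p_t = np.where(y == 1, p, 1 - p)        # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```

With gamma=2, a pixel predicted at p=0.9 for its true class contributes roughly three orders of magnitude less loss than one predicted at p=0.1, which is why focal loss helps when background pixels vastly outnumber instance pixels.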
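The second improvement, copy-and-paste augmentation, can be sketched as pasting a masked instance crop into another training image. This is a hypothetical simplification: a real pipeline would also update the instance and segmentation annotations, choose paste locations, and handle occlusion ordering, none of which the abstract details.

```python
import numpy as np

def copy_paste(image, instance, mask, top, left):
    """Paste an instance crop onto an image at (top, left).

    image:    H x W x 3 training image.
    instance: h x w x 3 crop of a source instance.
    mask:     h x w boolean mask of the instance's pixels in the crop.
    Only pixels where mask is True are overwritten, so the pasted
    object keeps its silhouette instead of its rectangular crop.
    """
    out = image.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]  # view into the copy
    region[mask] = instance[mask]             # overwrite masked pixels
    return out
```

Pasting extra instances increases the density and diversity of objects per image, which is a cheap way to expose the segmentation network to more crowded scenes.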
