Title
Angular Triplet Loss-based Camera Network for ReID
Authors
Abstract
Person re-identification (ReID) is a challenging cross-camera retrieval task to identify pedestrians. Many complex network structures have been proposed recently, and many of them concentrate on multi-branch features to achieve high performance. However, they are too heavyweight to deploy in real-world applications. Additionally, pedestrian images are often captured by different surveillance cameras, so varied lighting, perspectives, and resolutions result in inevitable multi-camera domain gaps for ReID. To address these issues, this paper proposes ATCN, a simple but effective angular triplet loss-based camera network, which is able to achieve compelling performance with only global features. In ATCN, a novel angular distance is introduced to learn a more discriminative feature representation in the embedding space. Meanwhile, a lightweight camera network is designed to transfer global features into more discriminative features. ATCN is designed to be simple and flexible, so it can be easily deployed in practice. Experimental results on various benchmark datasets show that ATCN outperforms many SOTA approaches.
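The core idea of a triplet loss over an angular distance can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the margin value and the use of the arccosine of cosine similarity as the angular distance are assumptions made here for clarity.

```python
import math

def angular_distance(a, b):
    # Angle (in radians) between two embedding vectors,
    # i.e. arccos of their cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    cos_sim = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.acos(cos_sim)

def angular_triplet_loss(anchor, positive, negative, margin=0.1):
    # Hinge-style triplet loss on angular distances: the anchor-positive
    # angle should be smaller than the anchor-negative angle by at least
    # `margin` radians (margin value is illustrative).
    d_ap = angular_distance(anchor, positive)
    d_an = angular_distance(anchor, negative)
    return max(0.0, d_ap - d_an + margin)
```

Because the angle ignores vector magnitude, this distance depends only on the direction of the embeddings, which is one common motivation for angular formulations over the plain Euclidean triplet loss.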