Paper Title

Multi-Domain Joint Training for Person Re-Identification

Authors

Lu Yang, Lingqiao Liu, Yunlong Wang, Peng Wang, Yanning Zhang

Abstract


Deep learning-based person Re-IDentification (ReID) often requires a large amount of training data to achieve good performance. It therefore seems that collecting more training data from diverse environments should improve ReID performance. This paper re-examines this common belief and makes a somewhat surprising observation: using more samples, i.e., training with samples from multiple datasets, does not necessarily lead to better performance with popular ReID models. In some cases, training with more samples may even hurt performance when the evaluation is carried out on one of those datasets. We postulate that this phenomenon is due to the incapability of the standard network to adapt to diverse environments. To overcome this issue, we propose an approach called Domain-Camera-Sample Dynamic network (DCSD), whose parameters can adapt to various factors. Specifically, we consider internal domain-related factors that can be identified from the input features, as well as external domain-related factors, such as domain information or camera information. Our discovery is that training with such an adaptive model can better benefit from more training samples. Experimental results show that our DCSD can greatly boost performance (by up to 12.3%) when jointly trained on multiple datasets.
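The abstract does not give architectural details of DCSD, but a minimal sketch can illustrate the general idea of a layer whose parameters adapt to external factors (domain and camera IDs) and to an internal factor derived from the input feature itself. All names and the specific parameterization below are assumptions for illustration, not the authors' actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

class DynamicLinear:
    """Hypothetical dynamically-parameterized linear layer.

    Sketch only: mixes a shared base weight with per-domain and
    per-camera weight offsets (external factors), then gates the
    output with a signal computed from the input feature itself
    (internal factor). The real DCSD parameterization may differ.
    """

    def __init__(self, in_dim, out_dim, num_domains, num_cameras):
        self.base = rng.normal(0.0, 0.1, (in_dim, out_dim))
        # one weight offset per external factor value (assumption)
        self.domain_delta = rng.normal(0.0, 0.01, (num_domains, in_dim, out_dim))
        self.camera_delta = rng.normal(0.0, 0.01, (num_cameras, in_dim, out_dim))
        # internal factor: gate parameters applied to the input feature
        self.gate_w = rng.normal(0.0, 0.1, (in_dim, out_dim))

    def forward(self, x, domain_id, camera_id):
        # adapt the weights using the external factors
        w = self.base + self.domain_delta[domain_id] + self.camera_delta[camera_id]
        # internal factor: per-output sigmoid gate from the input feature
        gate = 1.0 / (1.0 + np.exp(-(x @ self.gate_w)))
        return (x @ w) * gate

# usage: the same input produces different outputs under different domains
layer = DynamicLinear(in_dim=8, out_dim=4, num_domains=3, num_cameras=6)
x = rng.normal(size=8)
y_domain0 = layer.forward(x, domain_id=0, camera_id=0)
y_domain1 = layer.forward(x, domain_id=1, camera_id=0)
```

Under this reading, a standard (static) network corresponds to all offsets being zero and the gate being constant, which is one way to see why a static model would struggle to fit several environments at once.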
