Paper Title
SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
Paper Authors
Paper Abstract
This paper studies the problem of predicting the future trajectories of people in unseen cameras of novel scenarios and views. We approach this problem through a real-data-free setting in which the model is trained only on 3D simulation data and applied out-of-the-box to a wide variety of real cameras. We propose a novel approach to learn robust representations by augmenting the simulation training data so that the representations generalize better to unseen real-world test data. The key idea is to mix the feature of the hardest camera view with the adversarial feature of the original view. We refer to our method as SimAug. We show that SimAug achieves promising results on three real-world benchmarks using zero real training data, and state-of-the-art performance on the Stanford Drone and VIRAT/ActEV datasets when using in-domain training data.
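The key idea above (mixing the hardest camera view's feature with an adversarial feature of the original view) can be illustrated with a short sketch. This is a minimal illustration only, assuming an FGSM-style perturbation in feature space and a mixup-style convex combination; all names here (`simaug_step`, `encoder`, `head`, `epsilon`, `alpha`) and the toy loss and shapes are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def simaug_step(encoder, head, views, target, epsilon=0.1, alpha=0.2):
    """One SimAug-style training step on a multi-view simulated example.

    views  : (num_views, input_dim) tensor; views[0] is the original view.
    target : (2,) ground-truth future location (toy stand-in).
    """
    feats = encoder(views)  # (num_views, feat_dim)

    # 1) Adversarial feature of the original view: FGSM-style step in
    #    feature space along the gradient sign of the prediction loss.
    f0 = feats[0].detach().requires_grad_(True)
    grad = torch.autograd.grad(F.mse_loss(head(f0), target), f0)[0]
    f_adv = (f0 + epsilon * grad.sign()).detach()

    # 2) Hardest view: the camera view whose feature yields the highest loss.
    with torch.no_grad():
        losses = torch.stack([F.mse_loss(head(f), target) for f in feats])
    hardest = losses.argmax()

    # 3) Mix the hardest view's feature with the adversarial feature
    #    (mixup-style convex combination) and train on the mixture.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    f_mix = lam * feats[hardest] + (1.0 - lam) * f_adv
    return F.mse_loss(head(f_mix), target)

# Toy usage with stand-in modules; shapes are illustrative only.
encoder = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU())
head = torch.nn.Linear(32, 2)
views = torch.randn(4, 8)   # 4 simulated camera views of one example
target = torch.randn(2)     # next-step (x, y)
simaug_step(encoder, head, views, target).backward()
```

One design point worth noting: selecting the highest-loss view makes the augmentation adaptive, so each step trains on the camera angle the model currently handles worst, which is consistent with the abstract's goal of generalizing to unseen real cameras.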