Paper Title


VisionNet: A Drivable-space-based Interactive Motion Prediction Network for Autonomous Driving

Authors

Yanliang Zhu, Deheng Qian, Dongchun Ren, Huaxia Xia

Abstract


Comprehension of the surrounding traffic situation is essential to the driving safety of autonomous vehicles. Although this task has recently been investigated by many studies, it remains hard to address well because of the collective influence among agents in complex scenarios. Existing approaches model interactions through the spatial relations between the target obstacle and its neighbors. However, they oversimplify the challenge, since the training of the interaction module lacks effective supervision; as a result, these models are far from satisfactory. More intuitively, we reformulate the problem as computing interaction-aware drivable spaces and propose the CNN-based VisionNet for trajectory prediction. VisionNet accepts a sequence of motion states, i.e., locations, velocities, and accelerations, and estimates the future drivable spaces. The reified interactions significantly improve the interpretability of VisionNet and refine its predictions. To further improve performance, we propose an interactive loss to guide the generation of the drivable spaces. Experiments on multiple public datasets demonstrate the effectiveness of the proposed VisionNet.
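The abstract describes VisionNet's input as a sequence of motion states (location, velocity, acceleration) and its output as a spatial drivable-space estimate. As a rough illustration of how such motion states could be rasterized into a grid suitable for a CNN, here is a minimal sketch; the encoding (grid size, channel layout, coordinate range) is a hypothetical assumption, not the paper's actual preprocessing:

```python
import numpy as np

def encode_motion_states(states, grid=64, extent=32.0):
    """Rasterize a sequence of (x, y, vx, vy, ax, ay) motion states into a
    3-channel grid: occupancy, speed magnitude, acceleration magnitude.
    Hypothetical encoding for illustration only; the paper's exact input
    representation may differ.
    """
    canvas = np.zeros((3, grid, grid), dtype=np.float32)
    for (x, y, vx, vy, ax, ay) in states:
        # Map metric coordinates in [-extent/2, extent/2) to grid cells.
        i = int((x / extent + 0.5) * grid)
        j = int((y / extent + 0.5) * grid)
        if 0 <= i < grid and 0 <= j < grid:
            canvas[0, i, j] = 1.0                 # occupancy
            canvas[1, i, j] = np.hypot(vx, vy)    # speed
            canvas[2, i, j] = np.hypot(ax, ay)    # acceleration
    return canvas
```

A CNN would then consume such grids (stacked over the observed time steps, including neighboring obstacles) and regress a per-cell score map of the future drivable space.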
