Paper Title

Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

Authors

Jin Sun, Hadar Averbuch-Elor, Qianqian Wang, Noah Snavely

Abstract


Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis. Yet learning a computational model for this purpose is challenging due to semantic ambiguity and a lack of labeled data: current datasets only tell you where people are, not where they could be. We tackle this problem by leveraging information from existing datasets, without additional labeling. We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints. However, this augmented data is still sparse. We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss. Using this strategy, we demonstrate a model that learns to predict a walkability map from a single image. We evaluate our model on the Waymo and Cityscapes datasets, demonstrating superior performance compared to baselines and state-of-the-art models.
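The "hidden footprints" idea of propagating person observations between images can be sketched as a simple pinhole reprojection: take a person's 3D ground-contact point recovered in one frame and project it into another camera's view, yielding a walkability label in an image where no person appears. This is an illustrative sketch, not the paper's implementation; the world-to-camera rotation `R`, translation `t`, and intrinsics `K` are assumed to be known (e.g. from the dataset's sensor calibration).

```python
import numpy as np

def propagate_footprint(p_world, R, t, K):
    """Project a 3D footprint (world coordinates) into another camera.

    p_world: (3,) ground-contact point of an observed person.
    R, t:    world-to-camera rotation (3x3) and translation (3,).
    K:       3x3 camera intrinsics matrix.
    Returns the (u, v) pixel location of the propagated footprint,
    or None if the point lies behind the camera.
    """
    p_cam = R @ p_world + t          # transform into the target camera frame
    if p_cam[2] <= 0:                # behind the camera: not visible
        return None
    uvw = K @ p_cam                  # pinhole projection
    return uvw[:2] / uvw[2]          # perspective divide -> pixel coords
```

With an identity pose and intrinsics `f=500`, `(cx, cy)=(320, 240)`, a point 2 m straight ahead projects to the principal point `(320, 240)`.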
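The class-balanced classification loss mentioned above can be illustrated with a minimal sketch: with sparse labels, footprint pixels are vastly outnumbered by background, so positives and negatives are averaged separately and weighted equally. This assumes binary per-pixel labels and is only a simplified stand-in for the paper's objective; the contextual adversarial term is omitted here.

```python
import numpy as np

def class_balanced_bce(pred, label, eps=1e-7):
    """Class-balanced binary cross-entropy for sparse walkability labels.

    pred:  predicted walkability probabilities in [0, 1], shape (H, W).
    label: 1 where a (hidden) footprint was observed, 0 elsewhere.
    Positive and negative pixels each contribute half of the loss,
    so the rare footprint pixels are not swamped by background.
    """
    pred = np.clip(pred, eps, 1.0 - eps)      # avoid log(0)
    pos = label == 1
    neg = ~pos
    pos_loss = -np.log(pred[pos]).mean() if pos.any() else 0.0
    neg_loss = -np.log(1.0 - pred[neg]).mean() if neg.any() else 0.0
    return 0.5 * (pos_loss + neg_loss)
```

Without the balancing, a model predicting "not walkable" everywhere would already achieve a near-zero loss on such sparse labels; the equal weighting removes that degenerate solution.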
