Paper Title

Pit30M: A Benchmark for Global Localization in the Age of Self-Driving Cars

Authors

Martinez, Julieta, Doubov, Sasha, Fan, Jack, Bârsan, Ioan Andrei, Wang, Shenlong, Máttyus, Gellért, Urtasun, Raquel

Abstract

We are interested in understanding whether retrieval-based localization approaches are good enough in the context of self-driving vehicles. Towards this goal, we introduce Pit30M, a new image and LiDAR dataset with over 30 million frames, which is 10 to 100 times larger than those used in previous work. Pit30M is captured under diverse conditions (i.e., season, weather, time of the day, traffic), and provides accurate localization ground truth. We also automatically annotate our dataset with historical weather and astronomical data, as well as with image and LiDAR semantic segmentation as a proxy measure for occlusion. We benchmark multiple existing methods for image and LiDAR retrieval and, in the process, introduce a simple, yet effective convolutional network-based LiDAR retrieval method that is competitive with the state of the art. Our work provides, for the first time, a benchmark for sub-metre retrieval-based localization at city scale. The dataset, its Python SDK, as well as more information about the sensors, calibration, and metadata, are available on the project website: https://pit30m.github.io/
