Paper Title

RELLIS-3D Dataset: Data, Benchmarks and Analysis

Authors

Peng Jiang, Philip Osteen, Maggie Wigness, Srikanth Saripalli

Abstract

Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly so in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate the current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D is available at https://github.com/unmannedlab/RELLIS-3D
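For readers who want to experiment with the LiDAR annotations, the minimal sketch below shows one way to read a single scan and its per-point labels with NumPy. It assumes the scans follow a SemanticKITTI-style binary layout (flat float32 x, y, z, intensity values with a companion .label file of one uint32 id per point); the file paths and directory names shown are hypothetical, so consult the repository's documentation for the actual layout.

```python
import numpy as np

# Hypothetical file paths; the actual RELLIS-3D directory layout may differ.
scan_path = "00000/os1_cloud_node_kitti_bin/000000.bin"
label_path = "00000/os1_cloud_node_semantickitti_label_id/000000.label"

# Assumption: each scan is a flat float32 array of (x, y, z, intensity)
# per point, as in the SemanticKITTI binary format.
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

# Assumption: one uint32 label per point; the lower 16 bits hold the
# semantic class id and the upper 16 bits an instance id, if any.
labels = np.fromfile(label_path, dtype=np.uint32)
semantic_ids = labels & 0xFFFF

print(points.shape, semantic_ids.shape)  # e.g. (N, 4) and (N,)
```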
