Paper Title

HPERL: 3D Human Pose Estimation from RGB and LiDAR

Paper Authors

Michael Fürst, Shriya T. P. Gupta, René Schuster, Oliver Wasenmüller, Didier Stricker

Paper Abstract

In-the-wild human pose estimation has a huge potential for various fields, ranging from animation and action recognition to intention recognition and prediction for autonomous driving. The current state-of-the-art is focused only on RGB and RGB-D approaches for predicting the 3D human pose. However, not using precise LiDAR depth information limits the performance and leads to very inaccurate absolute pose estimation. With LiDAR sensors becoming more affordable and common on robots and autonomous vehicle setups, we propose an end-to-end architecture using RGB and LiDAR to predict the absolute 3D human pose with unprecedented precision. Additionally, we introduce a weakly-supervised approach to generate 3D predictions using 2D pose annotations from PedX [1]. This allows for many new opportunities in the field of 3D human pose estimation.
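The abstract mentions weak supervision of 3D predictions with only 2D pose annotations (PedX). A common way to realize this, not detailed here and not necessarily the authors' exact formulation, is to reproject the predicted absolute 3D joints into the image and penalize the distance to the annotated 2D keypoints, while the LiDAR input anchors the absolute depth. The following is a minimal sketch under those assumptions; the function names, tensor shapes, pinhole projection model, and loss choice are illustrative only.

```python
# Hedged sketch of a weakly-supervised 2D reprojection loss (illustrative, not the
# authors' implementation). Assumes predicted 3D joints in camera coordinates and
# 2D keypoint annotations such as those provided by PedX.
import torch


def project_to_image(joints_3d: torch.Tensor, intrinsics: torch.Tensor) -> torch.Tensor:
    """Project absolute 3D joints (B, J, 3) in camera coordinates to 2D pixels (B, J, 2)
    using a pinhole camera model with intrinsic matrices K of shape (B, 3, 3)."""
    # Homogeneous projection p = K @ X for every joint, then perspective divide by depth.
    proj = torch.einsum("bij,bkj->bki", intrinsics, joints_3d)  # (B, J, 3)
    return proj[..., :2] / proj[..., 2:3].clamp(min=1e-6)


def weak_2d_reprojection_loss(
    pred_joints_3d: torch.Tensor,   # (B, J, 3) predicted absolute 3D joints
    gt_joints_2d: torch.Tensor,     # (B, J, 2) annotated 2D keypoints
    visibility: torch.Tensor,       # (B, J) 1 where a keypoint is annotated/visible
    intrinsics: torch.Tensor,       # (B, 3, 3) camera intrinsics
) -> torch.Tensor:
    """Penalize the pixel distance between reprojected 3D predictions and 2D labels.
    The absolute scale is expected to come from the LiDAR branch of the network."""
    reprojected = project_to_image(pred_joints_3d, intrinsics)
    per_joint = torch.norm(reprojected - gt_joints_2d, dim=-1)  # (B, J)
    return (per_joint * visibility).sum() / visibility.sum().clamp(min=1.0)
```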
