Paper Title
LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds
Paper Authors
Abstract
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications. We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation. Our dataset also includes ground-truth human motions acquired by an IMU system and synchronized RGB images. We further present a strong baseline method, LiDARCap, for LiDAR point cloud human motion capture. Specifically, we first utilize PointNet++ to encode point features and then employ an inverse kinematics solver and an SMPL optimizer to regress the pose by hierarchically aggregating the temporally encoded features. Quantitative and qualitative experiments show that our method outperforms techniques based only on RGB images. Ablation experiments demonstrate that our dataset is challenging and worthy of further research. Finally, experiments on the KITTI Dataset and the Waymo Open Dataset show that our method generalizes to different LiDAR sensor settings.
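The abstract's pipeline (per-frame point-feature encoding, temporal aggregation, pose regression) can be illustrated with a minimal sketch. This is not the paper's implementation: the shared-MLP-plus-max-pool encoder is a stand-in for PointNet++, the causal running mean is a placeholder for the temporal encoder, and all layer sizes, weights, and the direct linear regression to 72 axis-angle SMPL pose parameters (24 joints x 3) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_feature(points, w1, w2):
    """points: (N, 3) -> global feature (C,).

    A PointNet-style encoder: a shared per-point MLP followed by a
    symmetric max-pool over points (simplified stand-in for PointNet++).
    """
    h = np.maximum(points @ w1, 0.0)   # shared per-point layer + ReLU
    h = np.maximum(h @ w2, 0.0)
    return h.max(axis=0)               # order-invariant pooling

def temporal_aggregate(frame_feats):
    """frame_feats: (T, C) -> (T, C).

    Causal running mean as a placeholder for the temporal encoder
    (the actual method's temporal module is not reproduced here).
    """
    csum = np.cumsum(frame_feats, axis=0)
    t = np.arange(1, frame_feats.shape[0] + 1)[:, None]
    return csum / t

def regress_pose(feats, w_out):
    """feats: (T, C) -> (T, 72) axis-angle SMPL pose parameters."""
    return feats @ w_out

# Toy sequence: T frames of N LiDAR points each, feature width C.
T, N, C = 8, 256, 64
w1 = rng.normal(scale=0.1, size=(3, 32))
w2 = rng.normal(scale=0.1, size=(32, C))
w_out = rng.normal(scale=0.1, size=(C, 72))

clouds = rng.normal(size=(T, N, 3))
feats = np.stack([frame_feature(p, w1, w2) for p in clouds])
poses = regress_pose(temporal_aggregate(feats), w_out)
print(poses.shape)  # (8, 72): one 72-d pose vector per frame
```

The key design point mirrored here is the symmetric pooling, which makes the per-frame feature invariant to point ordering, as required for raw LiDAR point clouds.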