Paper title
Efficient divide-and-conquer registration of UAV and ground LiDAR point clouds through canopy shape context
Paper authors
Paper abstract
Registration of unmanned aerial vehicle laser scanning (ULS) and ground light detection and ranging (LiDAR) point clouds in forests is critical for creating a detailed representation of forest structure and for accurate inversion of forest parameters. However, forest occlusion poses challenges for marker-based registration methods, and some marker-free automated registration methods have low efficiency due to the object (e.g., tree, crown) segmentation process. Therefore, we adopt a divide-and-conquer strategy and propose an automated and efficient method to register ULS and ground LiDAR point clouds in forests. Registration involves coarse alignment and fine registration, where the coarse alignment of the point clouds is divided into vertical and horizontal alignment. Vertical alignment is achieved by ground alignment, which exploits the transformation relationship between the normal vectors of the ground point cloud and the horizontal plane, and horizontal alignment is achieved by canopy projection image matching. During image matching, vegetation points are first distinguished by a ground filtering algorithm and then projected onto the horizontal plane to obtain two binary images. To match the two images, a matching strategy based on canopy shape context features is used, where the features are described by a two-point congruent set and canopy overlap. Finally, we achieve coarse alignment of the ULS and ground LiDAR datasets by combining the results of ground alignment and image matching, and then complete fine registration. The effectiveness, accuracy, and efficiency of the proposed method are demonstrated with field measurements of forest plots. Experimental results show that the ULS and ground LiDAR data in different plots are registered with horizontal alignment errors of less than 0.02 m, and the average runtime of the proposed method is less than 1 s.
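The vertical (ground) alignment step described in the abstract can be sketched as follows: fit a plane to the ground-classified points, estimate its normal vector, and rotate that normal onto the vertical axis so the ground becomes horizontal. This is a minimal illustrative sketch, not the authors' implementation; the function name `vertical_alignment`, the PCA-based plane fit, and the Rodrigues rotation formula are assumptions introduced here for illustration.

```python
import numpy as np

def vertical_alignment(ground_pts):
    """Return a 3x3 rotation that levels the fitted ground plane.

    Illustrative sketch only (hypothetical helper, not the paper's code):
    `ground_pts` is an (N, 3) array of ground-classified points. The plane
    normal is estimated by PCA (smallest singular direction of the centered
    points) and rotated onto +z via the Rodrigues formula.
    """
    centered = ground_pts - ground_pts.mean(axis=0)
    # Last right-singular vector = direction of least variance = plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    if n[2] < 0:                       # orient the normal upward
        n = -n
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                 # rotation axis (unnormalized)
    s = np.linalg.norm(v)              # sine of the rotation angle
    c = np.dot(n, z)                   # cosine of the rotation angle
    if s < 1e-12:                      # ground is already horizontal
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Rodrigues rotation formula: R maps the ground normal n onto z.
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)
```

Applying the returned rotation as `pts @ R.T` levels the cloud; in the paper's pipeline this would be followed by the canopy projection and image-matching stage for horizontal alignment.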