Paper Title


SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence

Authors

Deng Yu, Lei Li, Youyi Zheng, Manfred Lau, Yi-Zhe Song, Chiew-Lan Tai, Hongbo Fu

Abstract


In this paper, we study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object and predict as output the semantic correspondence among the sketches. This problem is challenging since the visual features of corresponding points at different views can be very different. To this end, we take a deep learning approach and learn a novel local sketch descriptor from data. We contribute a training dataset by generating the pixel-level correspondence for the multi-view line drawings synthesized from 3D shapes. To handle the sparsity and ambiguity of sketches, we design a novel multi-branch neural network that integrates a patch-based representation and a multi-scale strategy to learn the pixel-level correspondence among multi-view sketches. We demonstrate the effectiveness of our proposed approach with extensive experiments on hand-drawn sketches and multi-view line drawings rendered from multiple 3D shape datasets.
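The abstract describes a descriptor that combines a patch-based representation with a multi-scale strategy. The following is a minimal hand-crafted sketch of that idea, not the paper's learned multi-branch network: it extracts patches of several sizes around a query pixel, downsamples each to a common resolution, concatenates them into one descriptor vector, and matches descriptors by L2 distance. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def multiscale_descriptor(img, y, x, scales=(8, 16, 32), out=8):
    """Hand-crafted stand-in for the learned multi-branch descriptor:
    extract square patches of several sizes centered at (y, x),
    downsample each to out x out, and concatenate into one vector."""
    feats = []
    for s in scales:
        half = s // 2
        # pad so patches near the image border remain valid
        padded = np.pad(img, half, mode="constant")
        # the center (y, x) maps to (y + half, x + half) in the padded image
        patch = padded[y:y + 2 * half, x:x + 2 * half]
        step = max(1, patch.shape[0] // out)
        small = patch[::step, ::step][:out, :out]  # naive downsampling
        feats.append(small.ravel().astype(np.float32))
    return np.concatenate(feats)

def match(desc, candidates):
    """Return the index of the candidate descriptor closest in L2 distance."""
    dists = np.linalg.norm(np.stack(candidates) - desc, axis=1)
    return int(np.argmin(dists))
```

In the paper's setting the descriptor is learned from pixel-level correspondences between multi-view line drawings; here the concatenated raw patches merely illustrate how multiple scales give the descriptor both local detail and wider context, which helps disambiguate sparse sketch strokes.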
