Paper Title
Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images
Paper Authors
Paper Abstract
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses. While many previous works learn to hallucinate the shape directly from priors, we instead improve the shape quality by leveraging cross-view information with a graph convolution network. Rather than building a direct mapping function from images to 3D shape, our model learns to predict a series of deformations that iteratively refine a coarse shape. Inspired by traditional multiple-view geometry methods, our network samples the nearby area around the initial mesh's vertex locations and reasons about an optimal deformation using perceptual feature statistics built from multiple input images. Extensive experiments show that our model produces accurate 3D shapes that are not only visually plausible from the input perspectives, but also well aligned to arbitrary viewpoints. With the help of this physically driven architecture, our model also exhibits generalization capability across different semantic categories and numbers of input images. Model analysis experiments show that our model is robust to the quality of the initial mesh and to errors in camera pose, and can be combined with a differentiable renderer for test-time optimization.
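The core step the abstract describes can be sketched in code: sample candidate deformation hypotheses around each mesh vertex, project them into every input view, and pool per-view perceptual features into cross-view statistics that a network would then score. The sketch below is a minimal NumPy illustration under simplifying assumptions (uniform-ball hypothesis sampling, a generic 3x4 projection matrix, nearest-neighbor feature lookup); the paper's actual sampling scheme, bilinear feature sampling, and GCN-based scoring are not reproduced here.

```python
import numpy as np

def sample_hypotheses(vertices, num_hyp=8, radius=0.02, seed=0):
    # For each vertex, sample candidate deformation targets in a small
    # neighborhood (hypothetical uniform-cube sampling; the paper's exact
    # sampling scheme may differ).
    rng = np.random.default_rng(seed)
    offsets = rng.uniform(-radius, radius, size=(len(vertices), num_hyp, 3))
    return vertices[:, None, :] + offsets  # (V, H, 3)

def project(points, cam):
    # cam: assumed 3x4 projection matrix mapping homogeneous 3D points
    # to 2D image coordinates.
    homo = np.concatenate([points, np.ones((*points.shape[:-1], 1))], axis=-1)
    uvw = homo @ cam.T
    return uvw[..., :2] / uvw[..., 2:3]

def cross_view_stats(hyp_points, feature_maps, cams):
    # Look up each view's feature map at the projected hypothesis locations
    # (nearest-neighbor for brevity; the paper samples features bilinearly),
    # then pool statistics (mean, std) across views so the result is
    # invariant to the number of input images.
    feats = []
    for fmap, cam in zip(feature_maps, cams):
        uv = np.clip(np.round(project(hyp_points, cam)).astype(int),
                     0, np.array(fmap.shape[:2][::-1]) - 1)
        feats.append(fmap[uv[..., 1], uv[..., 0]])   # (V, H, C)
    feats = np.stack(feats)                          # (views, V, H, C)
    return np.concatenate([feats.mean(0), feats.std(0)], axis=-1)  # (V, H, 2C)
```

Pooling mean and standard deviation across views is what lets the same architecture accept an arbitrary number of input images, which is consistent with the generalization claim in the abstract.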