Paper Title

DepthGAN: GAN-based Depth Generation of Indoor Scenes from Semantic Layouts

Paper Authors

Li, Yidi; Wang, Yiqun; Lu, Zhengda; Xiao, Jun

Paper Abstract

Limited by computational efficiency and accuracy, generating complex 3D scenes remains a challenging problem for existing generation networks. In this work, we propose DepthGAN, a novel method that generates depth maps with only semantic layouts as input. First, we introduce a well-designed cascade of transformer blocks as our generator to capture the structural correlations in depth maps, striking a balance between global feature aggregation and local attention. Meanwhile, we propose a cross-attention fusion module to efficiently guide edge preservation in depth generation, which exploits additional appearance supervision information. Finally, we conduct extensive experiments on the perspective views of the Structured3D panorama dataset and demonstrate that our DepthGAN achieves superior performance in both quantitative results and visual quality on the depth generation task. Furthermore, 3D indoor scenes can be reconstructed from our generated depth maps with reasonable structure and spatial coherency.
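
The abstract names two architectural pieces: a cascaded transformer generator and a cross-attention fusion module that injects appearance cues to preserve depth edges. Below is a minimal, hypothetical PyTorch sketch of what such a cross-attention fusion block could look like; the class name `CrossAttentionFusion`, the feature dimensions, and the layer layout are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: a minimal cross-attention fusion
# block in PyTorch. Depth-branch tokens attend to appearance-branch tokens so
# that appearance cues can guide edge preservation in the generated depth map.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse depth features (queries) with appearance features (keys/values).

    All dimensions and layer choices below are illustrative assumptions.
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm_depth = nn.LayerNorm(dim)   # pre-norm for the query branch
        self.norm_app = nn.LayerNorm(dim)     # pre-norm for the key/value branch
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_ffn = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)
        )

    def forward(self, depth_tokens: torch.Tensor, app_tokens: torch.Tensor):
        # depth_tokens, app_tokens: (batch, num_tokens, dim)
        q = self.norm_depth(depth_tokens)
        kv = self.norm_app(app_tokens)
        attended, _ = self.attn(query=q, key=kv, value=kv)
        x = depth_tokens + attended          # residual: keep the depth stream
        x = x + self.ffn(self.norm_ffn(x))   # feed-forward refinement
        return x


if __name__ == "__main__":
    block = CrossAttentionFusion(dim=256, num_heads=8)
    depth_feat = torch.randn(2, 1024, 256)   # e.g. 32x32 patch tokens
    app_feat = torch.randn(2, 1024, 256)     # appearance (RGB) branch tokens
    print(block(depth_feat, app_feat).shape)  # torch.Size([2, 1024, 256])
```

In this sketch the depth-branch tokens act as queries and the appearance-branch tokens as keys/values, so appearance edges can steer refinement of the depth features; the paper's actual module may differ in structure and training details.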
