Paper Title

VCNet: A Robust Approach to Blind Image Inpainting

Authors

Yi Wang, Ying-Cong Chen, Xin Tao, Jiaya Jia

Abstract

Blind inpainting is the task of automatically completing visual content without specifying masks for the missing areas in an image. Previous works assume that missing region patterns are known, limiting their application scope. In this paper, we relax this assumption by defining a new blind inpainting setting, making a trained blind inpainting neural system robust against various unknown missing region patterns. Specifically, we propose a two-stage visual consistency network (VCN), which estimates where to fill (via masks) and generates what to fill. In this procedure, unavoidable mask prediction errors lead to severe artifacts in the subsequent repairing. To address this, our VCN first predicts semantically inconsistent regions, making mask prediction more tractable. It then repairs these estimated missing regions using a new spatial normalization, enabling VCN to be robust to mask prediction errors. In this way, semantically convincing and visually compelling content is generated. Extensive experiments show that our method is effective and robust in blind image inpainting, and our VCN allows for a wide spectrum of applications.
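The two-stage idea in the abstract (first predict a mask of semantically inconsistent regions, then repair using statistics that are shielded from mask errors) can be sketched in NumPy. This is a minimal illustration, not the paper's actual architecture: `predict_mask` and `spatially_normalize` are hypothetical helpers, and the normalization shown simply computes feature statistics from pixels the mask marks as valid, so that mispredicted pixels perturb the statistics less than plain instance normalization would.

```python
import numpy as np

def predict_mask(score_map, threshold=0.5):
    """Stage 1 sketch (hypothetical): binarize a per-pixel
    semantic-inconsistency score into a missing-region mask."""
    return (score_map > threshold).astype(np.float32)

def spatially_normalize(features, mask, eps=1e-5):
    """Stage 2 sketch (hypothetical): normalize (H, W, C) features using
    per-channel mean/variance taken only from pixels the predicted mask
    marks as valid (mask == 0), limiting the influence of mask errors."""
    valid = (1.0 - mask)[..., None]                 # (H, W, 1) validity weights
    n = valid.sum() + eps                            # number of valid pixels
    mean = (features * valid).sum(axis=(0, 1), keepdims=True) / n
    var = (((features - mean) ** 2) * valid).sum(axis=(0, 1), keepdims=True) / n
    return (features - mean) / np.sqrt(var + eps)
```

For example, a single corrupted pixel with an extreme value no longer skews the normalization statistics once the mask excludes it, which is the robustness property the abstract attributes to the spatial normalization stage.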
