Paper Title
CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation
Paper Authors
Paper Abstract
We present full-resolution correspondence learning for cross-domain images, which aids image translation. We adopt a hierarchical strategy that uses the correspondence from the coarse level to guide the fine levels. At each hierarchy, the correspondence can be computed efficiently via PatchMatch, which iteratively leverages the matchings from the neighborhood. Within each PatchMatch iteration, a ConvGRU module is employed to refine the current correspondence, considering not only the matchings of a larger context but also the historic estimates. The proposed CoCosNet v2, a GRU-assisted PatchMatch approach, is fully differentiable and highly efficient. When jointly trained with image translation, full-resolution semantic correspondence can be established in an unsupervised manner, which in turn facilitates exemplar-based image translation. Experiments on diverse translation tasks show that CoCosNet v2 performs considerably better than the state of the art at producing high-resolution images.
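To make the "GRU-assisted" refinement concrete, the following is a minimal PyTorch sketch (not the authors' released implementation) of a ConvGRU cell that carries a spatial hidden state across iterations and predicts a residual update to a dense correspondence (flow) field. The class names `ConvGRUCell` and `CorrespondenceRefiner`, the channel sizes, and the use of a residual flow head are illustrative assumptions; the candidate-propagation and scoring steps of PatchMatch are omitted for brevity.

```python
# Illustrative sketch only, not CoCosNet v2's official code.
# A ConvGRU cell refines a 2-channel correspondence (flow) field using
# local matching features; its hidden state persists across iterations,
# so later updates can take earlier estimates into account.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU: keeps a spatial hidden state across iterations."""

    def __init__(self, hidden_ch: int, input_ch: int, k: int = 3):
        super().__init__()
        pad = k // 2
        self.conv_z = nn.Conv2d(hidden_ch + input_ch, hidden_ch, k, padding=pad)
        self.conv_r = nn.Conv2d(hidden_ch + input_ch, hidden_ch, k, padding=pad)
        self.conv_q = nn.Conv2d(hidden_ch + input_ch, hidden_ch, k, padding=pad)

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.conv_z(hx))          # update gate
        r = torch.sigmoid(self.conv_r(hx))          # reset gate
        q = torch.tanh(self.conv_q(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * q                  # new hidden state


class CorrespondenceRefiner(nn.Module):
    """One refinement step: fuse the current flow and local matching evidence
    through the ConvGRU, then predict a residual flow update."""

    def __init__(self, feat_ch: int = 64, hidden_ch: int = 64):
        super().__init__()
        # GRU input: current 2-channel flow + matching features
        self.gru = ConvGRUCell(hidden_ch, input_ch=feat_ch + 2)
        self.flow_head = nn.Conv2d(hidden_ch, 2, 3, padding=1)

    def forward(self, hidden, flow, match_feat):
        x = torch.cat([flow, match_feat], dim=1)
        hidden = self.gru(hidden, x)
        return hidden, flow + self.flow_head(hidden)  # residual update


if __name__ == "__main__":
    B, C, H, W = 1, 64, 32, 32
    refiner = CorrespondenceRefiner(feat_ch=C, hidden_ch=C)
    hidden = torch.zeros(B, C, H, W)
    flow = torch.zeros(B, 2, H, W)            # correspondence offsets
    match_feat = torch.randn(B, C, H, W)      # e.g. local correlation features
    for _ in range(3):                        # a few PatchMatch-style iterations
        hidden, flow = refiner(hidden, flow, match_feat)
    print(flow.shape)  # torch.Size([1, 2, 32, 32])
```

In the full method, such a refinement step would be applied at each level of the coarse-to-fine hierarchy, with the coarser level's correspondence upsampled to initialize the finer one; here a single level is shown for clarity.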