Paper Title
Asymmetric Dual-Decoder U-Net for Joint Rain and Haze Removal
Paper Authors
Paper Abstract
This work studies the joint rain and haze removal problem. In real-life scenarios, rain and haze, two common and often co-occurring weather phenomena, can greatly degrade the clarity and quality of scene images, leading to a performance drop in visual applications such as autonomous driving. However, jointly removing rain and haze from scene images is ill-posed and challenging, since both the presence of rain and haze and the change of atmospheric light degrade the scene information. Current methods focus on removing the contamination and thus ignore restoring the scene information affected by the change of atmospheric light. We propose a novel deep neural network, named Asymmetric Dual-decoder U-Net (ADU-Net), to address the aforementioned challenge. ADU-Net produces both a contamination residual and a scene residual to efficiently remove the rain and haze while preserving the fidelity of the scene information. Extensive experiments show that our work outperforms existing state-of-the-art methods by a considerable margin on both synthetic and real-world benchmarks, including RainCityscapes, BID Rain, and SPA-Data. For instance, we improve the state-of-the-art PSNR by 2.26/4.57 dB on RainCityscapes/SPA-Data, respectively. Code will be made freely available to the research community.
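To make the dual-decoder idea concrete, below is a minimal NumPy sketch of the concept described in the abstract: one shared encoding followed by two decoder heads, one predicting a contamination (rain/haze) residual and one predicting a scene (atmospheric-light) residual. Everything here is an illustrative assumption, not the paper's actual architecture: the layers are per-pixel linear maps rather than convolutional U-Net blocks, and the combination rule (subtract contamination, add scene correction) is a guess at how the two residuals might be applied.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class DualDecoderSketch:
    """Toy stand-in for a dual-decoder network (hypothetical, not ADU-Net).

    A single shared "encoder" feeds two "decoder" heads that emit
    a contamination residual and a scene residual, mirroring the
    two-output design the abstract describes.
    """

    def __init__(self, channels=3, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        # 1x1-conv-style weights (per-pixel linear maps) keep the sketch tiny.
        self.enc = rng.standard_normal((hidden, channels)) * 0.1
        self.dec_contam = rng.standard_normal((channels, hidden)) * 0.1
        self.dec_scene = rng.standard_normal((channels, hidden)) * 0.1

    def forward(self, img):
        """img: (H, W, C) array in [0, 1]; returns (restored, contam, scene)."""
        feat = relu(img @ self.enc.T)          # shared per-pixel encoding
        contam = feat @ self.dec_contam.T      # rain + haze residual
        scene = feat @ self.dec_scene.T        # atmospheric-light residual
        # Assumed combination rule: remove contamination, restore scene light.
        restored = np.clip(img - contam + scene, 0.0, 1.0)
        return restored, contam, scene


# Usage: both residuals share the input image's spatial shape.
net = DualDecoderSketch()
rainy = np.random.default_rng(1).random((4, 4, 3))
restored, contam, scene = net.forward(rainy)
```

The design point the sketch illustrates is that the two heads are supervised for different purposes: the contamination head only has to explain the degradation, while the scene head corrects the illumination shift, so neither output has to model both effects at once.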