Paper Title

Adaptive deep learning framework for robust unsupervised underwater image enhancement

Authors

Alzayat Saleh, Marcus Sheaves, Dean Jerry, Mostafa Rahimi Azghadi

Abstract

One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and a statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework, which we call UDnet, is composed of a U-Net as a feature extractor and a PAdaIN module to encode uncertainty. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training with ground-truth images. The proposed model needs no manual human annotation, can learn from a limited amount of data, and achieves state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly available datasets. The results show that our proposed framework yields competitive performance compared to other state-of-the-art approaches on both quantitative and qualitative metrics. Code is available at https://github.com/alzayats/UDnet.
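The abstract describes PAdaIN as the module that encodes uncertainty: instead of fixed affine parameters, the per-channel scale and shift applied after instance normalization are sampled from a learned Gaussian. The sketch below is only an illustration of that idea, not the authors' implementation; the parameter shapes, the reparameterization form, and the function name `padain` are assumptions.

```python
import numpy as np

def padain(x, mu, logvar, rng=None, eps=1e-5):
    """Probabilistic AdaIN sketch (illustrative only).

    x: feature map of shape (C, H, W).
    mu, logvar: arrays of shape (2, C) parameterizing a Gaussian over the
    per-channel scale (row 0) and shift (row 1). Parameters are re-sampled
    on every call via the reparameterization trick, so repeated forward
    passes yield slightly different stylizations -- this sampling is what
    encodes uncertainty in the feature statistics.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Instance normalization: zero mean, unit variance per channel.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    # Sample affine parameters from N(mu, exp(logvar)).
    sample = mu + rng.standard_normal(mu.shape) * np.exp(0.5 * logvar)
    gamma = 1.0 + sample[0][:, None, None]  # sampled per-channel scale
    beta = sample[1][:, None, None]         # sampled per-channel shift
    return gamma * x_norm + beta
```

With `mu = 0` and a very small variance, the sampled parameters are near identity, and the output reduces to plain instance normalization; a cVAE would instead predict `mu` and `logvar` from the input.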
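The statistically guided multi-colour space stretch replaces a ground-truth target: the output is constrained to match statistics derived from the input itself. A minimal single-colour-space version of that idea is a percentile-based contrast stretch per channel; the percentile cut-offs and function name here are assumptions, and the paper applies the stretch across multiple colour spaces rather than RGB alone.

```python
import numpy as np

def statistical_stretch(img, low_pct=1.0, high_pct=99.0):
    """Statistically guided contrast stretch sketch (one colour space).

    img: float image of shape (H, W, C).
    For each channel, clips values at the channel's low/high percentiles
    and rescales to [0, 1]. Because the stretch limits come from the input
    image's own statistics, the result stays visually consistent with the
    input without requiring a reference (ground-truth) image.
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return out
```

A multi-colour-space variant would convert the image to, say, HSV or Lab, apply the same percentile stretch to the relevant channels there, and blend the results back in RGB.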
