Paper Title
Multi-channel Nuclear Norm Minus Frobenius Norm Minimization for Color Image Denoising
Paper Authors
Paper Abstract
Color image denoising is frequently encountered in various image processing and computer vision tasks. A traditional strategy is to convert the RGB image to a less correlated color space and denoise each channel of the new space separately. However, such a strategy cannot fully exploit the correlated information between channels and is inadequate for obtaining satisfactory results. To address this issue, this paper proposes a new multi-channel optimization model for color image denoising under the nuclear norm minus Frobenius norm minimization framework. Specifically, based on block matching, the color image is decomposed into overlapping RGB patches. For each patch, we stack its similar neighbors to form the corresponding patch matrix. The proposed model is performed on the patch matrix to recover its noise-free version. During the recovery process, a) a weight matrix is introduced to fully exploit the noise differences between channels; b) the singular values are shrunk adaptively without additionally assigning weights. With these, the proposed model can achieve promising results while remaining simple. To solve the proposed model, an accurate and effective algorithm is built on the alternating direction method of multipliers (ADMM) framework. The solution of each updating step can be expressed analytically in closed form. Rigorous theoretical analysis proves that the solution sequences generated by the proposed algorithm converge to their respective stationary points. Experimental results on both synthetic and real-noise datasets demonstrate that the proposed model outperforms state-of-the-art models.
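For illustration only, the sketch below shows one plausible instance of the core step named in the abstract: recovering a low-rank patch matrix (stacked similar RGB patches) by shrinking its singular values under a nuclear norm minus Frobenius norm penalty. The shrinkage rule is borrowed from the known proximal operator of the L1 minus alpha*L2 function applied to the singular values; the function name `nnm_fnm_shrink`, the parameters, and the omission of the channel-wise weight matrix and the full ADMM iteration are assumptions for this sketch, not the authors' exact formulation.

```python
# Minimal NumPy sketch, assuming a patch matrix Y whose columns are vectorized
# similar RGB patches. Not the authors' code; a simplified illustration of
# nuclear-norm-minus-Frobenius-norm singular-value shrinkage.
import numpy as np


def nnm_fnm_shrink(Y, lam, alpha=1.0):
    """Approximately solve
        min_X  ||X||_* - alpha * ||X||_F + 1/(2*lam) * ||Y - X||_F^2
    by applying an L1 - alpha*L2 proximal step to the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    z = np.maximum(s - lam, 0.0)          # soft-threshold the singular values
    nz = np.linalg.norm(z)
    if nz > 0:
        # Rescaling step of the L1 - alpha*L2 proximal operator (Lou & Yan style);
        # edge cases with all singular values below lam are simply mapped to zero.
        z = z * (nz + alpha * lam) / nz
    return (U * z) @ Vt                   # rebuild the denoised patch matrix


if __name__ == "__main__":
    # Toy usage: a synthetic low-rank "patch matrix" (e.g. 8x8x3 patches -> 192 rows,
    # 60 similar patches -> 60 columns) corrupted by Gaussian noise.
    rng = np.random.default_rng(0)
    clean = rng.standard_normal((192, 4)) @ rng.standard_normal((4, 60))
    noisy = clean + 0.5 * rng.standard_normal(clean.shape)
    denoised = nnm_fnm_shrink(noisy, lam=5.0)
    print("relative error:", np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
```

In the full method described by the abstract, this shrinkage would be one sub-step inside an ADMM loop, applied to every weighted patch matrix obtained by block matching, with the denoised patches aggregated back into the image.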