Paper Title
AdaInt: Learning Adaptive Intervals for 3D Lookup Tables on Real-time Image Enhancement
Paper Authors
Paper Abstract
The 3D Lookup Table (3D LUT) is a highly-efficient tool for real-time image enhancement tasks, which models a non-linear 3D color transform by sparsely sampling it into a discretized 3D lattice. Previous works have made efforts to learn image-adaptive output color values of LUTs for flexible enhancement but neglect the importance of sampling strategy. They adopt a sub-optimal uniform sampling point allocation, limiting the expressiveness of the learned LUTs since the (tri-)linear interpolation between uniform sampling points in the LUT transform might fail to model local non-linearities of the color transform. Focusing on this problem, we present AdaInt (Adaptive Intervals Learning), a novel mechanism to achieve a more flexible sampling point allocation by adaptively learning the non-uniform sampling intervals in the 3D color space. In this way, a 3D LUT can increase its capability by conducting dense sampling in color ranges requiring highly non-linear transforms and sparse sampling for near-linear transforms. The proposed AdaInt could be implemented as a compact and efficient plug-and-play module for a 3D LUT-based method. To enable the end-to-end learning of AdaInt, we design a novel differentiable operator called AiLUT-Transform (Adaptive Interval LUT Transform) to locate input colors in the non-uniform 3D LUT and provide gradients to the sampling intervals. Experiments demonstrate that methods equipped with AdaInt can achieve state-of-the-art performance on two public benchmark datasets with a negligible overhead increase. Our source code is available at https://github.com/ImCharlesY/AdaInt.
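To make the described lookup concrete, below is a minimal NumPy sketch of the forward pass of a non-uniform (adaptive-interval) 3D LUT transform: each input color is located in the learned non-uniform lattice via a binary search over the per-channel vertex coordinates, and the output is obtained by trilinear interpolation over the enclosing cell. This is only an illustration under assumed shapes and conventions, not the authors' differentiable CUDA operator; the function and parameter names (`ailut_transform_sketch`, `vertices`) are hypothetical.

```python
import numpy as np

def ailut_transform_sketch(img, lut, vertices):
    """Sketch of a non-uniform 3D LUT lookup (forward pass only).

    img      : (H, W, 3) float array with values in [0, 1]
    lut      : (N, N, N, 3) output colors stored at the lattice vertices,
               assumed indexed as lut[r, g, b]
    vertices : (3, N) monotonically increasing sampling coordinates per
               channel (e.g., the cumulative sum of learned intervals),
               with vertices[:, 0] == 0 and vertices[:, -1] == 1
    Returns the transformed image of shape (H, W, 3).
    """
    H, W, _ = img.shape
    N = lut.shape[0]

    idx = np.empty((H, W, 3), dtype=np.int64)
    frac = np.empty((H, W, 3), dtype=img.dtype)
    for c in range(3):
        v = vertices[c]  # (N,) sorted vertex coordinates for this channel
        # Binary search for the cell containing each input value.
        i = np.clip(np.searchsorted(v, img[..., c], side="right") - 1, 0, N - 2)
        idx[..., c] = i
        # Fractional position inside the (non-uniform) cell.
        frac[..., c] = (img[..., c] - v[i]) / (v[i + 1] - v[i])

    r, g, b = idx[..., 0], idx[..., 1], idx[..., 2]
    fr, fg, fb = frac[..., 0:1], frac[..., 1:2], frac[..., 2:3]

    def corner(dr, dg, db):
        # Gather LUT values at one of the 8 surrounding lattice vertices.
        return lut[r + dr, g + dg, b + db]  # (H, W, 3)

    # Standard trilinear interpolation; only the cell localization and
    # fractional offsets differ from the uniform-LUT case.
    return (corner(0, 0, 0) * (1 - fr) * (1 - fg) * (1 - fb)
            + corner(1, 0, 0) * fr * (1 - fg) * (1 - fb)
            + corner(0, 1, 0) * (1 - fr) * fg * (1 - fb)
            + corner(0, 0, 1) * (1 - fr) * (1 - fg) * fb
            + corner(1, 1, 0) * fr * fg * (1 - fb)
            + corner(1, 0, 1) * fr * (1 - fg) * fb
            + corner(0, 1, 1) * (1 - fr) * fg * fb
            + corner(1, 1, 1) * fr * fg * fb)
```

In the paper's setting the vertex coordinates are predicted per image and trained end-to-end, so the actual AiLUT-Transform additionally provides gradients with respect to the sampling intervals; the sketch above only shows how non-uniform sampling changes the lookup itself.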