Paper Title
Q-LIC: Quantizing Learned Image Compression with Channel Splitting
Paper Authors
Paper Abstract
Learned image compression (LIC) has reached coding gains comparable to traditional hand-crafted methods such as VVC intra. However, its large network complexity prohibits the use of LIC on resource-limited embedded systems. Network quantization is an efficient way to reduce the network burden. This paper presents a quantized LIC (QLIC) based on channel splitting. First, we observe that the influence of quantization error on the reconstruction error differs across channels. Second, we split the channels whose quantization has a larger influence on the reconstruction error. After the splitting, the dynamic range of these channels is reduced, so the quantization error can also be reduced. Finally, we prune several channels to keep the overall number of channels equal to the original. With this proposal, in the case of 8-bit quantization of the weights and activations of both the main and hyper paths, we reduce the BD-rate by 0.61%-4.74% compared with the previous QLIC. Besides, we achieve better coding gain than the state-of-the-art network quantization method when quantizing MS-SSIM models. Moreover, our proposal can be combined with other network quantization methods to further improve the coding gain. The moderate coding loss caused by quantization validates the feasibility of a future hardware implementation of QLIC.
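To illustrate the core idea, the following is a minimal numpy sketch (not the paper's implementation) of splitting a channel with a large dynamic range before uniform 8-bit quantization. The function names, the toy linear layer, and the choice of splitting the activation channel are illustrative assumptions; the sketch only shows why splitting is lossless in floating point while shrinking the per-tensor quantization scale.

```python
import numpy as np

def quantize(x, num_bits=8):
    # Symmetric uniform quantizer with a single per-tensor scale,
    # set by the largest magnitude in the tensor.
    scale = np.abs(x).max() / (2 ** (num_bits - 1) - 1)
    return np.round(x / scale) * scale

def split_activation_channel(W, x, idx):
    # Split channel `idx` of activation x into two channels that each
    # carry half the value, and duplicate the matching weight column,
    # so that W' @ x' == W @ x exactly in floating point. The dynamic
    # range of the split channel is halved, which halves the per-tensor
    # quantization step. (Q-LIC then prunes low-impact channels so the
    # overall channel count stays the same as the original.)
    x_new = np.append(x, x[idx] / 2.0)
    x_new[idx] /= 2.0
    W_new = np.concatenate([W, W[:, idx:idx + 1]], axis=1)
    return W_new, x_new

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))       # toy linear layer
x = rng.normal(size=8)
x[3] = 10.0                       # one channel dominates the dynamic range

ref = W @ x
err_plain = np.abs(W @ quantize(x) - ref).max()

W2, x2 = split_activation_channel(W, x, idx=3)
err_split = np.abs(W2 @ quantize(x2) - ref).max()

print("error without splitting:", err_plain)
print("error with splitting:   ", err_split)
```

Splitting itself changes nothing numerically (the split layer reproduces `W @ x` exactly); the benefit appears only after quantization, because the quantization scale is no longer dictated by the outlier channel.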