Title
Hierarchical Residual Attention Network for Single Image Super-Resolution
Authors
Abstract
Convolutional neural networks are the most successful models in single image super-resolution. Deeper networks, residual connections, and attention mechanisms have further improved their performance. However, these strategies often improve reconstruction quality at the expense of a considerable increase in computational cost. This paper introduces a new lightweight super-resolution model based on an efficient method for residual feature and attention aggregation. To make efficient use of the residual features, they are hierarchically aggregated into feature banks for later use at the network output. In parallel, a lightweight hierarchical attention mechanism extracts the most relevant features from the network into attention banks, improving the final output and preventing information loss through the successive operations inside the network. Processing is therefore split into two independent computation paths that can be carried out simultaneously, resulting in a highly efficient and effective model for reconstructing fine details in high-resolution images from their low-resolution counterparts. Our proposed architecture surpasses state-of-the-art performance on several datasets, while maintaining a relatively low computation and memory footprint.
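The two-path aggregation described in the abstract can be sketched as a toy NumPy example. The residual blocks, the channel-attention weighting, and the final fusion below are illustrative stand-ins chosen for brevity, not the paper's actual layers: each block's output is appended to a feature bank, a parallel path collects attention weights into an attention bank, and both banks are combined only at the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    # Toy residual block: nonlinear transform plus identity skip connection.
    return x + np.tanh(x @ w)

def channel_attention(feats):
    # Toy channel attention: softmax weights from mean activation per channel.
    scores = feats.mean(axis=1)
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Stand-in input "features": 4 channels of length-8 signals (in place of conv maps).
x = rng.standard_normal((4, 8))
weights = [rng.standard_normal((8, 8)) * 0.1 for _ in range(3)]

feature_bank, attention_bank = [], []
h = x
for w in weights:
    h = residual_block(h, w)
    feature_bank.append(h)                        # hierarchical feature aggregation
    attention_bank.append(channel_attention(h))   # parallel attention path

# Fuse the banks at the network output: attention-weighted sum of banked features.
out = sum(a[:, None] * f for a, f in zip(attention_bank, feature_bank))
print(out.shape)  # (4, 8)
```

Because the attention path only reads intermediate features and the banks are fused once at the end, the two paths can run concurrently, which is the source of the efficiency claim in the abstract.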