Paper Title

Connection Reduction of DenseNet for Image Recognition

Authors

Rui-Yang Ju, Jen-Shiun Chiang, Chih-Chia Chen, Yu-Shian Lin

Abstract

Convolutional Neural Networks (CNNs) increase depth by stacking convolutional layers, and deeper network models perform better in image recognition. Empirical research shows that simply stacking convolutional layers does not make the network train better, while skip connections (residual learning) can improve network model performance. For the image classification task, models with globally densely connected architectures perform well on large datasets such as ImageNet, but are not suitable for small datasets such as CIFAR-10 and SVHN. Unlike dense connections, we propose two new algorithms for connecting layers. Baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. The experimental results of image classification on CIFAR-10 and SVHN show that ShortNet1 has a 5% lower test error rate and 25% faster inference time than Baseline. ShortNet2 speeds up inference time by 40% with a smaller loss in test accuracy. Code and pre-trained models are available at https://github.com/RuiyangJu/Connection_Reduction.
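To make the contrast concrete, below is a minimal PyTorch sketch of the two connectivity patterns the abstract compares: a DenseNet-style block in which every layer consumes the concatenation of all preceding feature maps, and a reduced-connection block in which each layer sees only the most recent outputs. Note that `ConvUnit`, `ReducedBlock`, and the `window` rule are illustrative assumptions for exposition, not the actual ShortNet1/ShortNet2 connection rules, which are defined in the paper and the linked repository.

```python
# Minimal sketch (not the authors' implementation) contrasting DenseNet-style
# dense connectivity with a reduced-connection variant. All names and the
# "keep only the last `window` outputs" rule are illustrative assumptions.
import torch
import torch.nn as nn


class ConvUnit(nn.Module):
    """BN-ReLU-Conv unit, the basic layer inside a block."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        return self.body(x)


class DenseBlock(nn.Module):
    """DenseNet-style block: every layer receives ALL previous outputs."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            ConvUnit(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class ReducedBlock(nn.Module):
    """Reduced-connection block: each layer receives only the last `window`
    outputs, so each layer has fewer input channels and inference is cheaper
    (an illustrative stand-in for the ShortNet connection rules)."""
    def __init__(self, in_channels, growth_rate, num_layers, window=2):
        super().__init__()
        self.window = window
        self.layers = nn.ModuleList()
        channels = [in_channels]
        for _ in range(num_layers):
            self.layers.append(ConvUnit(sum(channels[-window:]), growth_rate))
            channels.append(growth_rate)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features[-self.window:], dim=1)))
        return features[-1]


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)  # CIFAR-10-sized feature map (illustrative)
    dense = DenseBlock(16, growth_rate=12, num_layers=4)
    short = ReducedBlock(16, growth_rate=12, num_layers=4, window=2)
    print(dense(x).shape)  # torch.Size([1, 64, 32, 32]): 16 + 4 * 12 channels
    print(short(x).shape)  # torch.Size([1, 12, 32, 32]): last layer's output only
```

The trade-off the sketch exposes is the one the abstract reports: dense connectivity grows each layer's input width with depth, while pruning connections keeps input widths small, reducing computation at inference time at some potential cost in accuracy.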
