Paper Title

Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs

Paper Authors

Bo-Kyeong Kim, Shinkook Choi, Hancheol Park

Paper Abstract

Pruning effectively compresses overparameterized models. Despite the success of pruning methods for discriminative models, applying them for generative models has been relatively rarely approached. This study conducts structured pruning on U-Net generators of conditional GANs. A per-layer sensitivity analysis confirms that many unnecessary filters exist in the innermost layers near the bottleneck and can be substantially pruned. Based on this observation, we prune these filters from multiple inner layers or suggest alternative architectures by completely eliminating the layers. We evaluate our approach with Pix2Pix for image-to-image translation and Wav2Lip for speech-driven talking face generation. Our method outperforms global pruning baselines, demonstrating the importance of properly considering where to prune for U-Net generators.
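
To make the idea of pruning the innermost, bottleneck-adjacent filters concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code): it ranks the filters of a bottleneck convolution by L1 weight norm, keeps the top fraction, and shrinks the input channels of the adjacent decoder layer to match. The toy layer sizes, the 0.25 keep ratio, and the L1-norm importance criterion are illustrative assumptions.

# Hypothetical sketch: L1-norm structured pruning of a bottleneck-adjacent conv
# layer and the decoder layer that consumes its output. Not the authors' code;
# layer sizes and the keep ratio are illustrative assumptions.
import torch
import torch.nn as nn


def prune_conv_filters(conv, keep_ratio):
    """Keep the output filters of a Conv2d with the largest L1 weight norms."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1 norm
    keep = torch.argsort(scores, descending=True)[:n_keep].sort().values
    slim = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                     conv.stride, conv.padding, bias=conv.bias is not None)
    slim.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        slim.bias.data = conv.bias.data[keep].clone()
    return slim, keep


def shrink_deconv_inputs(deconv, keep):
    """Align a following ConvTranspose2d with the surviving filters.

    ConvTranspose2d weights are laid out as (in_channels, out_channels, kH, kW),
    so the kept input channels are selected along dim 0.
    """
    slim = nn.ConvTranspose2d(len(keep), deconv.out_channels, deconv.kernel_size,
                              deconv.stride, deconv.padding,
                              bias=deconv.bias is not None)
    slim.weight.data = deconv.weight.data[keep].clone()
    if deconv.bias is not None:
        slim.bias.data = deconv.bias.data.clone()
    return slim


# Toy innermost pair of an encoder-decoder: prune only near the bottleneck.
bottleneck = nn.Conv2d(64, 128, 3, stride=2, padding=1)
up_inner = nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1)

bottleneck, kept = prune_conv_filters(bottleneck, keep_ratio=0.25)
up_inner = shrink_deconv_inputs(up_inner, kept)

x = torch.randn(1, 64, 16, 16)       # feature map entering the bottleneck
y = up_inner(bottleneck(x))          # output shape is unchanged: (1, 64, 16, 16)
print(y.shape)

In an actual U-Net generator, decoder layers also receive concatenated skip features, so the input-channel selection must account for that concatenation; the alternative architectures mentioned in the abstract go further and remove the innermost layers entirely rather than slimming them.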
