Paper Title


Ultra-Lightweight Speech Separation via Group Communication

Authors

Yi Luo, Cong Han, Nima Mesgarani

Abstract


Model size and complexity remain the biggest challenges in the deployment of speech enhancement and separation systems on low-resource devices such as earphones and hearing aids. Although methods such as compression, distillation, and quantization can be applied to large models, they often come at a cost to model performance. In this paper, we provide a simple model design paradigm that explicitly designs ultra-lightweight models without sacrificing performance. Motivated by sub-band frequency-LSTM (F-LSTM) architectures, we introduce group communication (GroupComm), in which a feature vector is split into smaller groups and a small processing block is used to perform inter-group communication. Unlike standard F-LSTM models, where the sub-band outputs are concatenated, an ultra-small module is applied to all groups in parallel, which allows a significant decrease in model size. Experimental results show that, compared with a strong baseline model that is already lightweight, GroupComm achieves on-par performance with 35.6 times fewer parameters and 2.3 times fewer operations.
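To make the core idea concrete, below is a minimal sketch of the split-communicate-merge pattern the abstract describes. This is not the paper's implementation: GroupComm uses a small recurrent module for the inter-group step, while here a tiny shared mixing matrix (a hypothetical `mix_weights`) stands in for it, so that only the grouping and shared-parameter structure is illustrated.

```python
import numpy as np

def group_communication(feature, num_groups, mix_weights):
    """Sketch of GroupComm-style inter-group communication.

    feature:     (N,) feature vector, with N divisible by num_groups
    mix_weights: (num_groups, num_groups) shared mixing matrix; a
                 stand-in for the paper's small recurrent block
    """
    n = feature.shape[0]
    group_size = n // num_groups
    # Split the full feature vector into smaller groups.
    groups = feature.reshape(num_groups, group_size)  # (K, N/K)
    # Inter-group communication: every output group combines
    # information from all input groups. The shared module is tiny
    # (K x K here), independent of the full feature dimension N,
    # which is what keeps the model ultra-lightweight.
    mixed = mix_weights @ groups                      # (K, N/K)
    # Merge the groups back into a single feature vector.
    return mixed.reshape(n)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
w = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
y = group_communication(x, num_groups=8, mix_weights=w)
print(y.shape)  # (64,)
```

In the paper itself, the communication module runs over the groups in parallel with shared parameters, which is why the parameter count drops so sharply relative to applying one large module to the full feature vector.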
