Paper Title
Enhance Language Identification using Dual-mode Model with Knowledge Distillation
Paper Authors
Paper Abstract
In this paper, we propose to employ a dual-mode framework on the x-vector self-attention (XSA-LID) model with knowledge distillation (KD) to enhance its language identification (LID) performance for both long and short utterances. The dual-mode XSA-LID model is trained by jointly optimizing both the full and short modes, with their respective inputs being the full-length speech and a short clip extracted by a specific Boolean mask; KD is applied to further boost the performance on short utterances. In addition, we investigate the impact of clip-wise linguistic variability and lexical integrity on LID by analyzing the variation of LID performance with respect to the lengths and positions of the mimicked speech clips. We evaluated our approach on the MLS14 data from the NIST 2017 LRE. With the 3s random-location Boolean mask, our proposed method achieved 19.23%, 21.52%, and 8.37% relative improvements in average cost over the XSA-LID model on 3s, 10s, and 30s speech, respectively.
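To make the training scheme concrete, the sketch below illustrates one joint dual-mode step in PyTorch: the same shared model is run in full mode on the full-length features and in short mode on a clip selected by a random-location Boolean mask, with a KD term distilling the full-mode posteriors into the short mode. This is a minimal illustration under assumptions, not the paper's exact recipe: the classifier `model` (standing in for the XSA-LID network), the loss weighting, and the distillation temperature are all hypothetical choices.

```python
import torch
import torch.nn.functional as F

def random_boolean_mask(num_frames: int, clip_frames: int) -> torch.Tensor:
    """Boolean mask selecting a contiguous clip at a random position.

    Assumes num_frames >= clip_frames.
    """
    start = torch.randint(0, num_frames - clip_frames + 1, (1,)).item()
    mask = torch.zeros(num_frames, dtype=torch.bool)
    mask[start:start + clip_frames] = True
    return mask

def dual_mode_step(model, feats, labels, clip_frames,
                   kd_weight=1.0, temperature=2.0):
    """One joint training step: full mode + short mode + KD (full -> short).

    feats:  (batch, num_frames, feat_dim) full-length features.
    labels: (batch,) language labels.
    model:  shared network mapping features to language logits (assumed).
    """
    # Full mode: classify the full-length utterance.
    logits_full = model(feats)
    loss_full = F.cross_entropy(logits_full, labels)

    # Short mode: classify the clip extracted by the Boolean mask,
    # reusing the same shared weights.
    mask = random_boolean_mask(feats.size(1), clip_frames)
    logits_short = model(feats[:, mask, :])
    loss_short = F.cross_entropy(logits_short, labels)

    # KD: match short-mode posteriors to the (detached) full-mode
    # posteriors; temperature and T^2 scaling follow standard KD practice.
    kd_loss = F.kl_div(
        F.log_softmax(logits_short / temperature, dim=-1),
        F.softmax(logits_full.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return loss_full + loss_short + kd_weight * kd_loss
```

Because both modes share one set of parameters, the full mode acts as an in-place teacher for the short mode at every step; detaching the full-mode logits keeps the distillation gradient from flowing back through the teacher branch.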