Paper Title
Listen Attentively, and Spell Once: Whole Sentence Generation via a Non-Autoregressive Architecture for Low-Latency Speech Recognition
Paper Authors
Paper Abstract
Although attention-based end-to-end models have achieved promising performance in speech recognition, the multi-pass forward computation of beam search increases inference time cost, which limits their practical application. To address this issue, we propose a non-autoregressive end-to-end speech recognition system called LASO (Listen Attentively, and Spell Once). Owing to the non-autoregressive property, LASO predicts each textual token in the sequence without depending on the other tokens. Without beam search, one-pass propagation greatly reduces the inference time cost of LASO. Moreover, because the model is built on an attention-based feedforward structure, its computation can be efficiently parallelized. We conduct experiments on the publicly available Chinese dataset AISHELL-1. LASO achieves a character error rate of 6.4%, outperforming the state-of-the-art autoregressive transformer model (6.7%). The average inference latency is 21 ms, which is 1/50 that of the autoregressive transformer model.
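The abstract's central mechanism, predicting every token of the sentence in a single attention-based forward pass with no beam search, can be illustrated with a minimal sketch. The module below is a hypothetical approximation under assumed design choices, not the paper's actual architecture: the name NonAutoregressiveSpeller, the learned per-position queries, and all dimensions are illustrative assumptions.

```python
# Minimal sketch of one-pass non-autoregressive decoding (assumptions noted
# above); module names and dimensions are illustrative, not from the paper.
import torch
import torch.nn as nn

class NonAutoregressiveSpeller(nn.Module):
    """Predicts all output tokens in a single forward pass: learned position
    queries attend over encoder (acoustic) features, so no token depends on
    previously decoded tokens and no beam search is needed."""

    def __init__(self, vocab_size=4233, d_model=256, n_heads=4, max_len=60):
        super().__init__()
        # One learned query per output position (hypothetical design choice).
        self.position_queries = nn.Parameter(torch.randn(max_len, d_model))
        self.cross_attention = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.feedforward = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.output_proj = nn.Linear(d_model, vocab_size)

    def forward(self, encoder_out):
        # encoder_out: (batch, time, d_model) acoustic representations.
        batch = encoder_out.size(0)
        queries = self.position_queries.unsqueeze(0).expand(batch, -1, -1)
        # Every output position attends to the speech features in parallel.
        attended, _ = self.cross_attention(queries, encoder_out, encoder_out)
        hidden = attended + self.feedforward(attended)
        return self.output_proj(hidden)  # (batch, max_len, vocab_size)

# One-pass greedy inference: a single forward call, then an argmax per
# position, in contrast to the token-by-token loop (and repeated forward
# passes) of an autoregressive beam-search decoder.
decoder = NonAutoregressiveSpeller()
acoustic_features = torch.randn(1, 200, 256)  # dummy encoder output
logits = decoder(acoustic_features)
tokens = logits.argmax(dim=-1)  # all positions decoded simultaneously
```

Because every position is decoded independently in one forward pass, the whole computation is a fixed number of parallel matrix operations, which is what makes the low, length-independent latency reported in the abstract plausible.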