Paper Title
Exploring End-to-End Multi-channel ASR with Bias Information for Meeting Transcription
Paper Authors
Paper Abstract
Joint optimization of a multi-channel front-end and automatic speech recognition (ASR) has attracted much interest. While promising results have been reported for various tasks, past studies on its application to meeting transcription were limited to small-scale experiments. It is still unclear whether such a joint framework can be beneficial in a more practical setup where a massive amount of single-channel training data can be leveraged to build a strong ASR back-end. In this work, we present our investigation into the joint modeling of a mask-based beamformer and an attention-encoder-decoder-based ASR system in a setting where we have 75k hours of single-channel data and a relatively small amount of real multi-channel data for model training. We explore effective training procedures, including a comparison of simulated and real multi-channel training data. To guide recognition towards a target speaker and to handle overlapped speech, we also explore various combinations of bias information, such as directions of arrival and speaker profiles. We propose an effective location-bias integration method for the beamformer network, called deep concatenation. In our evaluation on various meeting recordings, we show that the proposed framework achieves a substantial word error rate reduction.
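The abstract's "deep concatenation" refers to injecting location bias into the beamformer's mask-estimation network. As a rough sketch of one plausible reading of the term, the code below concatenates a direction-of-arrival (DOA) embedding to the input of every layer of the mask estimator rather than only the first. The class name DeepConcatMaskEstimator, the use of plain feed-forward layers, and all dimensions are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; not the paper's actual implementation.
# "Deep concatenation": the location-bias embedding is concatenated to the
# input of *every* layer of the mask-estimation network, not just the first.
import torch
import torch.nn as nn


class DeepConcatMaskEstimator(nn.Module):
    """Mask-estimation network with a DOA bias vector concatenated
    into every layer (hypothetical realization of deep concatenation)."""

    def __init__(self, feat_dim=257, bias_dim=16, hidden=256, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = feat_dim
        for _ in range(num_layers):
            # Each layer sees its normal input plus the bias embedding.
            self.layers.append(nn.Linear(in_dim + bias_dim, hidden))
            in_dim = hidden
        self.out = nn.Linear(hidden + bias_dim, feat_dim)

    def forward(self, feats, bias):
        # feats: (batch, time, feat_dim)  spectral features of one channel
        # bias:  (batch, bias_dim)        DOA embedding for the target speaker
        b = bias.unsqueeze(1).expand(-1, feats.size(1), -1)
        x = feats
        for layer in self.layers:
            x = torch.relu(layer(torch.cat([x, b], dim=-1)))
        # Sigmoid mask in [0, 1] over the target speaker's time-frequency bins.
        return torch.sigmoid(self.out(torch.cat([x, b], dim=-1)))


# Usage sketch: 4 s of audio at 100 frames/s, 257 STFT magnitude bins.
model = DeepConcatMaskEstimator()
feats = torch.randn(2, 400, 257)
doa_bias = torch.randn(2, 16)
mask = model(feats, doa_bias)  # shape (2, 400, 257), values in [0, 1]
```

Re-injecting the bias at every layer, rather than only at the input, presumably keeps the target-speaker cue from being washed out in deeper layers, which is the intuition the name "deep concatenation" suggests.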