Paper Title
Diffsound: Discrete Diffusion Model for Text-to-sound Generation
Paper Authors
Paper Abstract
Generating sound effects that humans want is an important topic. However, there are few studies in this area for sound generation. In this study, we investigate generating sound conditioned on a text prompt and propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder. The framework first uses the decoder to transfer the text features extracted from the text encoder into a mel-spectrogram with the help of the VQ-VAE, and then the vocoder transforms the generated mel-spectrogram into a waveform. We found that the decoder significantly influences the generation performance, so we focus on designing a good decoder in this study. We begin with the traditional autoregressive (AR) decoder, which has been shown to be a state-of-the-art method in previous sound generation works. However, the AR decoder always predicts the mel-spectrogram tokens one by one in order, which introduces unidirectional bias and error-accumulation problems. Moreover, with the AR decoder, the generation time increases linearly with the sound duration. To overcome these shortcomings, we propose a non-autoregressive decoder based on the discrete diffusion model, named Diffsound. Specifically, Diffsound predicts all of the mel-spectrogram tokens in one step and then refines the predicted tokens over subsequent steps, so the best-predicted result can be obtained after several steps. Our experiments show that, compared with the AR decoder, the proposed Diffsound not only produces better text-to-sound generation results but also generates sound faster, e.g., MOS 3.56 vs. 2.786, with a generation speed five times that of the AR decoder.
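To make the described pipeline concrete, below is a minimal PyTorch-style sketch of the inference flow (text encoder → non-autoregressive Diffsound decoder over VQ-VAE mel-spectrogram tokens → vocoder). All module interfaces, parameter values, and the greedy per-step refinement are illustrative assumptions based only on the abstract, not the authors' released implementation; in particular, a real discrete diffusion decoder samples from the learned reverse posterior rather than taking an argmax at each step.

```python
import torch

@torch.no_grad()
def generate_sound(text, text_encoder, decoder, vqvae, vocoder,
                   num_tokens=256, mask_id=512, num_steps=100):
    """Hypothetical text-to-sound inference sketch:
    text -> mel-spectrogram tokens -> mel-spectrogram -> waveform.
    `num_tokens` and `mask_id` are placeholder values, not the paper's settings."""
    # 1) Encode the text prompt into conditioning features.
    text_feat = text_encoder(text)                       # e.g., (1, T_text, D)

    # 2) Non-autoregressive discrete diffusion decoding: start from a fully
    #    masked token sequence and refine ALL positions jointly at every step,
    #    instead of predicting tokens one by one as an AR decoder would.
    tokens = torch.full((1, num_tokens), mask_id, dtype=torch.long)
    for t in reversed(range(num_steps)):
        logits = decoder(tokens, text_feat, timestep=t)  # (1, num_tokens, vocab)
        tokens = logits.argmax(dim=-1)                   # simplified greedy refinement

    # 3) Map discrete codes back to a mel-spectrogram with the VQ-VAE decoder,
    #    then synthesize the waveform with the vocoder.
    mel = vqvae.decode(tokens)
    waveform = vocoder(mel)
    return waveform
```

Because every refinement pass updates the whole token sequence in parallel, the number of decoder calls depends on `num_steps` rather than on the sound duration, which is the source of the speedup over the AR decoder claimed in the abstract.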