Paper Title
Multi-task Regularization Based on Infrequent Classes for Audio Captioning
Paper Authors
Paper Abstract
Audio captioning is a multi-modal task, focusing on using natural language to describe the contents of general audio. Most audio captioning methods are based on deep neural networks, employing an encoder-decoder scheme and a dataset with audio clips and corresponding natural language descriptions (i.e. captions). A significant challenge for audio captioning is the distribution of words in the captions: some words are very frequent but acoustically non-informative, i.e. function words (e.g. "a", "the"), while other words are infrequent but informative, i.e. content words (e.g. adjectives, nouns). In this paper we propose two methods to mitigate this class imbalance problem. First, in an autoencoder setting for audio captioning, we weight each word's contribution to the training loss inversely proportionally to its number of occurrences in the whole dataset. Second, in addition to the multi-class, word-level audio captioning task, we define a multi-label side task based on clip-level content word detection, training a separate decoder for it. We use the loss from the second task to regularize the jointly trained encoder for the audio captioning task. We evaluate our methods on Clotho, a recently published, wide-scale audio captioning dataset, and our results show a 37\% relative improvement in the SPIDEr metric over the baseline method.