Paper Title
A Unified Framework for Multi-intent Spoken Language Understanding with Prompting
Paper Authors
Paper Abstract
Multi-intent Spoken Language Understanding has great potential for widespread implementation. Jointly modeling Intent Detection and Slot Filling provides a channel to exploit the correlation between intents and slots. However, current approaches tend to formulate these two sub-tasks differently, which leads to two issues: 1) it hinders models from effectively extracting shared features; 2) fairly complicated structures are introduced to enhance expressive ability, at the cost of the framework's interpretability. In this work, we describe a Prompt-based Spoken Language Understanding (PromptSLU) framework that intuitively unifies the two sub-tasks into the same form by feeding them to a common pre-trained Seq2Seq model. In detail, ID and SF are completed by simply filling the utterance into task-specific prompt templates as input, and they share an output format of key-value pair sequences. Furthermore, variable intents are predicted first and then naturally embedded into prompts to guide slot-value pair inference from a semantic perspective. Finally, inspired by prevalent multi-task learning, we introduce an auxiliary sub-task, which helps to learn relationships among the provided labels. Experimental results show that our framework outperforms several state-of-the-art baselines on two public datasets.
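The prompt-unification idea in the abstract can be sketched as follows. This is a minimal illustration only: the template wording, the semicolon-separated key-value output format, and all function names are assumptions for exposition, not taken from the paper.

```python
def build_id_prompt(utterance: str) -> str:
    # Intent Detection: fill the utterance into a task-specific
    # prompt template (hypothetical wording).
    return f"Detect the intents of the utterance: {utterance}"

def build_sf_prompt(utterance: str, intents: list[str]) -> str:
    # Slot Filling: the intents predicted in the first step are
    # embedded into the prompt to guide slot-value pair inference.
    return (f"Given the intents [{', '.join(intents)}], "
            f"fill the slots of the utterance: {utterance}")

def parse_key_value_output(output: str) -> list[tuple[str, str]]:
    # Both sub-tasks share a key-value pair sequence output format,
    # e.g. "intent: PlayMusic; artist: Queen" (assumed formatting).
    pairs = []
    for segment in output.split(";"):
        key, _, value = segment.partition(":")
        if value:
            pairs.append((key.strip(), value.strip()))
    return pairs
```

A Seq2Seq model would receive `build_id_prompt(...)` first, and its decoded intents would then parameterize `build_sf_prompt(...)`, so both sub-tasks run through one shared model and output format.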