Paper Title


Persuasive Dialogue Understanding: the Baselines and Negative Results

Paper Authors

Hui Chen, Deepanway Ghosal, Navonil Majumder, Amir Hussain, Soujanya Poria

Paper Abstract


Persuasion aims at forming one's opinion and action via a series of persuasive messages containing the persuader's strategies. Due to its potential application in persuasive dialogue systems, the task of persuasive strategy recognition has gained much attention lately. Previous methods for user intent recognition in dialogue systems adopt recurrent neural networks (RNNs) or convolutional neural networks (CNNs) to model context in the conversational history, neglecting the tactic history and intra-speaker relations. In this paper, we demonstrate the limitations of a Transformer-based approach coupled with a Conditional Random Field (CRF) for the task of persuasive strategy recognition. In this model, we leverage inter- and intra-speaker contextual semantic features, as well as label dependencies, to improve recognition. Despite extensive hyper-parameter optimization, this architecture fails to outperform the baseline methods. We observe two negative results. First, the CRF cannot capture persuasive label dependencies, possibly because strategies in persuasive dialogues do not follow any strict grammar or rules, as is the case in Named Entity Recognition (NER) or part-of-speech (POS) tagging. Second, a Transformer encoder trained from scratch is less capable of capturing sequential information in persuasive dialogues than a Long Short-Term Memory (LSTM) network. We attribute this to the vanilla Transformer encoder not effectively modeling the relative position information of sequence elements.
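The sketch below illustrates the kind of model the abstract compares: a dialogue-level contextual encoder (bidirectional LSTM or vanilla Transformer encoder) run over pre-computed utterance embeddings, with either a softmax head or a CRF output layer that tags each utterance with a persuasive strategy. This is a minimal, assumption-laden sketch rather than the authors' implementation: the class name StrategyTagger, all dimensions and hyper-parameters, and the use of the pytorch-crf package for the CRF layer are illustrative choices not taken from the paper.

```python
# Minimal sketch of a contextual utterance tagger with an optional CRF head.
# All names and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf


class StrategyTagger(nn.Module):
    """Tags each utterance in a dialogue with a persuasive strategy label."""

    def __init__(self, utt_dim=768, hidden=256, num_tags=11,
                 context_encoder="lstm", use_crf=True):
        super().__init__()
        self.use_crf = use_crf
        if context_encoder == "transformer":
            # Vanilla Transformer encoder over the sequence of utterances;
            # the abstract reports this variant underperforms the LSTM one.
            layer = nn.TransformerEncoderLayer(
                d_model=utt_dim, nhead=8, dim_feedforward=1024,
                batch_first=True)
            self.context = nn.TransformerEncoder(layer, num_layers=2)
            out_dim = utt_dim
        else:
            self.context = nn.LSTM(utt_dim, hidden, batch_first=True,
                                    bidirectional=True)
            out_dim = 2 * hidden
        self.emit = nn.Linear(out_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True) if use_crf else None

    def forward(self, utt_emb, tags=None, mask=None):
        # utt_emb: (batch, num_utterances, utt_dim) utterance embeddings,
        # e.g. pooled sentence vectors; mask: (batch, num_utterances) bool,
        # True for real (non-padded) utterances, right-padded.
        if isinstance(self.context, nn.LSTM):
            ctx, _ = self.context(utt_emb)
        else:
            ctx = self.context(utt_emb, src_key_padding_mask=~mask)
        emissions = self.emit(ctx)
        if tags is not None:                     # training: return a loss
            if self.use_crf:
                return -self.crf(emissions, tags, mask=mask,
                                 reduction="mean")
            return nn.functional.cross_entropy(emissions[mask], tags[mask])
        # inference: predicted strategy label(s) per utterance
        if self.use_crf:
            return self.crf.decode(emissions, mask=mask)
        return emissions.argmax(-1)
```

Swapping `context_encoder` between "lstm" and "transformer", and toggling `use_crf`, reproduces the kind of ablation the abstract discusses: whether CRF transitions over strategy labels and a from-scratch Transformer context encoder help or hurt relative to a plain LSTM with a softmax head.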
