Paper Title

Simulating and Modeling the Risk of Conversational Search

Paper Authors

Zhenduo Wang, Qingyao Ai

Paper Abstract

In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the NLP and IR communities have focused on identifying the need for asking clarifying questions and on methods for generating them. These works assume that asking clarifying questions is a safe alternative to retrieving results. Because existing conversational search models are far from perfect, it is both possible and common for them to retrieve or generate bad clarifying questions. Asking too many clarifying questions can also drain users' patience when they prefer search efficiency over correctness. Hence, by asking clarifying questions, these models can backfire and harm the user's search experience. In this work, we propose a simulation framework to model the risk of asking questions in conversational search, and we further revise a risk-aware conversational search model to control that risk. We show the model's robustness and effectiveness through extensive experiments on three conversational datasets, MSDialog, the Ubuntu Dialog Corpus, and OpenDialKG, in which we compare it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperforms all baselines in most of our experiments.
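
The abstract describes a risk-control module that decides, at each turn, whether to return retrieved results or to ask a clarifying question scored by a re-ranker. The paper's actual module is learned from data; the sketch below is only a minimal illustration of that decision interface using a hand-tuned heuristic, and every name in it (`TurnScores`, `decide_action`, `patience_cost`, `risk_threshold`) is a hypothetical assumption rather than the authors' API.

```python
"""Minimal sketch of a risk-control decision step for conversational search.

Assumption (not from the paper): the re-ranker exposes a score for the best
candidate answer and for the best candidate clarifying question, and the
decision trades the expected gain of asking against a per-turn patience cost.
"""

from dataclasses import dataclass


@dataclass
class TurnScores:
    """Re-ranker scores for the current conversation turn (higher is better)."""
    best_answer_score: float     # relevance of the top-ranked answer
    best_question_score: float   # relevance of the top-ranked clarifying question


def decide_action(scores: TurnScores,
                  turns_asked: int,
                  patience_cost: float = 0.05,
                  risk_threshold: float = 0.0) -> str:
    """Return "ask" or "answer" for the current turn.

    The gain from asking is the margin by which the best clarifying question
    outscores the best answer, discounted by a patience cost that grows with
    the number of questions already asked. If the discounted gain does not
    exceed the risk threshold, answering is the safer choice.
    """
    gain_from_asking = scores.best_question_score - scores.best_answer_score
    risk_adjusted_gain = gain_from_asking - patience_cost * (turns_asked + 1)
    return "ask" if risk_adjusted_gain > risk_threshold else "answer"


if __name__ == "__main__":
    # Early in the conversation, a strong clarifying question is worth asking...
    print(decide_action(TurnScores(0.42, 0.61), turns_asked=0))  # -> "ask"
    # ...but after several turns the same margin no longer justifies the risk.
    print(decide_action(TurnScores(0.42, 0.61), turns_asked=5))  # -> "answer"
```

In this toy policy, the patience cost plays the role of the user's preference for search efficiency mentioned in the abstract: the more questions have already been asked, the stronger the evidence must be before asking another one.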
