Paper Title
Learning to Model Editing Processes
Paper Authors
Paper Abstract
Most existing sequence generation models produce outputs in one pass, usually left-to-right. However, this contrasts with a more natural approach that humans use in generating content: iterative refinement and editing. Recent work has introduced edit-based models for various tasks (such as neural machine translation and text style transfer), but these generally model a single edit step. In this work, we propose modeling editing processes, i.e., the whole process of iteratively generating sequences. We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits. We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits.
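As an illustration of what a likelihood over multi-step edits might look like (a minimal sketch using hypothetical notation not taken from the abstract: x_0, ..., x_T for successive revisions of a document, with x_T the final version), the probability of an entire editing process can be written with the chain rule as

p(x_0, x_1, \dots, x_T) \;=\; p(x_0) \prod_{t=1}^{T} p\bigl(x_t \mid x_0, \dots, x_{t-1}\bigr),

whereas a single-step edit model would learn only p(x_t \mid x_{t-1}), ignoring the earlier revision history of the document.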