Title
Cloning Ideology and Style using Deep Learning
Authors
Abstract
Text generation tasks have attracted the attention of researchers in recent years because of their large-scale applications. In the past, many researchers focused on task-based text generation. Our research focuses on text generation based on the ideology and style of a specific author, including text generation on topics that the same author has not written about in the past. Our trained model requires an input prompt containing the initial few words of text, from which it produces a few paragraphs of text based on the ideology and style of the author on which the model is trained. Our methodology for accomplishing this task is based on Bi-LSTM. The Bi-LSTM model is used to make predictions at the character level; during training, the corpus of a specific author is used along with a ground-truth corpus. A pre-trained model is used to identify the sentences of the ground truth that contradict the author's corpus, in order to bias our language model toward the author's ideology. During training, we achieved a perplexity score of 2.23 at the character level. The experiments show a perplexity score of around 3 on the test dataset.
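The character-level perplexity scores reported above can be interpreted as the exponential of the model's average negative log-likelihood per character. A minimal sketch of that computation (the per-character probabilities below are hypothetical illustration values, not outputs of the paper's model):

```python
import math

def char_perplexity(log_probs):
    """Perplexity = exp(mean negative log-likelihood) over characters."""
    nll = -sum(log_probs) / len(log_probs)
    return math.exp(nll)

# Hypothetical probabilities a character-level model might assign to
# each observed character in a short sequence.
probs = [0.5, 0.4, 0.45, 0.5]
log_probs = [math.log(p) for p in probs]

# A perplexity near 2 means the model is, on average, about as uncertain
# as choosing uniformly among ~2 candidate characters.
print(round(char_perplexity(log_probs), 2))
```

A perplexity of 2.23 at the character level therefore corresponds to the model effectively choosing among roughly 2.23 equally likely characters at each step, which is why lower scores indicate a tighter fit to the author's text.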