Paper Title
Progressive Sentiment Analysis for Code-Switched Text Data
Paper Authors
Paper Abstract
Multilingual transformer language models have recently attracted much attention from researchers and are used in cross-lingual transfer learning for many NLP tasks such as text classification and named entity recognition. However, similar methods for transfer learning from monolingual text to code-switched text have not been extensively explored, mainly due to the following challenges: (1) a code-switched corpus, unlike a monolingual corpus, consists of more than one language, and existing methods cannot be applied to it efficiently; (2) a code-switched corpus is usually made up of a resource-rich language and a low-resource language, and when multilingual pre-trained language models are used, the final model may be biased towards the resource-rich language. In this paper, we focus on code-switched sentiment analysis, where we have a labelled dataset in the resource-rich language and unlabelled code-switched data. We propose a framework that takes the distinction between the resource-rich and low-resource languages into account. Instead of training on the entire code-switched corpus at once, we create buckets based on the fraction of words in the resource-rich language and progressively train from samples dominated by the resource-rich language to samples dominated by the low-resource language. Extensive experiments across multiple language pairs demonstrate that progressive training improves performance on samples dominated by the low-resource language.
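
The bucketing and progressive-training procedure described in the abstract can be illustrated with a minimal Python sketch. The helpers is_resource_rich (a per-token language identifier) and train_one_bucket (one fine-tuning pass over a bucket) are hypothetical placeholders for this illustration, not names from the paper:

from typing import Callable

def resource_rich_fraction(tokens: list[str],
                           is_resource_rich: Callable[[str], bool]) -> float:
    # Fraction of a sample's tokens written in the resource-rich language.
    if not tokens:
        return 0.0
    return sum(is_resource_rich(t) for t in tokens) / len(tokens)

def make_buckets(samples: list[list[str]],
                 is_resource_rich: Callable[[str], bool],
                 num_buckets: int = 4) -> list[list[list[str]]]:
    # Partition code-switched samples into buckets by their
    # resource-rich word fraction (bucket 0 holds the lowest fractions).
    buckets: list[list[list[str]]] = [[] for _ in range(num_buckets)]
    for tokens in samples:
        frac = resource_rich_fraction(tokens, is_resource_rich)
        # frac == 1.0 is clamped into the last (most resource-rich) bucket.
        idx = min(int(frac * num_buckets), num_buckets - 1)
        buckets[idx].append(tokens)
    return buckets

def progressive_train(model, buckets, train_one_bucket):
    # Fine-tune on buckets dominated by the resource-rich language first,
    # then progressively move towards low-resource dominated buckets.
    for bucket in reversed(buckets):
        if bucket:
            train_one_bucket(model, bucket)
    return model

The number of buckets and the training schedule within each bucket are design choices the paper itself would specify; the sketch only fixes the curriculum ordering, from resource-rich dominated samples down to low-resource dominated ones.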