Paper Title
Unsupervised Reinforcement Adaptation for Class-Imbalanced Text Classification
Paper Authors
Paper Abstract
Class imbalance naturally exists when models are trained and tested in different domains. Unsupervised domain adaptation (UDA) improves model performance using only annotations accessible from the source domain and unlabeled data from the target domain. However, existing state-of-the-art UDA models learn domain-invariant representations and are evaluated primarily on class-balanced data across domains. In this work, we propose an unsupervised domain adaptation approach via reinforcement learning that jointly leverages feature variants and imbalanced labels across domains. We experiment on the text classification task because of its easily accessible datasets, and compare the proposed method with five baselines. Experiments on three datasets demonstrate that our proposed method can effectively learn robust domain-invariant representations and successfully adapt text classifiers to imbalanced classes across domains. The code is available at https://github.com/woqingdoua/ImbalanceClass.
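To make the abstract's high-level description concrete, below is a minimal, hypothetical PyTorch sketch of one way a REINFORCE-style policy could re-weight labeled source examples while a text classifier adapts to an unlabeled target domain under class imbalance. This is not the authors' implementation: the encoder, classifier, and policy architectures, the feature-gap and class-rarity reward terms, and all hyperparameters are illustrative assumptions; the actual method is in the linked repository.

```python
# Hypothetical sketch -- NOT the authors' implementation -- of a REINFORCE-style
# policy that selects labeled source examples for unsupervised domain adaptation
# under class imbalance. Architectures, reward terms, and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
from torch.distributions import Bernoulli

torch.manual_seed(0)
EMB_DIM, FEAT_DIM, NUM_CLASSES = 300, 64, 2

encoder = nn.Sequential(nn.Linear(EMB_DIM, FEAT_DIM), nn.ReLU())  # shared text encoder over (e.g.) averaged embeddings
classifier = nn.Linear(FEAT_DIM, NUM_CLASSES)                     # label predictor trained on source annotations only
policy = nn.Sequential(nn.Linear(FEAT_DIM, 1), nn.Sigmoid())      # RL agent: keep-probability per source example

opt_task = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
opt_policy = torch.optim.Adam(policy.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss(reduction="none")

for step in range(200):
    # Toy batches standing in for embedded source (labeled, imbalanced) and target (unlabeled) text.
    x_src = torch.randn(32, EMB_DIM)
    y_src = (torch.rand(32) < 0.2).long()          # skewed labels: roughly 20% positive class
    x_tgt = torch.randn(32, EMB_DIM) + 0.5         # shifted features stand in for the domain gap

    f_src, f_tgt = encoder(x_src), encoder(x_tgt)

    # The policy samples a keep/drop action for each source example.
    keep_prob = policy(f_src.detach()).squeeze(-1)
    dist = Bernoulli(probs=keep_prob)
    action = dist.sample()

    # Train encoder + classifier only on the examples the policy kept.
    loss_vec = ce(classifier(f_src), y_src)
    task_loss = (loss_vec * action).sum() / action.sum().clamp(min=1.0)
    opt_task.zero_grad()
    task_loss.backward()
    opt_task.step()

    # Reward kept examples that (a) look target-like in feature space and
    # (b) belong to rare classes in the batch -- jointly addressing the domain
    # gap and the label imbalance.
    with torch.no_grad():
        domain_score = -(f_src - f_tgt.mean(0)).pow(2).sum(dim=1)
        class_counts = torch.bincount(y_src, minlength=NUM_CLASSES).float()
        rarity = (1.0 / class_counts.clamp(min=1.0))[y_src]
        reward = domain_score + rarity
        reward = reward - reward.mean()            # simple baseline for variance reduction

    # REINFORCE update for the selection policy.
    policy_loss = -(dist.log_prob(action) * reward).mean()
    opt_policy.zero_grad()
    policy_loss.backward()
    opt_policy.step()
```

The sketch separates the two ingredients the abstract names: a per-example feature-gap score stands in for learning domain-invariant representations, and an inverse-class-frequency bonus stands in for handling imbalanced labels; both feed the policy's reward rather than the classifier's loss directly.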