Paper Title
Understanding Pre-trained BERT for Aspect-based Sentiment Analysis
Paper Authors
Paper Abstract
This paper analyzes the pre-trained hidden representations learned from reviews on BERT for tasks in aspect-based sentiment analysis (ABSA). Our work is motivated by the recent progress in BERT-based language models for ABSA. However, it is not clear how the general proxy task of (masked) language modeling, trained on an unlabeled corpus without annotations of aspects or opinions, can provide important features for downstream tasks in ABSA. By leveraging the annotated datasets in ABSA, we investigate both the attentions and the learned representations of BERT pre-trained on reviews. We find that BERT uses very few self-attention heads to encode context words (such as prepositions or pronouns indicating an aspect) and opinion words for an aspect. Most features in the representation of an aspect are dedicated to the fine-grained semantics of the domain (or product category) and the aspect itself, instead of carrying summarized opinions from its context. We hope this investigation can help future research in improving self-supervised learning, unsupervised learning, and fine-tuning for ABSA. The pre-trained model and code can be found at https://github.com/howardhsu/BERT-for-RRC-ABSA.
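To make the kind of analysis described above concrete, the sketch below shows one hedged way to inspect per-head self-attention from a pre-trained BERT on a review sentence using the Hugging Face transformers library. It is not the authors' analysis code: "bert-base-uncased" stands in for the review-pretrained checkpoint released in the repository, and the example sentence, aspect token ("battery"), and opinion token ("great") are illustrative choices only.

# Minimal sketch: rank (layer, head) pairs by how much attention an aspect
# token places on an opinion token. Assumes `torch` and `transformers` are
# installed; the checkpoint and example tokens are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The battery life of this laptop is great."
inputs = tokenizer(sentence, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
aspect_idx = tokens.index("battery")   # hypothetical aspect token
opinion_idx = tokens.index("great")    # hypothetical opinion token

scores = []
for layer, att in enumerate(outputs.attentions):
    for head in range(att.size(1)):
        weight = att[0, head, aspect_idx, opinion_idx].item()
        scores.append((weight, layer, head))

# Print the heads that most strongly connect the aspect to the opinion word.
for weight, layer, head in sorted(scores, reverse=True)[:5]:
    print(f"layer {layer:2d} head {head:2d}  aspect->opinion attention = {weight:.3f}")

Aggregating such scores over the annotated aspect/opinion pairs in ABSA datasets, rather than a single hand-picked sentence, is how one would turn this probe into the kind of quantitative evidence the abstract refers to.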