Paper Title
Human-in-the-Loop Hate Speech Classification in a Multilingual Context
Paper Authors
Paper Abstract
The shift of public debate to the digital sphere has been accompanied by a rise in online hate speech. While many promising approaches for hate speech classification have been proposed, studies often focus on a single language, usually English, and do not address three key concerns: post-deployment performance, classifier maintenance, and infrastructural limitations. In this paper, we introduce a new human-in-the-loop BERT-based hate speech classification pipeline and trace its development from initial data collection and annotation all the way to post-deployment. Our classifier, trained on data from our original corpus of over 422k examples, is developed specifically for the inherently multilingual setting of Switzerland; with an F1 score of 80.5, it outperforms the currently best-performing BERT-based multilingual classifier by 5.8 F1 points in German and 3.6 F1 points in French. Our systematic evaluations over a 12-month period further highlight the vital importance of continuous, human-in-the-loop classifier maintenance to ensure robust hate speech classification post-deployment.
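For context, the F1 score reported above is the harmonic mean of precision and recall, the standard metric for imbalanced classification tasks such as hate speech detection. A minimal illustration in plain Python (the gold labels and predictions below are invented for demonstration and are not from the paper's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 score for binary classification:
    the harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold labels and classifier predictions (1 = hate speech)
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(f1_score(gold, pred), 3))  # → 0.75
```

A score of 80.5 on this 0–100 scale (i.e. 0.805) thus reflects a balance of precision and recall, and the reported 5.8-point and 3.6-point margins are absolute differences on that scale.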