Paper Title

Some people aren't worth listening to: periodically retraining classifiers with feedback from a team of end users

Paper Authors

Joshua Lockhart, Samuel Assefa, Tucker Balch, Manuela Veloso

Paper Abstract

Document classification is ubiquitous in a business setting, but often the end users of a classifier are engaged in an ongoing feedback-retrain loop with the team that maintain it. We consider this feedback-retrain loop from a multi-agent point of view, considering the end users as autonomous agents that provide feedback on the labelled data provided by the classifier. This allows us to examine the effect on the classifier's performance of unreliable end users who provide incorrect feedback. We demonstrate a classifier that can learn which users tend to be unreliable, filtering their feedback out of the loop, thus improving performance in subsequent iterations.
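The abstract describes a feedback-retrain loop in which end-user feedback is screened by estimated reliability before each retrain. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' implementation: it uses synthetic data, scikit-learn's LogisticRegression, and an assumed majority-vote heuristic for estimating each user's reliability; the user names, reliabilities, and the 0.75 trust threshold are illustrative choices, not values from the paper.

```python
# Minimal sketch (assumptions labelled) of a feedback-retrain loop with
# unreliable end users. Reliability is estimated here by agreement with the
# per-document majority vote; the paper's own reliability-learning method
# is not reproduced.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "documents": feature vectors with binary labels (assumption).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_pool, y_pool = X[:200], y[:200], X[200:], y[200:]

# Simulated team of end users: each flips the true label with
# probability (1 - reliability). The last two users are unreliable.
user_reliability = np.array([0.95, 0.9, 0.9, 0.55, 0.5])


def collect_feedback(true_labels):
    """Each user reports a (possibly corrupted) label for every document."""
    n_users, n_docs = len(user_reliability), len(true_labels)
    flips = rng.random((n_users, n_docs)) > user_reliability[:, None]
    return np.where(flips, 1 - true_labels, true_labels)  # (n_users, n_docs)


clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for iteration in range(5):
    # A new batch of documents is classified and sent to the users for review.
    batch = rng.choice(len(X_pool), size=200, replace=False)
    Xb, yb = X_pool[batch], y_pool[batch]
    feedback = collect_feedback(yb)

    # Estimate reliability as agreement with the per-document majority vote.
    majority = (feedback.mean(axis=0) > 0.5).astype(int)
    agreement = (feedback == majority).mean(axis=1)
    trusted = agreement >= 0.75  # assumed threshold; other feedback is dropped

    # Retrain on the original data plus labels aggregated from trusted users only.
    filtered_labels = (feedback[trusted].mean(axis=0) > 0.5).astype(int)
    X_train = np.vstack([X_train, Xb])
    y_train = np.concatenate([y_train, filtered_labels])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    acc = clf.score(X_pool, y_pool)
    print(f"iteration {iteration}: trusted users = {trusted.sum()}/{len(trusted)}, "
          f"accuracy = {acc:.3f}")
```

Under these assumptions, the two low-reliability users fall below the agreement threshold after the first batch, so later retrains use only the trusted users' feedback, which is the effect the abstract reports (improved performance in subsequent iterations).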
