Title

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

Authors

Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

Abstract

In this paper, we present the results of an auditing study performed on YouTube aimed at investigating how quickly a user can get into a misinformation filter bubble, but also what it takes to "burst the bubble", i.e., revert the bubble enclosure. We employ a sock puppet audit methodology, in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content. They then try to burst the bubbles and reach more balanced recommendations by watching misinformation-debunking content. We record search results, home page results, and recommendations for the watched videos. Overall, we recorded 17,405 unique videos, out of which we manually annotated 2,914 for the presence of misinformation. The labeled data was used to train a machine learning model that classifies videos into three classes (promoting, debunking, neutral) with an accuracy of 0.82. We use the trained model to classify the remaining videos, which would not have been feasible to annotate manually. Using both the manually and automatically annotated data, we observe the misinformation bubble dynamics for a range of audited topics. Our key finding is that even though filter bubbles do not appear in some situations, when they do, it is possible to burst them by watching misinformation-debunking content (although this manifests differently from topic to topic). We also observe a sudden decrease in the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations. Finally, when comparing our results with a previous similar study, we do not observe significant improvements in the overall quantity of recommended misinformation content.
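
The abstract mentions training a machine learning model to sort videos into three classes (promoting, debunking, neutral) and then applying it to the videos that were not annotated manually. Below is a minimal, hypothetical sketch of that kind of three-class pipeline using scikit-learn; the paper's actual features, model, and data are not reproduced here, and the toy texts, labels, and the `unlabeled` list are illustrative placeholders only. A TF-IDF plus logistic regression baseline stands in for whatever classifier the authors actually used.

```python
# Hedged sketch: three-class video classification (promoting / debunking / neutral).
# Assumes each annotated video is reduced to a text string (e.g., title plus
# transcript snippet); the real study's features and model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy placeholder data; in the study, the 2,914 manually annotated videos
# would take this place.
texts = [
    "vaccines cause autism hidden truth",
    "the earth is flat and nasa hides it",
    "doctor debunks the vaccine autism myth",
    "scientists explain why the earth is not flat",
    "how to assemble a bookshelf",
    "relaxing piano music for studying",
]
labels = ["promoting", "promoting", "debunking", "debunking", "neutral", "neutral"]

# Hold out a stratified test split so every class appears in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# TF-IDF features over unigrams/bigrams feeding a multinomial logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# The paper reports an accuracy of 0.82 on its own data; this toy example
# does not reproduce that result.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Classify the remaining (unlabeled) videos that were not annotated manually.
unlabeled = ["new footage proves the moon landing was staged"]
print(model.predict(unlabeled))
```

The same pattern (fit on the manually labeled subset, then predict labels for the rest) lets the bubble dynamics be measured over all 17,405 recorded videos rather than only the manually annotated ones.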
