Paper Title
Spatial Games of Fake News
Paper Authors
Paper Abstract
To curb the spread of fake news on social media platforms, recent studies have considered online crowdsourced fact-checking as one possible intervention to reduce misinformation. However, it remains unclear under what conditions crowdsourced fact-checking efforts deter the spread of misinformation. To address this issue, we model such distributed fact-checking as 'peer policing' that reduces the perceived payoff of sharing or disseminating false information (fake news) and rewards the spread of trustworthy information (real news). By simulating our model on synthetic square lattices and small-world networks, we show that the presence of social network structure enables fake news spreaders to self-organize into echo chambers, thereby boosting the efficacy of fake news and its resistance to fact-checking efforts. Additionally, to study our model in a more realistic setting, we use a Twitter network dataset and examine the effectiveness of deliberately choosing specific individuals to be fact-checkers. We find that targeted fact-checking efforts can be highly effective, achieving the same level of success with as few as one fifth the number of fact-checkers, although the gain depends on the structure of the network in question. In the limit of weak selection, we obtain closed-form analytical conditions for the critical threshold of crowdsourced fact-checking in terms of the payoff values in our fact-checker/fake news game. Our work has practical implications for developing model-based mitigation strategies for controlling the spread of misinformation that interferes with political discourse.
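To make the simulation setup described above concrete, the following is a minimal sketch (not the authors' implementation) of a two-strategy spatial game on a square lattice, with "fact-checker/real-news sharer" and "fake-news spreader" as the strategies and a Fermi imitation rule whose selection-strength parameter can be made small to approximate the weak-selection limit. The lattice size, number of sweeps, selection strength, and all payoff entries are illustrative assumptions, not values from the paper.

```python
import math
import random

L = 50                      # lattice side length (assumption)
STEPS = 200                 # number of update sweeps (assumption)
SELECTION = 0.1             # selection strength; small values approximate weak selection
# Illustrative 2x2 payoff matrix: PAYOFF[(my_strategy, neighbor_strategy)]
# 0 = fact-checker / real-news sharer, 1 = fake-news spreader
PAYOFF = {(0, 0): 1.0, (0, 1): 0.6, (1, 0): 0.2, (1, 1): 1.2}

def neighbors(i, j):
    """Von Neumann neighborhood with periodic boundaries."""
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(grid, i, j):
    """Accumulated payoff of the agent at (i, j) against its four neighbors."""
    return sum(PAYOFF[(grid[i][j], grid[x][y])] for x, y in neighbors(i, j))

def fermi_sweep(grid):
    """One asynchronous sweep: each randomly chosen focal agent may imitate a
    random neighbor with probability given by the Fermi rule."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        x, y = random.choice(neighbors(i, j))
        gain = payoff(grid, x, y) - payoff(grid, i, j)
        if random.random() < 1.0 / (1.0 + math.exp(-SELECTION * gain)):
            grid[i][j] = grid[x][y]

# Start from a random half-and-half mixture of the two strategies.
grid = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]
for _ in range(STEPS):
    fermi_sweep(grid)

fake_fraction = sum(sum(row) for row in grid) / (L * L)
print(f"fraction of fake-news spreaders after {STEPS} sweeps: {fake_fraction:.3f}")
```

With spatial structure, clusters of like-strategy agents (the echo-chamber effect mentioned in the abstract) can persist even when the payoff entries would favor the other strategy in a well-mixed population; small-world or empirical Twitter networks could be substituted for the lattice by replacing the `neighbors` function with an adjacency-list lookup.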