Paper Title
Effective Explanations for Entity Resolution Models
Paper Authors
Paper Abstract
Entity resolution (ER) aims at matching records that refer to the same real-world entity. Although widely studied for the last 50 years, ER still represents a challenging data management problem, and several recent works have started to investigate the opportunity of applying deep learning (DL) techniques to solve it. In this paper, we study the fundamental problem of explainability of DL solutions for ER. Understanding the matching predictions of an ER solution is indeed crucial to assess the trustworthiness of the DL model and to discover its biases. We treat the DL model as a black-box classifier and, whereas previous approaches to explaining DL predictions are agnostic to the classification task, we propose the CERTA approach, which is aware of the semantics of the ER problem. Our approach produces both saliency explanations, which associate each attribute with a saliency score, and counterfactual explanations, which provide examples of values that can flip the prediction. CERTA builds on a probabilistic framework that computes the explanations by evaluating the outcomes produced by perturbed copies of the input records. We experimentally evaluate CERTA's explanations of state-of-the-art DL-based ER solutions using publicly available datasets, and demonstrate the effectiveness of CERTA over recently proposed methods for this problem.
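To make the two explanation types concrete, the following is a minimal perturbation-based sketch in the spirit of what the abstract describes: a black-box matcher is queried on perturbed copies of a record, attribute saliency is estimated as the fraction of perturbations that flip the prediction, and a counterfactual is the first single-attribute substitution that flips it. The toy rule-based `model`, the record schema, and the flip-rate definition of saliency are illustrative assumptions, not the actual CERTA algorithm or its probabilistic framework.

```python
import random

def model(left, right):
    """Hypothetical black-box ER matcher standing in for a DL model:
    predicts a match (1) when at least two attributes agree exactly."""
    agreeing = sum(left[a] == right[a] for a in left)
    return 1 if agreeing >= 2 else 0

def saliency(left, right, donors, trials=50, seed=0):
    """Estimate per-attribute saliency: how often does replacing that
    attribute's value with one drawn from donor records flip the prediction?"""
    rng = random.Random(seed)
    base = model(left, right)
    scores = {}
    for attr in left:
        flips = 0
        for _ in range(trials):
            perturbed = dict(left)
            perturbed[attr] = rng.choice(donors)[attr]
            if model(perturbed, right) != base:
                flips += 1
        scores[attr] = flips / trials
    return scores

def counterfactual(left, right, donors):
    """Return the first single-attribute substitution that flips the
    prediction, as an (attribute, value) pair, or None if none is found."""
    base = model(left, right)
    for attr in left:
        for donor in donors:
            perturbed = dict(left)
            perturbed[attr] = donor[attr]
            if model(perturbed, right) != base:
                return attr, perturbed[attr]
    return None
```

On a matching pair of product records, an attribute whose perturbation never changes the outcome gets saliency 0, while one that reliably breaks the match gets saliency 1; the counterfactual exhibits a concrete value that would flip the decision, which is the kind of example-based explanation the abstract refers to.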