Paper Title

Inter-model Interpretability: Self-supervised Models as a Case Study

Paper Authors

Ahmad Mustapha, Wael Khreich, Wes Masri

Paper Abstract


Since the early days of machine learning, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric number does not fully capture the similarities and differences between models, especially in the computer vision domain. A model with high accuracy on one dataset might yield lower accuracy on another, without offering any further insight. To address this problem, we build on a recent interpretability technique called Dissect to introduce inter-model interpretability, which determines how models relate to or complement each other based on the visual concepts they have learned (such as objects and materials). Toward this goal, we project 13 top-performing self-supervised models into a Learned Concepts Embedding (LCE) space that reveals proximities among models from the perspective of learned concepts. We further cross-referenced this information with the performance of these models on four computer vision tasks and 15 datasets. The experiment allowed us to categorize the models into three groups and revealed, for the first time, the types of visual concepts that different tasks require. This is a step toward designing cross-task learning algorithms.
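The abstract describes projecting models into a Learned Concepts Embedding (LCE) space built from dissection-style concept detections, but gives no implementation details. The snippet below is only a minimal sketch of one way such a proximity computation could look, assuming each model is summarized by a vector of concept-detection counts; the model names, concept labels, and counts are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical concept-count vectors: one row per self-supervised model,
# one column per visual concept (objects, materials, ...) detected by a
# dissection-style probe. All names and values are illustrative only.
models = ["SimCLR", "MoCo", "BYOL"]          # hypothetical subset of the 13 models
concepts = ["dog", "wheel", "grass", "wood", "fabric"]

# counts[i, j] = number of units in model i that detect concept j (made up)
counts = np.array([
    [12, 3, 8, 1, 2],
    [10, 5, 7, 2, 1],
    [ 4, 9, 2, 6, 5],
], dtype=float)

# Normalize each model's concept profile so proximity reflects the
# distribution of learned concepts rather than raw unit counts.
profiles = counts / counts.sum(axis=1, keepdims=True)

# Pairwise cosine similarity as one possible notion of "proximity"
# in a learned-concepts embedding space.
unit = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
similarity = unit @ unit.T

for i, a in enumerate(models):
    for j, b in enumerate(models):
        if j > i:
            print(f"{a} vs {b}: cosine similarity = {similarity[i, j]:.3f}")
```

Under this toy setup, models whose units fire on similar concept distributions end up close in the embedding, which is the kind of proximity structure the paper cross-references with downstream task performance.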
