Paper Title
LEEP: A New Measure to Evaluate Transferability of Learned Representations
Paper Authors
Paper Abstract
We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: given a classifier trained on a source dataset, it only requires running the target dataset through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and the H-score. Notably, when transferring from ImageNet to CIFAR100, LEEP achieves up to a 30% improvement over the best competing method in terms of correlation with actual transfer accuracy.
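To make the "single pass through the source classifier" claim concrete, here is a minimal NumPy sketch of the LEEP computation as defined in the paper: from the source model's predicted label distributions on the target data, form the empirical joint over (target label, source label), derive the empirical conditional, and average the log of the resulting expected empirical prediction. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def leep(source_probs, target_labels):
    """LEEP score from one forward pass of the source classifier.

    source_probs:  (n, |Z|) array; row i is the source model's predicted
                   distribution over source labels Z for target example i.
    target_labels: (n,) integer array of target labels in [0, |Y|).
    Higher (closer to 0) means better expected transferability.
    """
    n, num_source = source_probs.shape
    num_target = int(target_labels.max()) + 1

    # Empirical joint distribution P_hat(y, z) over target and source labels.
    joint = np.zeros((num_target, num_source))
    for theta_x, y in zip(source_probs, target_labels):
        joint[y] += theta_x / n

    # Empirical conditional P_hat(y | z) = P_hat(y, z) / P_hat(z).
    cond = joint / joint.sum(axis=0, keepdims=True)

    # Expected empirical prediction for each example:
    # sum_z P_hat(y_i | z) * theta(x_i)_z, then average the logs.
    eep = (cond[target_labels] * source_probs).sum(axis=1)
    return float(np.log(eep).mean())
```

Note that LEEP is at most 0 (attained when the expected empirical prediction assigns probability 1 to every true target label), so scores are compared on a negative scale.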