Paper Title
Do Different Deep Metric Learning Losses Lead to Similar Learned Features?
Paper Authors
Paper Abstract
Recent studies have shown that many deep metric learning loss functions perform very similarly under the same experimental conditions. One potential reason for this unexpected result is that all losses cause the network to focus on similar image regions or properties. In this paper, we investigate this by conducting a two-step analysis to extract and compare the learned visual features of the same model architecture trained with different loss functions: First, we compare the learned features on the pixel level by correlating saliency maps of the same input images. Second, we compare the clustering of embeddings for several image properties, e.g., object color or illumination. To provide independent control over these properties, photo-realistic 3D car renders similar to images in the Cars196 dataset are generated. In our analysis, we compare 14 pretrained models from a recent study and find that, even though all models perform similarly, different loss functions can guide the model to learn different features. We especially find differences between classification-based and ranking-based losses. Our analysis also shows that some seemingly irrelevant properties can have a significant influence on the resulting embedding. We encourage researchers from the deep metric learning community to use our methods to gain insights into the features learned by their proposed methods.
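The first analysis step, correlating saliency maps on the pixel level, can be illustrated with a minimal sketch. The helper below is a hypothetical illustration (not code from the paper): it flattens two saliency maps for the same input image into pixel vectors and computes their Pearson correlation, which is the kind of pixel-level similarity score the abstract describes.

```python
import numpy as np

def saliency_correlation(map_a: np.ndarray, map_b: np.ndarray) -> float:
    """Pearson correlation between two saliency maps of the same image.

    Hypothetical helper for illustration: maps are flattened to pixel
    vectors, mean-centered, and correlated.
    """
    a = map_a.ravel().astype(float)
    b = map_b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Toy example with synthetic "saliency maps":
rng = np.random.default_rng(0)
m1 = rng.random((8, 8))
m2 = m1 + 0.1 * rng.random((8, 8))  # a slightly perturbed copy

print(saliency_correlation(m1, m1))  # identical maps correlate at 1.0
print(saliency_correlation(m1, m2))  # similar maps correlate highly
```

A high correlation between the saliency maps of two models suggests both focus on similar image regions; averaging such scores over many images gives a model-to-model similarity measure in the spirit of the paper's first analysis step.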