Title
Pin the Memory: Learning to Generalize Semantic Segmentation
Authors
Abstract
The rise of deep neural networks has led to several breakthroughs in semantic segmentation. In spite of this, a model trained on a source domain often fails to work properly in new, challenging domains, which is directly related to the generalization capability of the model. In this paper, we present a novel memory-guided domain generalization method for semantic segmentation based on a meta-learning framework. In particular, our method abstracts the conceptual knowledge of semantic classes into a categorical memory that remains constant across domains. Building on the meta-learning concept, we repeatedly train memory-guided networks and simulate virtual tests to 1) learn how to memorize domain-agnostic and distinct information about classes and 2) offer an externally settled memory as class guidance that reduces the ambiguity of representations in the test data of an arbitrary unseen domain. To this end, we also propose memory divergence and feature cohesion losses, which encourage learning of the memory reading and update processes for category-aware domain generalization. Extensive experiments on semantic segmentation demonstrate the superior generalization capability of our method over state-of-the-art works on various benchmarks.
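To make the abstract's two central ideas more concrete, the following is a minimal sketch, not the paper's actual implementation: an attention-style read from a class-wise categorical memory, and a divergence-style penalty that keeps memory slots distinct. The function names, shapes, and the exact similarity/softmax formulation here are illustrative assumptions; the paper's read/update rules and loss definitions may differ.

```python
import numpy as np

def read_memory(features, memory):
    """Attention-style memory read (hypothetical sketch).

    features: (N, D) pixel-level features from the segmentation backbone
    memory:   (K, D) one slot per semantic class, fixed across domains
    Returns (N, D) memory-guided features: each feature retrieves a
    softmax-weighted combination of the class slots it resembles.
    """
    # cosine similarity between features and memory slots
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sim = f @ m.T                                  # (N, K)
    # softmax over the K memory slots (numerically stabilized)
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ memory                              # (N, D)

def memory_divergence_loss(memory):
    """Divergence-style penalty (hypothetical sketch): discourage
    similarity between different class slots so each slot encodes
    distinct, class-specific information."""
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    sim = m @ m.T                                  # (K, K) cosine similarities
    K = memory.shape[0]
    off_diag = sim - np.eye(K)                     # ignore self-similarity
    return float((off_diag ** 2).sum() / (K * (K - 1)))
```

A feature cohesion term would act in the opposite direction on features: pulling each pixel feature toward the memory slot of its ground-truth class, so that representations cluster around the domain-agnostic class concepts.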