Paper Title
Continual Unsupervised Domain Adaptation for Semantic Segmentation
Paper Authors
Paper Abstract
Unsupervised Domain Adaptation (UDA) for semantic segmentation has been favorably applied to real-world scenarios in which pixel-level labels are hard to obtain. Most existing UDA methods assume that all target data are introduced simultaneously, yet in the real world the data usually arrive sequentially. Moreover, Continual UDA, which deals with the more practical scenario of multiple target domains in a continual learning setting, has not been actively explored. In this light, we propose Continual UDA for semantic segmentation based on a newly designed Expanding Target-specific Memory (ETM) framework. Our novel ETM framework contains a Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting. Furthermore, the proposed Double Hinge Adversarial (DHA) loss leads the network to better overall UDA performance. Our design of the TM and the training objectives lets the semantic segmentation network adapt to the current target domain while preserving the knowledge learned on previous target domains. The model with the proposed framework outperforms other state-of-the-art models in continual learning settings on standard benchmarks such as the GTA5, SYNTHIA, CityScapes, IDD, and Cross-City datasets. The source code is available at https://github.com/joonh-kim/ETM.
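The abstract describes the ETM architecture only at a high level. The sketch below is a minimal, hypothetical rendering of the expansion idea as the abstract states it: a shared segmentation network that grows one Target-specific Memory (TM) module per target domain, with previously learned TMs frozen to alleviate catastrophic forgetting. The module names (TargetMemory, ETMSegmenter, add_target_domain), the residual-adapter form of the TM, and all shapes are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch of the Expanding Target-specific Memory (ETM) idea.
# Everything here is an illustrative assumption, not the paper's code.
from typing import Optional

import torch
import torch.nn as nn


class TargetMemory(nn.Module):
    """Hypothetical Target-specific Memory (TM): a small residual
    adapter attached to the shared segmentation network."""

    def __init__(self, channels: int):
        super().__init__()
        self.adapt = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Residual correction specific to one target domain.
        return feats + self.adapt(feats)


class ETMSegmenter(nn.Module):
    """Shared backbone and classifier plus one TM per target domain.
    The TM dict expands as target domains arrive sequentially."""

    def __init__(self, backbone: nn.Module, classifier: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.classifier = classifier
        self.memories = nn.ModuleDict()  # domain name -> TM

    def add_target_domain(self, name: str, channels: int) -> None:
        # Freeze earlier TMs so adapting to the new domain does not
        # overwrite knowledge of previous ones (catastrophic forgetting).
        for tm in self.memories.values():
            tm.requires_grad_(False)
        self.memories[name] = TargetMemory(channels)

    def forward(self, x: torch.Tensor, domain: Optional[str] = None) -> torch.Tensor:
        feats = self.backbone(x)
        if domain is not None:
            # Route features through that domain's memory module.
            feats = self.memories[domain](feats)
        return self.classifier(feats)
```

The abstract names a Double Hinge Adversarial (DHA) loss but does not give its form, and it is not reproduced here. As a generic stand-in for the adversarial alignment signal, the sketch below shows the standard hinge adversarial loss from the GAN literature; the paper's own double-hinge formulation differs and should be taken from the paper or repository.

```python
# Standard hinge adversarial loss, used here ONLY as a generic stand-in:
# the paper's Double Hinge Adversarial (DHA) loss has its own
# formulation, which is not reproduced here.
def hinge_discriminator_loss(d_source: torch.Tensor, d_target: torch.Tensor) -> torch.Tensor:
    # Discriminator: score source outputs above +1 and target outputs below -1.
    return torch.relu(1.0 - d_source).mean() + torch.relu(1.0 + d_target).mean()


def hinge_segmenter_loss(d_target: torch.Tensor) -> torch.Tensor:
    # Segmentation network: fool the discriminator on target-domain outputs.
    return -d_target.mean()
```

In this reading, adapting to each new target domain touches only the freshly added TM (plus the adversarial objective), which is what lets the network adapt to the current target domain while preserving performance on earlier ones.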