Paper Title


A Survey on Negative Transfer

Paper Authors

Wen Zhang, Lingfei Deng, Lei Zhang, Dongrui Wu

Paper Abstract


Transfer learning (TL) utilizes data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has very few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., leveraging source domain data/knowledge undesirably reduces the learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches have been proposed in the literature to handle it. However, there does not exist a systematic survey on the formulation of NT, the factors leading to NT, and the algorithms that mitigate NT. This paper fills this gap by first introducing the definition of NT and its factors, then reviewing about fifty representative approaches for overcoming NT, according to four categories: secure transfer, domain similarity estimation, distant transfer, and NT mitigation. NT in related fields, e.g., multi-task learning, lifelong learning, and adversarial attacks, is also discussed.
