Paper Title
When & How to Transfer with Transfer Learning
Paper Authors
Paper Abstract
In deep learning, transfer learning (TL) has become the de facto approach when dealing with image-related tasks. Visual features learnt for one task have been shown to be reusable for other tasks, improving performance significantly. By reusing deep representations, TL enables the use of deep models in domains with limited data availability, limited computational resources and/or limited access to human experts. Such domains include the vast majority of real-life applications. This paper conducts an experimental evaluation of TL, exploring its trade-offs with respect to performance, environmental footprint, human hours and computational requirements. Results highlight the cases where a cheap feature extraction approach is preferable, and the situations where an expensive fine-tuning effort may be worth the added cost. Finally, a set of guidelines on the use of TL is proposed.
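The two TL regimes contrasted in the abstract can be sketched on a toy problem. This is a minimal, hypothetical illustration (not the paper's experimental setup): a fixed random nonlinear projection stands in for a pretrained backbone. Feature extraction freezes the backbone and fits only a linear head, while fine-tuning updates backbone and head jointly by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pretrained backbone": a fixed nonlinear projection.
# (Hypothetical; the paper's experiments use real networks pretrained on images.)
W_backbone = rng.normal(size=(16, 8))

def backbone(x, W):
    return np.tanh(x @ W)

# Synthetic target task: predict the sign of the first input feature.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(float)

# --- Feature extraction: backbone frozen, fit only a linear head ---
feats = backbone(X, W_backbone)
head, *_ = np.linalg.lstsq(feats, y, rcond=None)
acc_fe = ((feats @ head > 0.5) == y).mean()

# --- Fine-tuning: update both backbone and head by gradient descent ---
W = W_backbone.copy()          # backbone weights are now trainable
v = np.zeros(8)                # linear head
lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W)
    err = h @ v - y            # residual of squared loss
    grad_v = h.T @ err / len(X)
    grad_W = X.T @ ((err[:, None] * v) * (1 - h**2)) / len(X)
    v -= lr * grad_v
    W -= lr * grad_W
acc_ft = ((np.tanh(X @ W) @ v > 0.5) == y).mean()

print(f"feature extraction acc: {acc_fe:.2f}, fine-tuning acc: {acc_ft:.2f}")
```

Feature extraction only solves a small linear problem, which is why it is cheap; fine-tuning backpropagates through every backbone weight, which is where the extra computational and environmental cost in the paper's trade-off comes from.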