Paper Title

Meta Learning for Natural Language Processing: A Survey

Authors

Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu

Abstract

Deep learning has been the mainstream technique in the natural language processing (NLP) field. However, these techniques require large amounts of labeled data and generalize poorly across domains. Meta-learning is an emerging area of machine learning that studies approaches to learning better learning algorithms, aiming to improve algorithms in various aspects, including data efficiency and generalizability. The efficacy of these approaches has been demonstrated on many NLP tasks, but there is no systematic survey of them in NLP, which hinders more researchers from joining the field. Our goal with this survey paper is to offer researchers pointers to relevant meta-learning works in NLP and to attract more attention from the NLP community to drive future innovation. This paper first introduces the general concepts of meta-learning and the common approaches. Then we summarize task-construction settings and applications of meta-learning to various NLP problems, and review the development of meta-learning in the NLP community.
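To make the "learning to learn" idea in the abstract concrete, below is a minimal first-order MAML-style sketch on toy linear-regression tasks. Everything here (the toy tasks, function names, and learning rates) is an illustrative assumption, not the paper's method; the survey covers many meta-learning approaches beyond this one.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # MSE loss and its gradient for the toy linear model f(x) = w * x
    pred = w * x
    return np.mean((pred - y) ** 2), np.mean(2 * (pred - y) * x)

def sample_task():
    # Illustrative task distribution: fit y = a * x for a task-specific slope a
    a = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, size=10)
    return x, a * x

w = 0.0                   # meta-parameters (a single weight here)
alpha, beta = 0.1, 0.01   # inner-loop / outer-loop learning rates
for step in range(500):
    meta_grad = 0.0
    for _ in range(5):                      # a batch of sampled tasks
        x_s, y_s = sample_task()            # support set
        _, g = loss_grad(w, x_s, y_s)
        w_adapted = w - alpha * g           # one inner-loop adaptation step
        _, g_q = loss_grad(w_adapted, x_s, y_s)
        meta_grad += g_q                    # first-order MAML approximation
    w -= beta * meta_grad / 5               # outer-loop meta-update
```

The outer loop does not optimize task performance directly; it optimizes the initialization `w` so that a single inner-loop gradient step adapts well to a new task, which is the data-efficiency benefit the abstract refers to.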
