Paper Title

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

Authors

Jack Parker-Holder, Raghu Rajan, Xingyou Song, André Biedenkapp, Yingjie Miao, Theresa Eimer, Baohe Zhang, Vu Nguyen, Roberto Calandra, Aleksandra Faust, Frank Hutter, Marius Lindauer

Abstract

The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems, while also limiting its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and it has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, showing promise in a variety of applications from RNA design to playing games such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.
