Paper Title
A Risk-Sensitive Approach to Policy Optimization
Paper Authors
Paper Abstract
Standard deep reinforcement learning (DRL) aims to maximize expected reward, considering collected experiences equally in formulating a policy. This differs from human decision-making, where gains and losses are valued differently and outlying outcomes are given increased consideration. It also fails to capitalize on opportunities to improve safety and/or performance through the incorporation of distributional context. Several approaches to distributional DRL have been investigated, with one popular strategy being to evaluate the projected distribution of returns for possible actions. We propose a more direct approach whereby risk-sensitive objectives, specified in terms of the cumulative distribution function (CDF) of the distribution of full-episode rewards, are optimized. This approach allows for outcomes to be weighed based on relative quality, can be used for both continuous and discrete action spaces, and may naturally be applied in both constrained and unconstrained settings. We show how to compute an asymptotically consistent estimate of the policy gradient for a broad class of risk-sensitive objectives via sampling, subsequently incorporating variance reduction and regularization measures to facilitate effective on-policy learning. We then demonstrate that the use of moderately "pessimistic" risk profiles, which emphasize scenarios where the agent performs poorly, leads to enhanced exploration and a continual focus on addressing deficiencies. We test the approach using different risk profiles in six OpenAI Safety Gym environments, comparing to state-of-the-art on-policy methods. Without cost constraints, we find that pessimistic risk profiles can be used to reduce cost while improving total reward accumulation. With cost constraints, they are seen to provide higher positive rewards than risk-neutral approaches at the prescribed allowable cost.
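The abstract describes estimating the policy gradient of an objective specified through the CDF of the full-episode reward distribution, using sampling. The sketch below is a minimal, hypothetical illustration of how such an estimate might be formed from a batch of sampled episodes, assuming a REINFORCE-style score-function estimator; the names (`risk_sensitive_pg_loss`, `risk_weighting`, `pessimistic_profile`) and the exponential weighting are illustrative assumptions, not the paper's exact estimator or its variance-reduction and regularization measures.

```python
import torch

def risk_sensitive_pg_loss(episode_logps, episode_returns, risk_weighting):
    """Hypothetical sketch of a CDF-based, risk-sensitive policy-gradient surrogate.

    episode_logps   : tensor [N], sum of log pi(a_t | s_t) over each sampled episode
    episode_returns : tensor [N], total (full-episode) reward of each sampled episode
    risk_weighting  : callable mapping empirical CDF values in (0, 1] to weights;
                      a "pessimistic" profile puts more weight near 0 (poor episodes).
    """
    n = episode_returns.shape[0]
    # Rank each episode's return within the batch to obtain its empirical CDF value.
    ranks = torch.argsort(torch.argsort(episode_returns)).float() + 1.0
    cdf_values = ranks / n
    # Apply the risk profile to the CDF values and normalize (crude variance control).
    weights = risk_weighting(cdf_values)
    weights = weights / weights.mean()
    # REINFORCE-style score-function surrogate: minimizing this loss performs
    # gradient ascent on the risk-weighted expected episode reward.
    return -(weights.detach() * episode_returns.detach() * episode_logps).mean()

# Illustrative "pessimistic" risk profile: emphasizes the worst-performing episodes.
pessimistic_profile = lambda u: torch.exp(-2.0 * u)
```

Weighting each episode by a decreasing function of its empirical CDF position is one simple way to encode the "pessimistic" emphasis on poorly performing episodes described in the abstract; the paper's actual objectives and estimator may differ in detail.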