Paper Title
Does DQN Learn?
Paper Authors
Paper Abstract
A primary requirement for any reinforcement learning method is that it should produce policies that improve upon the initial guess. In this work, we show that the widely used Deep Q-Network (DQN) fails to satisfy this minimal criterion -- even when it gets to see all possible states and actions infinitely often (a condition under which tabular Q-learning is guaranteed to converge to the optimal Q-value function). Our specific contributions are twofold. First, we numerically show that DQN often returns a policy that performs worse than the initial one. Second, we offer a theoretical explanation for this phenomenon in linear DQN, a simplified version of DQN that uses linear function approximation in place of neural networks while retaining the other key components such as $ε$-greedy exploration, experience replay, and a target network. Using tools from differential inclusion theory, we prove that the limit points of linear DQN correspond to fixed points of projected Bellman operators. Crucially, we show that these fixed points need not relate to optimal -- or even near-optimal -- policies, thus explaining linear DQN's sub-optimal behaviors. We also give a scenario where linear DQN always identifies the worst policy. Our work fills a longstanding gap in understanding the convergence behaviors of Q-learning with function approximation and $ε$-greedy exploration.
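To make the algorithmic setup concrete, below is a minimal sketch of a "linear DQN" agent of the kind the abstract describes: Q-values are linear in a feature map, actions are chosen $ε$-greedily, transitions are stored in a replay buffer, and bootstrapping uses a periodically synced target network. The interface (`env_reset`, `env_step`, the feature map `phi`) and all hyperparameters are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np

def linear_dqn(env_reset, env_step, phi, n_actions, d,
               episodes=500, horizon=100, gamma=0.99,
               epsilon=0.1, lr=1e-3, buffer_size=10_000,
               batch_size=32, target_update=100, seed=0):
    """Sketch of linear DQN: Q(s, a) = w[a] @ phi(s), with epsilon-greedy
    exploration, experience replay, and a target network.
    `env_reset() -> s` and `env_step(s, a) -> (s_next, r, done)` are assumed
    (hypothetical) environment hooks; `phi(s)` returns a d-dimensional feature vector."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n_actions, d))        # online weights
    w_target = w.copy()                 # target-network weights
    buffer, step = [], 0

    for _ in range(episodes):
        s = env_reset()
        for _ in range(horizon):
            # epsilon-greedy action selection using the online weights
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(w @ phi(s)))
            s_next, r, done = env_step(s, a)

            # store transition in the replay buffer (drop oldest when full)
            buffer.append((s, a, r, s_next, done))
            if len(buffer) > buffer_size:
                buffer.pop(0)

            # semi-gradient update toward the target-network bootstrap value
            if len(buffer) >= batch_size:
                for i in rng.choice(len(buffer), size=batch_size, replace=False):
                    si, ai, ri, sn, dn = buffer[i]
                    target = ri + (0.0 if dn else gamma * np.max(w_target @ phi(sn)))
                    td_error = target - w[ai] @ phi(si)
                    w[ai] += lr * td_error * phi(si)

            # periodically sync the target network with the online weights
            step += 1
            if step % target_update == 0:
                w_target = w.copy()

            s = s_next
            if done:
                break
    return w
```

The paper's claim, as summarized in the abstract, concerns exactly this kind of iteration: its limit points are fixed points of projected Bellman operators, which need not correspond to good (or even non-worst) policies.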