Paper Title
Learning to Anticipate Future with Dynamic Context Removal
Paper Authors
Paper Abstract
Anticipating future events is an essential capability for intelligent systems and embodied AI. However, compared to the traditional recognition task, the uncertainty of the future and the requirement for reasoning ability make the anticipation task very challenging and far from solved. In this field, previous methods usually focus on model architecture design, while little attention has been paid to training an anticipation model with a proper learning policy. To this end, in this work, we propose a novel training scheme called Dynamic Context Removal (DCR), which dynamically schedules the visibility of the observed future during the learning procedure. It follows a human-like curriculum learning process, i.e., gradually removing the event context to increase the anticipation difficulty until the final anticipation target is met. Our learning scheme is plug-and-play and easy to integrate with any reasoning model, including Transformer and LSTM, with advantages in both effectiveness and efficiency. In extensive experiments, the proposed method achieves state-of-the-art results on four widely used benchmarks. Our code and models are publicly released at https://github.com/AllenXuuu/DCR.
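The curriculum idea above (gradually shrinking the visible future context until only the target anticipation setting remains) can be illustrated with a minimal scheduling sketch. The linear schedule, the function name `visible_context_len`, and all parameters below are illustrative assumptions, not the paper's actual implementation; see the released code for the real scheduler.

```python
def visible_context_len(epoch: int, total_epochs: int,
                        full_len: int, target_len: int) -> int:
    """Hypothetical DCR-style schedule: linearly shrink the number of
    visible future frames from full_len down to target_len over training.

    Early epochs expose more of the future (an easier task); later epochs
    remove that context until the final anticipation target is reached.
    """
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    return round(full_len - progress * (full_len - target_len))


if __name__ == "__main__":
    # Toy run: 10 epochs, starting with 8 visible future frames,
    # ending with 0 (pure anticipation from the observed past only).
    schedule = [visible_context_len(e, 10, full_len=8, target_len=0)
                for e in range(10)]
    print(schedule)
```

A model trained this way would, at each epoch, receive only the first `visible_context_len(...)` future frames as extra context, so the task difficulty increases monotonically as training proceeds.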