Paper Title

Generating Long Videos of Dynamic Scenes

Paper Authors

Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras

Paper Abstract

We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive biases to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. To this end, we leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.
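
The abstract only outlines the two-phase training schedule (long clips at low resolution, then short clips at high resolution) without implementation detail. The sketch below is a minimal illustration of what such a schedule might look like, not the authors' implementation: the `ToyVideoGenerator`, the dummy reconstruction loss, and the specific clip lengths, resolutions, and latent dimension are all placeholder assumptions.

```python
# Illustrative two-phase training schedule: phase 1 uses long clips at low
# resolution to expose long-term dynamics, phase 2 uses short clips at high
# resolution for spatial detail. Model and loss are toy stand-ins.
import torch
import torch.nn as nn

class ToyVideoGenerator(nn.Module):
    """Placeholder generator mapping a per-frame latent sequence to video frames."""
    def __init__(self, latent_dim=64, resolution=32):
        super().__init__()
        self.resolution = resolution
        self.net = nn.Linear(latent_dim, 3 * resolution * resolution)

    def forward(self, z):                          # z: [batch, frames, latent_dim]
        b, t, _ = z.shape
        x = self.net(z)                            # [batch, frames, 3*H*W]
        return x.reshape(b, t, 3, self.resolution, self.resolution)

def train_phase(generator, num_frames, steps=100, latent_dim=64, lr=2e-4):
    """Run one training phase with a fixed clip length.
    A dummy MSE loss against zero targets stands in for the real objective."""
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    for _ in range(steps):
        z = torch.randn(4, num_frames, latent_dim)  # temporal latent sequence
        fake = generator(z)                         # [4, num_frames, 3, H, W]
        loss = (fake ** 2).mean()                   # placeholder loss
        opt.zero_grad()
        loss.backward()
        opt.step()

# Phase 1: long clips (many frames) at low resolution -> long-term consistency.
low_res_gen = ToyVideoGenerator(resolution=32)
train_phase(low_res_gen, num_frames=128)

# Phase 2: short clips (few frames) at high resolution -> fine spatial detail.
high_res_gen = ToyVideoGenerator(resolution=128)
train_phase(high_res_gen, num_frames=8)
```

In practice the two phases would share or transfer weights (e.g., a high-resolution stage built on top of the low-resolution one); the separate toy generators above are only to keep the example self-contained.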
