Paper Title

NUTA: Non-uniform Temporal Aggregation for Action Recognition

Paper Authors

Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Hao Chen, Joseph Tighe

Paper Abstract

In the world of action recognition research, one primary focus has been on how to construct and train networks to model the spatial-temporal volume of an input video. These methods typically uniformly sample a segment of an input clip (along the temporal dimension). However, not all parts of a video are equally important to determine the action in the clip. In this work, we focus instead on learning where to extract features, so as to focus on the most informative parts of the video. We propose a method called the non-uniform temporal aggregation (NUTA), which aggregates features only from informative temporal segments. We also introduce a synchronization method that allows our NUTA features to be temporally aligned with traditional uniformly sampled video features, so that both local and clip-level features can be combined. Our model has achieved state-of-the-art performance on four widely used large-scale action-recognition datasets (Kinetics400, Kinetics700, Something-something V2 and Charades). In addition, we have created a visualization to illustrate how the proposed NUTA method selects only the most relevant parts of a video clip.
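
The abstract does not spell out how "informative temporal segments" are scored, so the snippet below is only a minimal sketch of the general idea: replacing uniform temporal average pooling with an attention-weighted pool that emphasizes the most informative time steps. It is not the authors' NUTA module; the class name TemporalAttentionPool, the tensor shapes, and the linear scorer are illustrative assumptions.

```python
# Minimal, hypothetical sketch of non-uniform temporal aggregation as
# attention-weighted pooling over per-time-step features. This is NOT the
# paper's actual NUTA module; it only contrasts with uniform average pooling.
import torch
import torch.nn as nn


class TemporalAttentionPool(nn.Module):
    """Scores each temporal position and pools features with those weights."""

    def __init__(self, channels: int):
        super().__init__()
        # One importance score per time step (illustrative choice).
        self.scorer = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) features from a video backbone
        scores = self.scorer(x)                 # (batch, time, 1)
        weights = torch.softmax(scores, dim=1)  # emphasize informative steps
        return (weights * x).sum(dim=1)         # (batch, channels) clip feature


if __name__ == "__main__":
    feats = torch.randn(2, 8, 256)   # 2 clips, 8 temporal positions, 256-d features
    pooled = TemporalAttentionPool(256)(feats)
    print(pooled.shape)              # torch.Size([2, 256])
```

In the paper, such non-uniformly aggregated features are additionally synchronized with the uniformly sampled backbone features so that local and clip-level information can be combined; that alignment step is omitted from this sketch.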
