Paper Title
DropKey
Paper Authors
Paper Abstract
In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of Vision Transformers, which is important yet surprisingly overlooked by prior works. In particular, we conduct research on three core questions. First, what to drop in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme preserves both the regularization and probability features of attention weights, alleviating overfitting to specific patterns and encouraging the model to globally capture vital information. Second, how to schedule the drop ratio across consecutive layers? In contrast to using a constant drop ratio for all layers, we present a new decreasing schedule that gradually lowers the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule avoids overfitting to low-level features and missing high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation, as in CNNs, necessary? We try a patch-based block version of the dropout operation and find that this trick, useful for CNNs, is not essential for ViTs. Given the exploration of the above three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T and VOLO, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection, and human body shape recovery.
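The two core ideas of the abstract can be sketched in code: masking randomly chosen Keys *before* the softmax (so each row of the attention matrix remains a valid probability distribution), and linearly decreasing the drop ratio across layers. The following is a minimal single-head NumPy sketch for illustration, not the authors' implementation; the function names and the linear form of the schedule are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_dropkey(q, k, v, drop_ratio=0.1, train=True, rng=None):
    """Single-head scaled dot-product attention with DropKey-style masking.

    Dropping a Key is realized by setting its attention logits to a large
    negative value *before* softmax, so every row of the attention matrix
    still sums to 1 (the "probability feature" mentioned in the abstract).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)              # (n_q, n_k) logits
    if train and drop_ratio > 0:
        mask = rng.random(scores.shape) < drop_ratio
        scores = np.where(mask, -1e9, scores)  # mask Keys before softmax
    attn = softmax(scores, axis=-1)            # rows remain distributions
    return attn @ v, attn

def dropkey_schedule(base_ratio, num_layers):
    """Hypothetical linearly decreasing drop-ratio schedule: base_ratio at
    the first layer, decaying to 0 at the last layer."""
    return [base_ratio * (1 - i / max(num_layers - 1, 1))
            for i in range(num_layers)]
```

Note that, unlike dropping attention weights after softmax, the masked rows here need no rescaling: the softmax renormalizes over the surviving Keys automatically.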