Paper Title
An Investigation into the Stochasticity of Batch Whitening
Paper Authors
Paper Abstract
Batch Normalization (BN) is extensively employed in various network architectures by performing standardization within mini-batches. A full understanding of this process has been a central goal of the deep learning community. Unlike existing works, which usually analyze only the standardization operation, this paper investigates the more general Batch Whitening (BW). Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and in training Generative Adversarial Networks (GANs). We attribute this phenomenon to the stochasticity that BW introduces. We quantitatively investigate the stochasticity of different whitening transformations and show that it correlates well with the optimization behavior during training. We also investigate how the stochasticity relates to the estimation of population statistics during inference. Based on our analysis, we provide a framework for designing and comparing BW algorithms in different scenarios. Our proposed BW algorithm improves residual networks by a significant margin on ImageNet classification. Furthermore, we show that the stochasticity of BW can improve GAN performance, albeit at the cost of training stability.
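To make the distinction concrete, the following is a minimal NumPy sketch (not the paper's implementation) contrasting BN-style standardization, which scales each dimension independently, with ZCA-based batch whitening, which additionally decorrelates the dimensions so the mini-batch covariance becomes (approximately) the identity. The function names and the `eps` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def batch_standardize(x, eps=1e-5):
    """BN-style standardization: per-dimension zero mean, unit variance."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def batch_whiten_zca(x, eps=1e-5):
    """ZCA batch whitening: decorrelate dimensions via the mini-batch covariance."""
    xc = x - x.mean(axis=0)
    cov = xc.T @ xc / x.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    # Whitening matrix Sigma^{-1/2}; eps guards against tiny eigenvalues.
    w = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return xc @ w

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))
x[:, 1] += 2 * x[:, 0]  # introduce correlation between dimensions 0 and 1

xs = batch_standardize(x)   # unit variance, but dims 0 and 1 stay correlated
xw = batch_whiten_zca(x)    # mini-batch covariance of xw is (approx.) identity
cov_w = xw.T @ xw / x.shape[0]
print(np.allclose(cov_w, np.eye(8), atol=1e-3))
```

Note that the whitening matrix here depends on the eigendecomposition of the mini-batch covariance; other transformations (e.g. PCA whitening or Cholesky-based whitening) yield the same conditioning but different per-sample transformations, which is the source of the differing stochasticity the abstract refers to.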