Paper Title
Equivariant Transduction through Invariant Alignment
Paper Authors
Paper Abstract
The ability to generalize compositionally is key to understanding the potentially infinite number of sentences that can be constructed in a human language from only a finite number of words. Investigating whether NLP models possess this ability has been a topic of interest: SCAN (Lake and Baroni, 2018) is one task specifically proposed to test for this property. Previous work has achieved impressive empirical results using a group-equivariant neural network that naturally encodes a useful inductive bias for SCAN (Gordon et al., 2020). Inspired by this, we introduce a novel group-equivariant architecture that incorporates a group-invariant hard alignment mechanism. We find that our network's structure allows it to develop stronger equivariance properties than existing group-equivariant approaches. We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task. Our results suggest that integrating group-equivariance into a variety of neural architectures is a potentially fruitful avenue of research, and demonstrate the value of careful analysis of the theoretical properties of such architectures.
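The abstract hinges on the distinction between group *equivariance* and group *invariance*. As a generic toy illustration (not the paper's architecture, and using position permutations rather than the SCAN-specific group actions of Gordon et al.): an equivariant map f satisfies f(g·x) = g·f(x) for every group element g, while an invariant map h satisfies h(g·x) = h(x).

```python
# Toy sketch of the equivariance/invariance definitions, using the
# symmetric group acting on sequence positions. This is an illustrative
# assumption, not the group action used in the paper.

def apply_perm(perm, seq):
    """Group action: permute the positions of a sequence."""
    return [seq[i] for i in perm]

def equivariant_map(seq):
    """A position-wise transform commutes with permutations: f(g.x) == g.f(x)."""
    return [x * 2 for x in seq]

def invariant_map(seq):
    """An order-insensitive summary is unchanged by permutations: h(g.x) == h(x)."""
    return sum(seq)

x = [1, 2, 3, 4]
g = [2, 0, 3, 1]  # one permutation of four positions

# Equivariance: transforming then acting equals acting then transforming.
assert equivariant_map(apply_perm(g, x)) == apply_perm(g, equivariant_map(x))

# Invariance: the group action does not change the output at all.
assert invariant_map(apply_perm(g, x)) == invariant_map(x)
```

In the paper's setting, an invariant alignment mechanism plays the role of `invariant_map` (its output is unaffected by the group action on the input), which is what allows the surrounding network to remain equivariant overall.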