Title

Equivariance with Learned Canonicalization Functions

Authors

Sékou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, Siamak Ravanbakhsh

Abstract

Symmetry-based neural networks often constrain the architecture in order to achieve invariance or equivariance to a group of transformations. In this paper, we propose an alternative that avoids this architectural constraint by learning to produce canonical representations of the data. These canonicalization functions can readily be plugged into non-equivariant backbone architectures. We offer explicit ways to implement them for some groups of interest. We show that this approach enjoys universality while providing interpretable insights. Our main hypothesis, supported by our empirical results, is that learning a small neural network to perform canonicalization is better than using predefined heuristics. Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks, including image classification, $N$-body dynamics prediction, point cloud classification and part segmentation, while being faster across the board.
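
To make the recipe concrete, here is a minimal, hypothetical PyTorch sketch of the idea for rotation-invariant point cloud classification: a small network predicts a rotation $g(x)$, the cloud is moved to the canonical pose $g(x)^{-1}x$, and an arbitrary non-equivariant backbone processes the result. The class `CanonicalizedModel` and its methods are illustrative names, not the authors' implementation; in particular, the paper requires the canonicalization function itself to be equivariant (so that $g(Rx) = R\,g(x)$), a property this toy MLP does not enforce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CanonicalizedModel(nn.Module):
    """Hypothetical sketch: wrap a non-equivariant backbone with a learned
    canonicalization step for 3D rotations. A small network predicts a
    group element g(x); the input is mapped to a canonical pose with
    g(x)^{-1} before the backbone sees it."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any non-equivariant network
        # Tiny canonicalization network: per-point MLP, pooled over the
        # cloud, predicting two 3-vectors to orthonormalize into a rotation.
        self.canon = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 6))

    def predict_rotation(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_points, 3); mean-pooling makes the prediction
        # permutation-invariant over points.
        feat = self.canon(x).mean(dim=1)                      # (batch, 6)
        a, b = feat[:, :3], feat[:, 3:]
        # Gram-Schmidt: build an orthonormal, right-handed frame in SO(3).
        u = F.normalize(a, dim=-1)
        v = F.normalize(b - (b * u).sum(-1, keepdim=True) * u, dim=-1)
        w = torch.cross(u, v, dim=-1)
        return torch.stack([u, v, w], dim=-1)                 # (batch, 3, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.predict_rotation(x)
        # Points are rows, so right-multiplying by g applies g^{-1} = g^T
        # to every point, i.e. moves the cloud to its canonical pose.
        x_canonical = torch.bmm(x, g)
        # An invariant task (e.g. classification) uses the backbone output
        # directly; an equivariant task would map the output back with g.
        return self.backbone(x_canonical)
```

The backbone is a free choice here, which is the point of the approach: the symmetry handling lives entirely in the (small, learned) canonicalization function rather than in architectural constraints on the main network.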
