Paper Title

Learning Invariances with Generalised Input-Convex Neural Networks

Paper Authors

Vitali Nesterov, Fabricio Arend Torres, Monika Nagy-Huber, Maxim Samarin, Volker Roth

Paper Abstract


Considering smooth mappings from input vectors to continuous targets, our goal is to characterise subspaces of the input domain that are invariant under such mappings. Thus, we want to characterise manifolds implicitly defined by level sets. Specifically, this characterisation should be of a global parametric form, which is especially useful for different informed data exploration tasks, such as building grid-based approximations, sampling points along the level curves, or finding trajectories on the manifold. However, global parameterisations can only exist if the level sets are connected. For this purpose, we introduce a novel and flexible class of neural networks that generalise input-convex networks. These networks represent functions that are guaranteed to have connected level sets forming smooth manifolds on the input space. We further show that global parameterisations of these level sets can always be found efficiently. Lastly, we demonstrate that our novel technique for characterising invariances is a powerful generative data exploration tool in real-world applications, such as computational chemistry.
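The networks the abstract generalises are input-convex neural networks (ICNNs), whose level sets are connected because every sublevel set of a convex function is convex. As a rough illustration of the underlying construction — not the paper's generalised architecture — here is a minimal pure-Python two-layer ICNN in the style of Amos et al.: hidden-to-hidden weights are constrained non-negative and activations are convex and non-decreasing, which together guarantee convexity in the input. All weights below are illustrative placeholders.

```python
import random

def relu(v):
    # Convex, non-decreasing activation, applied elementwise.
    return [max(0.0, a) for a in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

# Illustrative layer weights. W_z2 maps the previous hidden state and
# must be elementwise non-negative; W_x1 / W_x2 map the raw input and
# are unconstrained; b1 / b2 are biases.
W_x1 = [[1.0, -0.5], [0.3, 0.8]]
b1   = [0.1, -0.2]
W_z2 = [[0.7, 0.2]]          # >= 0, preserves convexity in x
W_x2 = [[-0.4, 0.6]]
b2   = [0.0]

def icnn(x):
    # z1 is convex in x (affine map followed by convex non-decreasing relu);
    # z2 is a non-negative combination of convex terms plus an affine term,
    # hence convex in x as well.
    z1 = relu(vadd(matvec(W_x1, x), b1))
    z2 = vadd(vadd(matvec(W_z2, z1), matvec(W_x2, x)), b2)
    return z2[0]

# Numerical sanity check of midpoint convexity:
# f((x + y) / 2) <= (f(x) + f(y)) / 2 for convex f.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(2)]
    y = [random.uniform(-5, 5) for _ in range(2)]
    mid = [(a + b) / 2 for a, b in zip(x, y)]
    assert icnn(mid) <= (icnn(x) + icnn(y)) / 2 + 1e-9
print("midpoint convexity holds on 1000 random pairs")
```

Because the function is convex, every level set `{x : icnn(x) = c}` bounds a convex region and is therefore connected — the property the paper preserves while relaxing convexity itself.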
