Paper Title
A case for new neural network smoothness constraints
Paper Authors
Paper Abstract
How sensitive should machine learning models be to input changes? We tackle the question of model smoothness and show that it is a useful inductive bias which aids generalization, adversarial robustness, generative modeling, and reinforcement learning. We explore current methods of imposing smoothness constraints and observe that they lack the flexibility to adapt to new tasks, do not account for data modalities, and interact with losses, architectures, and optimization in ways that are not yet fully understood. We conclude that new advances in the field hinge on finding ways to incorporate data, tasks, and learning into our definitions of smoothness.
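For illustration, below is a minimal sketch of one common way to impose a smoothness constraint: a gradient penalty that keeps the norm of the model's input gradient close to a target value (in the style of WGAN-GP regularizers). The two-layer model, the weight `lam`, and the target norm are hypothetical choices for this example, not details from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's method): a gradient
# penalty regularizer that encourages smoothness by keeping the norm of the
# model's input gradient close to a target value.
import jax
import jax.numpy as jnp


def model(params, x):
    # Hypothetical two-layer MLP returning a scalar; any differentiable model works.
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return jnp.sum(h @ params["w2"] + params["b2"])


def gradient_penalty(params, x, target_norm=1.0):
    # Penalize deviation of the input-gradient norm from target_norm.
    grad_x = jax.grad(model, argnums=1)(params, x)
    return (jnp.linalg.norm(grad_x) - target_norm) ** 2


def loss(params, x, y, lam=10.0):
    # Task loss plus the smoothness regularizer; lam is a hypothetical weight.
    return (model(params, x) - y) ** 2 + lam * gradient_penalty(params, x)
```

Other approaches mentioned in this line of work, such as spectral normalization or Lipschitz-constrained architectures, constrain smoothness through the weights or the architecture rather than through an added loss term.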