Paper Title
Certifying Incremental Quadratic Constraints for Neural Networks via Convex Optimization
Paper Authors
Paper Abstract
Abstracting neural networks by the constraints they impose on their inputs and outputs is very useful in the analysis of neural network classifiers and in deriving optimization-based algorithms for certifying the stability and robustness of feedback systems involving neural networks. In this paper, we propose a convex program, in the form of a Linear Matrix Inequality (LMI), to certify incremental quadratic constraints on the map of a neural network over a region of interest. These certificates can capture several useful properties, such as (local) Lipschitz continuity, one-sided Lipschitz continuity, invertibility, and contraction. We illustrate the utility of our approach in two different settings. First, we develop a semidefinite program to compute guaranteed and sharp upper bounds on the local Lipschitz constant of neural networks, and we illustrate the results on random networks as well as networks trained on MNIST. Second, we consider a linear time-invariant system in feedback with an approximate model predictive controller parameterized by a neural network. We then turn the stability analysis into a semidefinite feasibility program and estimate an ellipsoidal invariant set for the closed-loop system.
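As background, incremental quadratic constraints of the kind being certified take the following standard form (a sketch in notation common in this literature; the symbols f, S, and Q may differ from the paper's exact conventions): a map f satisfies the incremental quadratic constraint defined by a symmetric matrix Q on a region S if

\begin{bmatrix} x_1 - x_2 \\ f(x_1) - f(x_2) \end{bmatrix}^{\top} Q \begin{bmatrix} x_1 - x_2 \\ f(x_1) - f(x_2) \end{bmatrix} \ \ge\ 0 \quad \text{for all } x_1, x_2 \in S.

For example, choosing Q = \begin{bmatrix} L^2 I & 0 \\ 0 & -I \end{bmatrix} recovers Lipschitz continuity with constant L, since the inequality then reads \|f(x_1) - f(x_2)\|^2 \le L^2 \|x_1 - x_2\|^2; other choices of Q capture one-sided Lipschitz continuity and contraction.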
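To make the semidefinite-programming step concrete, below is a minimal sketch of an SDP that certifies a (global) Lipschitz upper bound for a one-hidden-layer ReLU network, in the spirit of LipSDP-style multiplier certificates. It is an illustration under stated assumptions, not the paper's exact local formulation; the network sizes, random weights, and cvxpy modeling choices are hypothetical.

import numpy as np
import cvxpy as cp

# Hypothetical one-hidden-layer ReLU network f(x) = W1 @ relu(W0 @ x + b0);
# the bias b0 does not affect the Lipschitz bound.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 2
W0 = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)
W1 = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)

# Decision variables: rho stands in for L^2, and a nonnegative diagonal
# multiplier T encodes that ReLU is slope-restricted on [0, 1].
rho = cp.Variable(nonneg=True)
t = cp.Variable(n_hid, nonneg=True)
T = cp.diag(t)

# Block LMI (LipSDP-style, single hidden layer, slopes in [0, 1]):
#   [ -rho * I      W0^T T         ]
#   [  T W0        -2 T + W1^T W1  ]  <= 0   (negative semidefinite)
M = cp.bmat([
    [-rho * np.eye(n_in), W0.T @ T],
    [T @ W0, -2 * T + W1.T @ W1],
])
# M is symmetric by construction (T is diagonal); symmetrize explicitly
# so cvxpy accepts the semidefinite constraint without complaint.
M_sym = 0.5 * (M + M.T)

prob = cp.Problem(cp.Minimize(rho), [M_sym << 0])
prob.solve(solver=cp.SCS)

# sqrt(rho) is then a certified upper bound on the l2 Lipschitz constant.
print(f"certified Lipschitz upper bound: {np.sqrt(rho.value):.4f}")

Minimizing rho subject to the LMI yields the tightest bound this multiplier class can certify; per the abstract, the paper's contribution is, roughly, to localize such certificates to a region of interest and to extend them to general incremental quadratic constraints.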