Paper Title
Efficient Global Robustness Certification of Neural Networks via Interleaving Twin-Network Encoding
Paper Authors
Paper Abstract
The robustness of deep neural networks has received significant interest recently, especially for deployment in safety-critical systems, as it is important to analyze how sensitive the model output is to input perturbations. While most previous works focus on the local robustness property around an input sample, studies of the global robustness property, which bounds the maximum output change under perturbations over the entire input space, are still lacking. In this work, we formulate global robustness certification for neural networks with ReLU activation functions as a mixed-integer linear programming (MILP) problem and present an efficient approach to solving it. Our approach includes a novel interleaving twin-network encoding scheme, in which two copies of the neural network are encoded side by side with extra interleaving dependencies added between them, and an over-approximation algorithm that leverages relaxation and refinement techniques to reduce complexity. Experiments demonstrate the timing efficiency of our approach compared with previous global robustness certification methods, as well as the tightness of our over-approximation. A case study on closed-loop control safety verification demonstrates the importance and practicality of our approach for certifying the global robustness of neural networks in safety-critical systems.
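For concreteness, the global robustness property described in the abstract can be stated as follows (a sketch using the $\ell_\infty$-norm; the paper's exact norm and notation are assumptions here): a network $f$ over input space $X$ is globally robust with output bound $\delta$ under perturbation budget $\epsilon$ if
\[
\forall\, x, x' \in X:\quad \|x - x'\|_\infty \le \epsilon \;\Longrightarrow\; \|f(x) - f(x')\|_\infty \le \delta .
\]
Certification then amounts to bounding the optimal value of
\[
\delta^* \;=\; \max_{\substack{x,\, x' \in X \\ \|x - x'\|_\infty \le \epsilon}} \|f(x) - f(x')\|_\infty ,
\]
an optimization over paired inputs that naturally involves two copies of the network evaluated on $x$ and $x'$, which is what the twin-network encoding captures.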
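The MILP formulation itself typically rests on the standard big-M encoding of each ReLU neuron, sketched below as a generic illustration; the interleaving dependencies that the paper adds between the two network copies are additional constraints not reproduced here. Given pre-activation bounds $l \le z \le u$ with $l < 0 < u$, the output $y = \max(0, z)$ is encoded exactly with a binary variable $a \in \{0, 1\}$:
\[
y \ge z, \qquad y \ge 0, \qquad y \le z - l\,(1 - a), \qquad y \le u\, a .
\]
Relaxing $a$ to the continuous interval $[0, 1]$ yields the linear over-approximation commonly used to reduce complexity, and restoring integrality for selected neurons is one standard form of refinement, consistent with the relaxation-and-refinement strategy mentioned in the abstract.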