Paper Title

Understanding Local Robustness of Deep Neural Networks under Natural Variations

Paper Authors

Ziyuan Zhong, Yuchi Tian, Baishakhi Ray

Paper Abstract

Deep Neural Networks (DNNs) are being deployed in a wide range of settings today, from safety-critical applications like autonomous driving to commercial applications involving image classification. However, recent research has shown that DNNs can be brittle to even slight variations of the input data. Therefore, rigorous testing of DNNs has gained widespread attention. While DNN robustness under norm-bounded perturbations has received significant attention over the past few years, our knowledge is still limited when it comes to natural variants of the input images. These natural variants, e.g., a rotated or a rainy version of the original input, are especially concerning as they can occur naturally in the field without any active adversary and may lead to undesirable consequences. It is therefore important to identify the inputs whose small variations may lead to erroneous DNN behaviors. The very few studies that have looked at DNN robustness under natural variants, however, focus on estimating the overall robustness of DNNs across all the test data rather than localizing such error-producing points. This work aims to bridge that gap. To this end, we study the local per-input robustness properties of DNNs and leverage those properties to build a white-box tool (DeepRobust-W) and a black-box tool (DeepRobust-B) that automatically identify non-robust points. Our evaluation of these methods on three DNN models spanning three widely used image classification datasets shows that they are effective in flagging points of poor robustness. In particular, DeepRobust-W and DeepRobust-B achieve F1 scores of up to 91.4% and 99.1%, respectively. We further show that DeepRobust-W can be applied to a regression problem in another domain: our evaluation on three self-driving car models demonstrates that it effectively identifies points of poor robustness, with an F1 score of up to 78.9%.
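As a rough, hypothetical illustration of the black-box setting the abstract describes, the PyTorch sketch below scores the local robustness of a single input by querying a classifier on naturally varied copies (small rotations) and flagging inputs whose predictions flip. All names, the rotation grid, and the 0.9 threshold are assumptions made for illustration; this is not the authors' actual DeepRobust-B algorithm.

```python
# A minimal sketch: probe one input with natural variants (small
# rotations here) and measure prediction consistency. Function names,
# the angle grid, and the threshold are illustrative assumptions,
# not the paper's implementation.
import torch
import torchvision.transforms.functional as TF

def local_robustness_score(model, image, angles=tuple(range(-15, 16, 5))):
    """Fraction of rotated variants classified the same as the original
    image (1.0 = fully consistent, i.e. locally robust to rotation)."""
    model.eval()
    with torch.no_grad():
        base_pred = model(image.unsqueeze(0)).argmax(dim=1)  # image: (C, H, W)
        agree = 0
        for angle in angles:
            variant = TF.rotate(image, float(angle))  # a natural variant
            pred = model(variant.unsqueeze(0)).argmax(dim=1)
            agree += int((pred == base_pred).item())
    return agree / len(angles)

def flag_non_robust_points(model, dataset, threshold=0.9):
    """Indices of inputs whose robustness score falls below a
    (hypothetical) threshold -- candidate error-producing points."""
    return [i for i, (image, _label) in enumerate(dataset)
            if local_robustness_score(model, image) < threshold]
```

A white-box tool such as DeepRobust-W would, by definition, also have access to the model's internals, whereas this sketch relies only on the model's output labels.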
