Paper Title


Adversarial Examples in Deep Learning for Multivariate Time Series Regression

Authors

Gautam Raj Mode, Khaza Anuarul Hoque

Abstract


Multivariate time series (MTS) regression tasks are common in many real-world data mining applications, including finance, cybersecurity, energy, healthcare, prognostics, and many others. Due to the tremendous success of deep learning (DL) algorithms in various domains, including image recognition and computer vision, researchers have started adopting these techniques to solve MTS data mining problems, many of which target safety-critical and cost-critical applications. Unfortunately, DL algorithms are known for their susceptibility to adversarial examples, which makes DL regression models for MTS forecasting vulnerable to such attacks as well. To the best of our knowledge, no previous work has explored the vulnerability of DL MTS regression models to adversarial time series examples, which is an important step, specifically when the forecasts from such models are used in safety-critical and cost-critical applications. In this work, we leverage existing adversarial attack generation techniques from the image classification domain and craft adversarial multivariate time series examples for three state-of-the-art deep learning regression models, specifically the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). We evaluate our study using the Google stock and household power consumption datasets. The obtained results show that all the evaluated DL regression models are vulnerable to adversarial attacks, that the attacks are transferable across models, and that they can thus lead to catastrophic consequences in safety-critical and cost-critical domains, such as energy and finance.
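The abstract describes transplanting gradient-based adversarial attack techniques from image classification to MTS regression. As a minimal sketch of the general idea (not the paper's actual method), the FGSM-style perturbation can be illustrated on a toy linear regression surrogate, where the gradient of the squared-error loss with respect to the input is computed in closed form; the weights, window, and target below are invented for illustration:

```python
# Hedged sketch of an FGSM-style adversarial perturbation for regression:
#   x_adv = x + epsilon * sign(grad_x loss(f(x), y))
# The linear model and all numbers are illustrative assumptions, not the
# paper's CNN/LSTM/GRU models or datasets.

def predict(w, x):
    """Toy linear regression forecast: dot(w, x)."""
    return sum(wi * xi for wi, xi in zip(w, x))

def squared_error(pred, target):
    return (pred - target) ** 2

def input_gradient(w, x, target):
    """Gradient of (w.x - y)^2 w.r.t. x is 2 * (w.x - y) * w."""
    residual = predict(w, x) - target
    return [2.0 * residual * wi for wi in w]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_attack(w, x, target, epsilon):
    """Perturb each input feature by epsilon in the loss-increasing direction."""
    grad = input_gradient(w, x, target)
    return [xi + epsilon * sign(g) for xi, g in zip(x, grad)]

if __name__ == "__main__":
    w = [0.5, -0.2, 0.8]      # assumed trained weights
    x = [1.0, 2.0, 0.5]       # one window of a multivariate series (3 sensors)
    y = 0.3                   # ground-truth target for this window
    x_adv = fgsm_attack(w, x, y, epsilon=0.05)
    print(squared_error(predict(w, x), y))      # clean loss
    print(squared_error(predict(w, x_adv), y))  # adversarial loss (larger)
```

For the deep models evaluated in the paper, the closed-form gradient would be replaced by automatic differentiation through the network, but the perturbation rule is the same: a small, sign-aligned step in the input space that maximally degrades the forecast.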
