Paper Title
TransLog: A Unified Transformer-based Framework for Log Anomaly Detection
Paper Authors
Paper Abstract
Log anomaly detection is a key component of artificial intelligence for IT operations (AIOps). Given log data from varied domains, retraining the whole network for every unknown domain is inefficient in real industrial scenarios, especially for low-resource domains. However, previous deep models focused merely on extracting the semantics of log sequences within a single domain, leading to poor generalization on multi-domain logs. Therefore, we propose TransLog, a unified Transformer-based framework for log anomaly detection, which comprises a pretraining stage and an adapter-based tuning stage. Our model is first pretrained on the source domain to acquire shared semantic knowledge of log data. Then, we transfer the pretrained model to the target domain via adapter-based tuning. The proposed method is evaluated on three public datasets, covering one source domain and two target domains. The experimental results demonstrate that our simple yet efficient approach, with fewer trainable parameters and lower training cost on the target domain, achieves state-of-the-art performance on all three benchmarks.
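The adapter-based tuning the abstract refers to generally means freezing the pretrained Transformer weights and training only small bottleneck modules inserted into each layer, which is where the "fewer trainable parameters" claim comes from. Below is a minimal NumPy sketch of one such bottleneck adapter; the function name, dimensions, and initialization are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def bottleneck_adapter(x, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    h = np.maximum(x @ W_down, 0.0)  # (seq_len, r) low-rank bottleneck
    return x + h @ W_up              # residual connection back to (seq_len, d)

d, r = 768, 64                       # hidden size and bottleneck size (illustrative)
rng = np.random.default_rng(0)
W_down = rng.normal(0.0, 0.02, (d, r))
W_up = np.zeros((r, d))              # zero-init: adapter starts as the identity map

x = rng.normal(size=(16, d))         # a toy sequence of 16 hidden states
y = bottleneck_adapter(x, W_down, W_up)
assert np.allclose(y, x)             # at init, the frozen model's behavior is unchanged

# Only the adapter matrices would be trained; the pretrained weights stay frozen.
adapter_params = W_down.size + W_up.size
print(adapter_params)                # 2 * d * r = 98304 trainable parameters per adapter
```

The zero-initialized up-projection is a common design choice: the tuned model starts exactly at the pretrained solution, so transfer to the target domain begins from the shared semantic knowledge rather than disrupting it.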