Paper Title

R2-AD2: Detecting Anomalies by Analysing the Raw Gradient

Authors

Jan-Philipp Schulze, Philip Sperl, Ana Răduţoiu, Carla Sagebiel, Konstantin Böttinger

Abstract

Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss. Samples unlike the ones seen during training cause a different gradient distribution. Based on this intuition, we design a novel semi-supervised anomaly detection method called R2-AD2. By analysing the temporal distribution of the gradient over multiple training steps, we reliably detect point anomalies in strict semi-supervised settings. Instead of relying on domain-dependent features, we feed the raw gradient caused by the sample under test into an end-to-end recurrent neural network architecture. R2-AD2 works in a purely data-driven way and is thus readily applicable to a variety of important anomaly detection use cases.
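The core intuition can be illustrated with a toy model that is not the paper's code: after fitting a model on normal data, a sample unlike the training data induces a noticeably larger gradient, so the gradient magnitude can serve as a crude anomaly score. The model, data, and learning rate below are all hypothetical, chosen only to make the idea concrete.

```python
# Toy sketch of the gradient-as-anomaly-signal intuition (hypothetical example,
# not R2-AD2 itself): fit a 1-parameter model y = w*x on "normal" data, then
# compare the gradient magnitude induced by a normal vs. an anomalous sample.

def grad(w, x, y):
    """Gradient of the squared error (w*x - y)^2 with respect to w."""
    return 2 * (w * x - y) * x

# "Normal" data follows y = 3x; fit w by plain gradient descent.
normal_data = [(x, 3 * x) for x in (1.0, 2.0, 3.0, 4.0)]
w = 0.0
for _ in range(200):
    for x, y in normal_data:
        w -= 0.05 * grad(w, x, y)

# Gradient magnitude as a simple anomaly score.
score_normal = abs(grad(w, 2.0, 6.0))    # in-distribution sample (y = 3x)
score_anomaly = abs(grad(w, 2.0, 20.0))  # anomalous target, far from y = 3x

print(score_normal < score_anomaly)
```

R2-AD2 goes beyond this single scalar: it records the raw gradient over multiple training steps and lets a recurrent network learn the temporal pattern, rather than hand-picking a norm as the score.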
