Title
SAIBERSOC: Synthetic Attack Injection to Benchmark and Evaluate the Performance of Security Operation Centers
Authors
Abstract
In this paper, we introduce SAIBERSOC, a tool and methodology enabling security researchers and operators to evaluate the performance of deployed and operational Security Operation Centers (SOCs) (or any other security monitoring infrastructure). The methodology relies on the MITRE ATT&CK Framework to define a procedure to generate and automatically inject synthetic attacks in an operational SOC to evaluate any output metric of interest (e.g., detection accuracy, time-to-investigation, etc.). To evaluate the effectiveness of the proposed methodology, we devise an experiment with $n=124$ students playing the role of SOC analysts. The experiment relies on a real SOC infrastructure and assigns students to either a BADSOC or a GOODSOC experimental condition. Our results show that the proposed methodology is effective in identifying variations in SOC performance caused by (minimal) changes in SOC configuration. We release the SAIBERSOC tool implementation as free and open source software.
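To make the evaluation idea concrete, the following is a minimal sketch (not the actual SAIBERSOC implementation; all names and data structures here are hypothetical) of how output metrics such as detection accuracy and time-to-investigation could be computed from a set of injected synthetic attacks and the reports an SOC produced for them:

```python
# Hypothetical sketch: score an SOC's output against injected synthetic attacks.
# The dataclasses and the evaluate_soc() function are illustrative assumptions,
# not part of the SAIBERSOC tool described in the paper.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class InjectedAttack:
    attack_id: str        # identifier of the injected synthetic attack
    injected_at: float    # injection time (epoch seconds)

@dataclass
class SocReport:
    attack_id: str        # injected attack the analyst's report matches
    reported_at: float    # time the SOC reported/investigated it

def evaluate_soc(injected: List[InjectedAttack],
                 reports: List[SocReport]) -> Tuple[float, Optional[float]]:
    """Return (detection accuracy, mean time-to-investigation in seconds)."""
    reported = {r.attack_id: r.reported_at for r in reports}
    detected = [a for a in injected if a.attack_id in reported]
    accuracy = len(detected) / len(injected) if injected else 0.0
    delays = [reported[a.attack_id] - a.injected_at for a in detected]
    mean_tti = sum(delays) / len(delays) if delays else None
    return accuracy, mean_tti
```

In this sketch, an experimental condition such as BADSOC vs. GOODSOC would correspond to running the same injected attack set against two SOC configurations and comparing the returned metrics.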