Paper Title


Safe machine learning model release from Trusted Research Environments: The SACRO-ML package

Authors

Smith, Jim, Preen, Richard J., McCarthy, Andrew, Albashir, Maha, Crespi-Boixader, Alba, Mumtaz, Shahzad, Cole, Christian, Liley, James, Migenda, Jost, Rogers, Simon, Jones, Yola

Abstract


We present SACRO-ML, an integrated suite of open source Python tools to facilitate the statistical disclosure control (SDC) of machine learning (ML) models trained on confidential data prior to public release. SACRO-ML combines (i) a SafeModel package that extends commonly used ML models to provide ante-hoc SDC by assessing the vulnerability to disclosure posed by the training regime; and (ii) an Attacks package that provides post-hoc SDC by rigorously assessing the empirical disclosure risk of a model through a variety of simulated attacks after training. The SACRO-ML code and documentation are available under an MIT license at https://github.com/AI-SDC/SACRO-ML.
