Paper Title

Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Paper Authors

Omar Montasser, Steve Hanneke, Nathan Srebro

Paper Abstract

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e. the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner. We give a reduction that can robustly learn any hypothesis class $\mathcal{C}$ using any non-robust learner $\mathcal{A}$ for $\mathcal{C}$. The number of calls to $\mathcal{A}$ depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
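
To make the flavor of such a black-box reduction concrete, the following is a minimal, illustrative Python sketch and not the authors' exact procedure: the training set is inflated with every allowed perturbation, the non-robust learner $\mathcal{A}$ is invoked repeatedly on reweighted draws from the inflated data, and a majority vote over the returned predictors is output. The helpers `perturb`, `learner`, and `rounds` are hypothetical placeholders introduced here for illustration; in the paper's actual result, the number of calls to $\mathcal{A}$ grows only logarithmically with the number of allowed perturbations per example.

```python
import random
from collections import Counter

def robust_learn_via_nonrobust(sample, perturb, learner, rounds):
    """Illustrative boosting-style sketch of a black-box reduction.

    sample  : list of (x, y) labeled examples
    perturb : hypothetical helper returning the allowed perturbations of x
    learner : hypothetical black-box non-robust learner; given labeled data,
              returns a predictor h with h(x) -> label
    rounds  : number of calls made to the non-robust learner
    """
    # Inflate the data set: every perturbation inherits its source label.
    inflated = [(z, y) for (x, y) in sample for z in perturb(x)]
    weights = [1.0] * len(inflated)
    predictors = []
    for _ in range(rounds):
        # Draw a reweighted sample and call the non-robust learner on it.
        batch = random.choices(inflated, weights=weights, k=len(inflated))
        predictors.append(learner(batch))
        # Up-weight perturbed points the newest predictor still gets wrong.
        h = predictors[-1]
        for i, (z, y) in enumerate(inflated):
            if h(z) != y:
                weights[i] *= 2.0

    def majority_vote(x):
        votes = Counter(h(x) for h in predictors)
        return votes.most_common(1)[0][0]

    return majority_vote
```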
