Paper Title

PAC-Learning for Strategic Classification

Paper Authors

Ravi Sundaram, Anil Vullikanti, Haifeng Xu, Fan Yao

Paper Abstract

The study of strategic or adversarial manipulation of testing data to fool a classifier has attracted much recent attention. Most previous works have focused on two extreme situations where any testing data point either is completely adversarial or always equally prefers the positive label. In this paper, we generalize both of these through a unified framework for strategic classification, and introduce the notion of strategic VC-dimension (SVC) to capture the PAC-learnability in our general strategic setup. SVC provably generalizes the recent concept of adversarial VC-dimension (AVC) introduced by Cullina et al. arXiv:1806.01471. We instantiate our framework for the fundamental strategic linear classification problem. We fully characterize: (1) the statistical learnability of linear classifiers by pinning down its SVC; (2) its computational tractability by pinning down the complexity of the empirical risk minimization problem. Interestingly, the SVC of linear classifiers is always upper bounded by its standard VC-dimension. This characterization also strictly generalizes the AVC bound for linear classifiers in arXiv:1806.01471.
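
A schematic reading of the two quantitative claims above (a sketch assuming the standard agnostic PAC machinery carries over with SVC as the capacity measure, as the abstract indicates; not a statement lifted verbatim from the paper): writing $\mathrm{VC}$, $\mathrm{AVC}$, and $\mathrm{SVC}$ for the standard, adversarial, and strategic VC-dimensions, and $\mathcal{H}_d$ for linear classifiers on $\mathbb{R}^d$,

$$\mathrm{SVC}(\mathcal{H}_d) \;\le\; \mathrm{VC}(\mathcal{H}_d) \;=\; d+1,$$

with $\mathrm{AVC}$ recovered as the special case of $\mathrm{SVC}$ in which every test point is fully adversarial, and with PAC sample complexity in the strategic setup governed by SVC through the usual capacity-based form

$$m(\epsilon,\delta) \;=\; O\!\left(\frac{\mathrm{SVC}(\mathcal{H}) + \log(1/\delta)}{\epsilon^{2}}\right).$$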
