Paper Title
Monotone Learning
Paper Authors
Paper Abstract
The amount of training data is one of the key factors that determines the generalization capacity of learning algorithms. Intuitively, one expects the error rate to decrease as the amount of training data increases. Perhaps surprisingly, natural attempts to formalize this intuition give rise to interesting and challenging mathematical questions. For example, in their classical book on pattern recognition, Devroye, Gyorfi, and Lugosi (1996) ask whether there exists a monotone Bayes-consistent algorithm. This question remained open for over 25 years, until recently Pestov (2021) resolved it for binary classification, using an intricate construction of a monotone Bayes-consistent algorithm. We derive a general result in multiclass classification, showing that every learning algorithm A can be transformed into a monotone one with similar performance. Further, the transformation is efficient and uses only black-box oracle access to A. This demonstrates that one can provably avoid non-monotonic behaviour without compromising performance, thus answering questions asked by Devroye et al. (1996), Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021). Our transformation readily implies monotone learners in a variety of contexts: for example, it extends Pestov's result to classification tasks with an arbitrary number of labels. This is in contrast with Pestov's work, which is tailored to binary classification. In addition, we provide uniform bounds on the error of the monotone algorithm. This makes our transformation applicable in distribution-free settings. For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions posed by Viering, Mey, and Loog (2019), Viering and Loog (2021), and Mhammedi (2021).
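To make the black-box flavour of such a transformation concrete, the following is a minimal Python sketch of a generic wrapper that queries a given learner A only as an oracle and replaces its current hypothesis only when a held-out estimate suggests the new one is no worse. The class name MonotoneWrapper, the learn callback, and the validation-split rule are illustrative assumptions of this sketch, not the construction from the paper, which the abstract does not spell out.

```python
import numpy as np


class MonotoneWrapper:
    """Illustrative wrapper around a black-box learner (a sketch, not the paper's method).

    `learn(X, y)` is assumed to return a predictor h such that h(X) returns labels.
    The wrapper keeps a current hypothesis and switches to a newly trained one
    only when it is no worse on a held-out validation split, which is one simple
    way to discourage non-monotone error curves as the sample grows.
    """

    def __init__(self, learn, val_fraction=0.2, seed=0):
        self.learn = learn                      # black-box oracle access to A
        self.val_fraction = val_fraction
        self.rng = np.random.default_rng(seed)
        self.current = None                     # current hypothesis

    def update(self, X, y):
        """Feed the full sample seen so far; return the (possibly unchanged) hypothesis."""
        n = len(y)
        idx = self.rng.permutation(n)
        n_val = max(1, int(self.val_fraction * n))
        val, train = idx[:n_val], idx[n_val:]

        candidate = self.learn(X[train], y[train])
        if self.current is None:
            self.current = candidate
            return self.current

        # Empirical 0-1 error of both hypotheses on the held-out split.
        err_cand = np.mean(candidate(X[val]) != y[val])
        err_curr = np.mean(self.current(X[val]) != y[val])
        if err_cand <= err_curr:
            self.current = candidate            # switch only if not worse on validation
        return self.current

    def predict(self, X):
        return self.current(X)
```

In this sketch, the wrapper treats A purely as an oracle: it never inspects A's internals, only calls `learn` and evaluates the returned predictors, mirroring the black-box access the abstract describes.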