Paper Title
What killed the Convex Booster?
Paper Authors
Abstract
A landmark negative result of Long and Servedio established a worst-case spectacular failure of a supervised learning trio (loss, algorithm, model) otherwise praised for its high precision machinery. Hundreds of papers followed up on the two suspected culprits: the loss (for being convex) and/or the algorithm (for fitting a classical boosting blueprint). Here, we call to the half-century+ founding theory of losses for class probability estimation (properness), an extension of Long and Servedio's results and a new general boosting algorithm to demonstrate that the real culprit in their specific context was in fact the (linear) model class. We advocate for a more general standpoint on the problem as we argue that the source of the negative result lies in the dark side of a pervasive -- and otherwise prized -- aspect of ML: parameterisation.