Paper Title

A Relational Gradient Descent Algorithm For Support Vector Machine Training

Authors

Mahmoud Abo-Khamis, Sungjin Im, Benjamin Moseley, Kirk Pruhs, Alireza Samadian

Abstract

We consider gradient-descent-like algorithms for Support Vector Machine (SVM) training when the data is in relational form. The gradient of the SVM objective cannot be efficiently computed by known techniques, as it suffers from the ``subtraction problem''. We first show that the subtraction problem cannot be surmounted, by showing that computing any constant approximation of the gradient of the SVM objective function is $\#P$-hard, even for acyclic joins. We, however, circumvent the subtraction problem by restricting our attention to stable instances, which intuitively are instances where a nearly optimal solution remains nearly optimal if the points are perturbed slightly. We give an efficient algorithm that computes a ``pseudo-gradient'' that guarantees convergence for stable instances at a rate comparable to that achieved by using the actual gradient. We believe our results suggest that this sort of stability analysis would likely yield useful insight in the context of designing algorithms on relational data for other learning problems in which the subtraction problem arises.
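
To fix notation, the sketch below computes the standard subgradient of the soft-margin SVM objective $F(w) = \frac{\lambda}{2}\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n}\max(0,\, 1 - y_i \langle w, x_i \rangle)$ on an explicitly materialized design matrix. This is a minimal NumPy illustration, not the paper's algorithm: the regularization weight and step size are arbitrary choices for the demo, and in the relational setting the design matrix is the result of a join that may be far too large to materialize. The signed terms $y_i x_i$ in the hinge part can cancel against each other, which is the ``subtraction problem'' that the paper's pseudo-gradient is designed to work around.

```python
import numpy as np

def svm_subgradient(X, y, w, lam=1e-3):
    """Subgradient of the soft-margin SVM objective
        F(w) = (lam/2)*||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>)
    on a materialized design matrix X (n x d) with labels y in {-1, +1}.
    On relational data, X is the join result and may be too large to
    materialize -- the paper's setting -- so this direct computation is the
    baseline that the pseudo-gradient avoids.
    """
    margins = y * (X @ w)            # y_i * <w, x_i> for every row
    active = margins < 1.0           # rows with nonzero hinge loss
    # Hinge part: -(1/n) * sum over active rows of y_i * x_i.
    # The signed terms y_i * x_i can cancel, which is the source of the
    # "subtraction problem" when this sum must be aggregated over a join.
    hinge_grad = -(y[active][:, None] * X[active]).sum(axis=0) / len(y)
    return lam * w + hinge_grad

# Plain subgradient descent on synthetic data, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X @ np.ones(5) + 0.1 * rng.normal(size=200))
w = np.zeros(5)
for t in range(1, 501):
    w -= (1.0 / t) * svm_subgradient(X, y, w)
```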
