Paper Title
No quantum speedup over gradient descent for non-smooth convex optimization
Paper Authors
Paper Abstract
We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient. Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from a point at distance at most $R$ from the true minimum. If $f$ is $G$-Lipschitz, then the classic gradient descent algorithm solves this problem with $O((GR/\epsilon)^{2})$ queries. Importantly, the number of queries is independent of the dimension $n$, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve a better complexity that is still independent of the dimension $n$. In this paper we reprove the randomized lower bound of $\Omega((GR/\epsilon)^{2})$ using a simpler argument than previous lower bounds. We then show that although the function family used in this lower bound is hard for randomized algorithms, it can be solved with $O(GR/\epsilon)$ quantum queries. Using a different family of instances, we then prove an improved lower bound against quantum algorithms and establish our main result: in general, even quantum algorithms need $\Omega((GR/\epsilon)^{2})$ queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
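For concreteness, here is a minimal sketch (not taken from the paper) of the classic subgradient method the abstract refers to. The standard analysis gives $f(\bar{x}_T) - f(x^*) \le GR/\sqrt{T}$ with step size $R/(G\sqrt{T})$ and iterate averaging, so $T = \lceil (GR/\epsilon)^2 \rceil$ first-order queries suffice; the oracle name `subgrad` and the test function below are illustrative assumptions.

```python
import numpy as np

def subgradient_descent(subgrad, x0, G, R, eps):
    """Sketch: minimize a G-Lipschitz convex f to additive error eps,
    given a subgradient oracle and a start x0 within distance R of a
    minimizer. Uses T = ceil((G*R/eps)^2) steps of size R/(G*sqrt(T))
    and returns the averaged iterate."""
    T = int(np.ceil((G * R / eps) ** 2))
    eta = R / (G * np.sqrt(T))
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    for _ in range(T):
        avg += x / T            # running average of the iterates
        g = subgrad(x)          # one first-order (black-box) query
        x = x - eta * g         # subgradient step
    return avg

if __name__ == "__main__":
    # Illustrative instance: f(x) = ||x||_2 is 1-Lipschitz, minimized at 0.
    subgrad = lambda x: (x / np.linalg.norm(x)
                         if np.linalg.norm(x) > 0 else np.zeros_like(x))
    xbar = subgradient_descent(subgrad, x0=np.ones(5),
                               G=1.0, R=np.sqrt(5), eps=0.1)
    print(np.linalg.norm(xbar))  # f(xbar); should be at most eps
```

Note that the query count $T$ depends only on $G$, $R$, and $\epsilon$, not on the dimension $n$; the paper's main result is that this dimension-independent rate cannot be beaten even with quantum queries.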