Paper Title

Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions

Paper Authors

Arvind V. Mahankali, David P. Woodruff, Ziyu Zhang

Paper Abstract

We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions, as well as approximations with tree tensor networks and more general tensor networks. For tensor train decomposition, we give a bicriteria $(1 + \eps)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \nnz(A))$ running time, up to lower order terms, which improves over the additive error algorithm of \cite{huber2017randomized}. We also show how to convert the algorithm of \cite{huber2017randomized} into a relative error algorithm, but their algorithm necessarily has a running time of $O(qr^2 \cdot \nnz(A)) + n \cdot \poly(qk/\eps)$ when converted to a $(1 + \eps)$-approximation algorithm with bicriteria rank $r$. To the best of our knowledge, our work is the first to achieve polynomial time relative error approximation for tensor train decomposition. Our key technique is a method for obtaining subspace embeddings with a number of rows polynomial in $q$ for a matrix which is the flattening of a tensor train of $q$ tensors. We extend our algorithm to tree tensor networks. In addition, we extend our algorithm to tensor networks with arbitrary graphs (which we refer to as general tensor networks), by using a result of \cite{ms08_simulating_quantum_tensor_contraction} and showing that a general tensor network of rank $k$ can be contracted to a binary tree network of rank $k^{O(\deg(G) \tw(G))}$, allowing us to reduce to the case of tree tensor networks. Finally, we give new fixed-parameter tractable algorithms for the tensor train, Tucker, and CP decompositions, which are simpler than those of \cite{swz19_tensor_low_rank} since they do not make use of polynomial system solvers. Our technique of Gaussian subspace embeddings with exactly $k$ rows (and thus exponentially small success probability) may be of independent interest.
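As a rough illustration of the object the abstract studies, the sketch below computes a tensor train decomposition via the classical TT-SVD procedure (sequential truncated SVDs over flattenings) and verifies the reconstruction. This is only the standard baseline method, not the near-linear time bicriteria algorithm of the paper; the function names `tt_svd` and `tt_reconstruct` and the rank cap `max_rank` are illustrative choices, assuming numpy is available.

```python
import numpy as np

def tt_svd(A, max_rank):
    """Classical TT-SVD: decompose a q-way tensor A into a tensor train
    with TT-ranks capped at max_rank, via sequential truncated SVDs.
    Illustrative baseline only -- not the paper's near-linear time algorithm."""
    dims = A.shape
    q = len(dims)
    cores = []
    r_prev = 1
    # Flatten: first mode (times the incoming rank) vs. all remaining modes.
    M = A.reshape(r_prev * dims[0], -1)
    for i in range(q - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        # The left singular vectors become the i-th TT-core.
        cores.append(U[:, :r].reshape(r_prev, dims[i], r))
        # Push the remainder to the right and re-flatten for the next mode.
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[i + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract a list of TT-cores back into the full tensor."""
    out = cores[0]  # shape (1, n_1, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))

# Example: a random 4-way tensor with exact TT-rank 3 is recovered.
rng = np.random.default_rng(0)
cores_true = [rng.standard_normal((1, 8, 3)),
              rng.standard_normal((3, 8, 3)),
              rng.standard_normal((3, 8, 3)),
              rng.standard_normal((3, 8, 1))]
A = tt_reconstruct(cores_true)
cores = tt_svd(A, max_rank=3)
err = np.linalg.norm(A - tt_reconstruct(cores)) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.2e}")
```

Note that TT-SVD costs a full SVD per mode, which is far from the $O(q \cdot \nnz(A))$ running time claimed in the abstract; the paper's contribution is achieving relative error guarantees at near-linear cost via subspace embeddings.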
