Paper Title
On some orthogonalization schemes in Tensor Train format
Paper Authors
Paper Abstract
In the framework of tensor spaces, we consider orthogonalization kernels that generate an orthogonal basis of a tensor subspace from a set of linearly independent tensors. In particular, we experimentally study the loss of orthogonality of six orthogonalization methods, namely Classical and Modified Gram-Schmidt with (CGS2, MGS2) and without (CGS, MGS) re-orthogonalization, the Gram approach, and the Householder transformation. To overcome the curse of dimensionality, we represent tensors by low-rank approximations in the Tensor Train (TT) formalism. In addition, we introduce recompression steps into the standard algorithm outline through the TT-rounding method at a prescribed accuracy. After describing the structure and properties of the algorithms, we illustrate their loss of orthogonality with numerical experiments. The theoretical bounds established over several decades of round-off analysis for classical matrix computations appear to carry over, with the unit round-off replaced by the TT-rounding accuracy. The study is completed by a computational analysis of each orthogonalization kernel, covering its memory requirements and its computational complexity measured by the number of TT-rounding operations, since TT-rounding happens to be the most computationally expensive operation.
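To make the recompressed orthogonalization loop described above concrete, the following is a minimal sketch (not the paper's implementation) of Modified Gram-Schmidt over TT tensors, with a TT-rounding step after each orthogonalization update. It assumes a toy NumPy-based TT toolbox in which a tensor is a list of order-3 cores; all function names here (tt_dot, tt_scale, tt_add, tt_round, mgs_tt) are hypothetical.

import numpy as np

def tt_dot(A, B):
    # Inner product <A, B> of two tensors given as lists of TT-cores.
    W = np.ones((1, 1))
    for a, b in zip(A, B):
        T = np.einsum('ab,anc->bnc', W, a)   # contract left interface with A-core
        W = np.einsum('bnc,bnd->cd', T, b)   # contract mode index with B-core
    return W[0, 0]

def tt_scale(A, alpha):
    # Scale a TT tensor by absorbing alpha into the first core.
    return [alpha * A[0]] + [c.copy() for c in A[1:]]

def tt_add(A, B):
    # Formal TT sum: ranks add, interior cores are stacked block-diagonally.
    d = len(A)
    if d == 1:
        return [A[0] + B[0]]
    cores = [np.concatenate([A[0], B[0]], axis=2)]
    for k in range(1, d - 1):
        ra0, n, ra1 = A[k].shape
        rb0, _, rb1 = B[k].shape
        c = np.zeros((ra0 + rb0, n, ra1 + rb1))
        c[:ra0, :, :ra1] = A[k]
        c[ra0:, :, ra1:] = B[k]
        cores.append(c)
    cores.append(np.concatenate([A[-1], B[-1]], axis=0))
    return cores

def tt_round(A, eps):
    # TT-rounding: right-to-left QR orthogonalization, then truncating SVDs.
    d = len(A)
    cores = [c.copy() for c in A]
    for k in range(d - 1, 0, -1):
        r0, n, r1 = cores[k].shape
        Q, R = np.linalg.qr(cores[k].reshape(r0, n * r1).T)
        cores[k] = Q.T.reshape(-1, n, r1)
        p0, pn, _ = cores[k - 1].shape
        cores[k - 1] = (cores[k - 1].reshape(p0 * pn, r0) @ R.T).reshape(p0, pn, -1)
    delta = eps * np.linalg.norm(cores[0]) / max(np.sqrt(d - 1), 1.0)
    for k in range(d - 1):
        r0, n, r1 = cores[k].shape
        U, s, Vt = np.linalg.svd(cores[k].reshape(r0 * n, r1), full_matrices=False)
        rnew, err2 = len(s), 0.0
        while rnew > 1 and err2 + s[rnew - 1] ** 2 <= delta ** 2:
            err2 += s[rnew - 1] ** 2
            rnew -= 1
        cores[k] = U[:, :rnew].reshape(r0, n, rnew)
        q0, qn, q1 = cores[k + 1].shape
        SV = np.diag(s[:rnew]) @ Vt[:rnew]
        cores[k + 1] = (SV @ cores[k + 1].reshape(q0, qn * q1)).reshape(rnew, qn, q1)
    return cores

def mgs_tt(tensors, eps):
    # Modified Gram-Schmidt on TT tensors, recompressing after each update.
    Q = []
    for v in tensors:
        w = [c.copy() for c in v]
        for q in Q:
            w = tt_add(w, tt_scale(q, -tt_dot(q, w)))  # orthogonalize against q
            w = tt_round(w, eps)                       # recompression step
        Q.append(tt_scale(w, 1.0 / np.sqrt(tt_dot(w, w))))
    return Q

# Usage: three random order-3 TT tensors of mode size 4 and TT-rank 2.
rng = np.random.default_rng(0)
def rand_tt():
    return [rng.standard_normal(s) for s in [(1, 4, 2), (2, 4, 2), (2, 4, 1)]]
Q = mgs_tt([rand_tt() for _ in range(3)], eps=1e-10)
G = np.array([[tt_dot(a, b) for b in Q] for a in Q])
print(np.max(np.abs(G - np.eye(3))))  # loss of orthogonality, on the order of eps

For p input tensors, this loop performs p(p-1)/2 calls to tt_round, which is the kind of count the abstract uses as its complexity measure, and the printed deviation of the Gram matrix from the identity is the loss-of-orthogonality quantity studied in the paper's experiments.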