Paper Title
Efficient Structure-preserving Support Tensor Train Machine
Paper Authors
Abstract
An increasing amount of collected data are high-dimensional multi-way arrays (tensors), and it is crucial for efficient learning algorithms to exploit this tensorial structure as much as possible. The ever-present curse of dimensionality for high-dimensional data and the loss of structure when vectorizing the data motivate the use of tailored low-rank tensor classification methods. In the presence of small amounts of training data, kernel methods offer an attractive choice as they provide the possibility of a nonlinear decision boundary. We develop the Tensor Train Multi-way Multi-level Kernel (TT-MMK), which combines the simplicity of the Canonical Polyadic decomposition, the classification power of the Dual Structure-preserving Support Vector Machine, and the reliability of the Tensor Train (TT) approximation. We show by experiments that the TT-MMK method is usually more reliable computationally, less sensitive to tuning parameters, and gives higher prediction accuracy in SVM classification when benchmarked against other state-of-the-art techniques.
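The Tensor Train approximation referenced in the abstract represents a d-way array as a chain of small 3-way cores, which avoids vectorizing the data and keeps storage linear in the number of modes. As a rough illustration of the idea (this is the standard TT-SVD construction via sequential truncated SVDs, not the authors' TT-MMK implementation; the function names and the `max_rank` truncation parameter are chosen here for the sketch):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into Tensor Train (TT) cores using
    sequential truncated SVDs. Core k has shape (r_{k-1}, n_k, r_k)."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    # Start with the tensor flattened so the first mode is separated out.
    mat = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))           # truncate to the TT rank bound
        U, s, Vt = U[:, :r_new], s[:r_new], Vt[:r_new, :]
        cores.append(U.reshape(rank, dims[k], r_new))
        rank = r_new
        # Carry the remaining factor forward and expose the next mode.
        mat = (np.diag(s) @ Vt).reshape(rank * dims[k + 1], -1)
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full array."""
    full = cores[0]                              # shape (1, n_1, r_1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return np.squeeze(full, axis=(0, -1))        # drop the boundary ranks
```

With `max_rank` large enough, the decomposition is exact and `tt_reconstruct(tt_svd(A, max_rank))` recovers `A`; smaller `max_rank` yields the low-rank compression that tensor classification methods like TT-MMK exploit.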