Title
Interpretable Quantum Advantage in Neural Sequence Learning
Authors
Abstract
Quantum neural networks have been widely studied in recent years, given their potential practical utility and recent results regarding their ability to efficiently express certain classical data. However, analytic results to date rely on assumptions and arguments from complexity theory. As a result, there is little intuition as to the source of the expressive power of quantum neural networks, or for which classes of classical data any advantage can reasonably be expected to hold. Here, we study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements. We explicitly show that quantum contextuality is the source of an unconditional memory separation in the expressivity of the two model classes. Additionally, as we are able to pinpoint quantum contextuality as the source of this separation, we use this intuition to study the relative performance of our introduced model on a standard translation dataset exhibiting linguistic contextuality. In doing so, we demonstrate that our introduced quantum models are able to outperform state-of-the-art classical models even in practice.