Paper Title
Overfitting in quantum machine learning and entangling dropout
Paper Authors
Paper Abstract
The ultimate goal in machine learning is to construct a model function that, based on a given training dataset, generalizes to unseen datasets. If the model function has too much expressive power, it may overfit the training data and consequently lose its generalization capability. To avoid this overfitting issue, several techniques have been developed in the classical machine learning regime, and dropout is one such effective method. This paper proposes a straightforward analogue of this technique in the quantum machine learning regime, entangling dropout: some entangling gates in a given parametrized quantum circuit are randomly removed during the training process to reduce the expressibility of the circuit. Simple case studies are given to show that this technique indeed suppresses overfitting.
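The core idea, randomly dropping entangling gates from a parametrized circuit at each training step, can be illustrated with a toy sketch. This is a hypothetical, simplified representation (gates as tuples, not an actual quantum SDK), and the ansatz layout and function names are assumptions for illustration, not the paper's implementation:

```python
import random

def build_circuit(n_qubits, n_layers):
    """Hardware-efficient-style ansatz sketch: each layer has a column of
    trainable single-qubit RY rotations followed by a chain of CX
    entangling gates. Gates are modeled as (name, qubits) tuples."""
    circuit = []
    for _ in range(n_layers):
        for q in range(n_qubits):
            circuit.append(("RY", (q,)))          # parametrized rotation
        for q in range(n_qubits - 1):
            circuit.append(("CX", (q, q + 1)))    # entangling gate
    return circuit

def entangling_dropout(circuit, drop_prob, rng=random):
    """Entangling dropout: independently remove each entangling (CX) gate
    with probability drop_prob, producing the reduced circuit used for one
    training step. Single-qubit gates are always kept."""
    return [g for g in circuit
            if g[0] != "CX" or rng.random() >= drop_prob]
```

A fresh dropout mask would be sampled at every training iteration, while evaluation on test data uses the full circuit, mirroring how classical dropout is applied only during training.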