Title
Approximation speed of quantized vs. unquantized ReLU neural networks and beyond
Authors
Abstract
We deal with two complementary questions about the approximation properties of ReLU networks. First, we study how the uniform quantization of ReLU networks with real-valued weights impacts their approximation properties. We establish an upper bound on the minimal number of bits per coordinate needed for uniformly quantized ReLU networks to keep the same polynomial asymptotic approximation speeds as unquantized ones. We also characterize the error of nearest-neighbour uniform quantization of ReLU networks. This is achieved using a new lower bound on the Lipschitz constant of the map that associates the parameters of ReLU networks with their realizations, together with an upper bound that generalizes classical results. Second, we investigate when ReLU networks can, or cannot, be expected to have better approximation properties than other classical approximation families. Indeed, several approximation families share the following common limitation: the polynomial asymptotic approximation speed of any set is bounded above by the encoding speed of that set. We introduce a new abstract property of approximation families, called infinite-encodability, which implies this upper bound. Many classical approximation families, defined via dictionaries or ReLU networks, are shown to be infinitely encodable. This unifies and generalizes several situations in which this upper bound is known.
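As a concrete illustration (a minimal sketch, not taken from the paper), the numpy snippet below applies nearest-neighbour uniform quantization, with a chosen number of bits per coordinate, to the parameters of a toy two-layer ReLU network, then compares the realizations of the original and quantized networks on sample inputs; the observed gap is governed by the Lipschitz constant of the parameters-to-realization map mentioned in the abstract. The helper names quantize_uniform and realize, the weight range [-K, K], and the toy architecture are our own assumptions for the example.

```python
import numpy as np

def quantize_uniform(w, bits, K=1.0):
    """Nearest-neighbour uniform quantization of an array of weights.

    Rounds each coordinate of w (assumed to lie in [-K, K]) to the
    nearest multiple of step = 2*K / (2**bits - 1), giving roughly
    2**bits grid levels on [-K, K]; the per-coordinate error is at
    most step / 2.
    """
    step = 2 * K / (2 ** bits - 1)
    # Round to the nearest grid point, then clip back into [-K, K]
    # to handle boundary effects.
    return np.clip(np.round(w / step) * step, -K, K)

def realize(params, x):
    """Realization of a two-layer ReLU network: W2 @ relu(W1 @ x + b1) + b2."""
    W1, b1, W2, b2 = params
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Toy network with random real-valued parameters in [-1, 1].
rng = np.random.default_rng(0)
params = (
    rng.uniform(-1, 1, (16, 4)),  # W1
    rng.uniform(-1, 1, 16),       # b1
    rng.uniform(-1, 1, (1, 16)),  # W2
    rng.uniform(-1, 1, 1),        # b2
)
qparams = tuple(quantize_uniform(p, bits=6) for p in params)

# Empirical gap between the two realizations on sample inputs.
xs = rng.uniform(-1, 1, (100, 4))
gap = max(np.max(np.abs(realize(params, x) - realize(qparams, x))) for x in xs)
print(f"max |f(x) - f_q(x)| over samples: {gap:.4f}")
```

Increasing bits shrinks the grid step, and hence the realization gap, at the cost of more bits per coordinate; the paper's results bound how few bits per coordinate suffice to preserve the polynomial asymptotic approximation speed.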