Title
Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data
Authors
Abstract
In many applications, one works with neural network models trained by someone else. For such pretrained models, one may not have access to training data or test data. Moreover, one may not know details about the model, e.g., the specifics of the training data, the loss function, the hyperparameter values, etc. Given one or many pretrained models, it is a challenge to say anything about the expected performance or quality of the models. Here, we address this challenge by providing a detailed meta-analysis of hundreds of publicly available pretrained models. We examine norm-based capacity control metrics as well as power-law-based metrics from the recently developed Theory of Heavy-Tailed Self-Regularization. We find that norm-based metrics correlate well with reported test accuracies for well-trained models, but that they often cannot distinguish well-trained versus poorly trained models. We also find that power-law-based metrics can do much better: quantitatively better at discriminating among series of well-trained models with a given architecture, and qualitatively better at discriminating well-trained versus poorly trained models. These methods can be used to identify when a pretrained neural network has problems that cannot be detected simply by examining training/test accuracies.
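As a rough illustration of the two families of metrics the abstract contrasts, the following sketch computes a norm-based metric (the log spectral norm) and a power-law exponent fitted to the empirical spectral density (ESD) of a single weight matrix. This is a minimal, hypothetical example: the weight matrix is random rather than taken from a real pretrained model, and the tail cutoff `x_min` is an arbitrary assumption, not the paper's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one layer's weight matrix W (hypothetical shape; in practice
# W would be extracted from each layer of a pretrained network).
W = rng.standard_normal((300, 100))

# Empirical spectral density (ESD): eigenvalues of the correlation
# matrix X = W^T W / N, where N is the larger dimension of W.
N = W.shape[0]
eigs = np.linalg.eigvalsh(W.T @ W / N)

# Norm-based metric: log of the spectral norm (largest eigenvalue of X).
log_spectral_norm = np.log10(eigs.max())

# Power-law metric: fit the tail of the ESD with a continuous power law
# via a maximum-likelihood (Hill-type) estimator, given a cutoff x_min.
x_min = np.quantile(eigs, 0.5)  # assumption: fit the upper half of the ESD
tail = eigs[eigs >= x_min]
alpha = 1.0 + len(tail) / np.sum(np.log(tail / x_min))

print(f"log spectral norm: {log_spectral_norm:.3f}")
print(f"power-law alpha:   {alpha:.3f}")
```

Both quantities are computed from the weights alone, which is the point of the abstract: no training or test data is needed. For a real model one would repeat this per layer and aggregate (e.g., average alpha across layers).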