Paper Title
Respecting Time Series Properties Makes Deep Time Series Forecasting Perfect
Paper Authors
Paper Abstract
How to handle time features should be the core question of any time series forecasting model. Ironically, it is often ignored or misunderstood by deep-learning-based models, even state-of-the-art baselines. This behavior makes them inefficient, untenable and unstable. In this paper, we rigorously analyze three prevalent but deficient/unfounded deep time series forecasting mechanisms or methods from the perspective of time series properties, including normalization methods, multivariate forecasting and input sequence length. Corresponding corollaries and solutions are given on both empirical and theoretical grounds. Based on this analysis, we propose a novel time series forecasting network, RTNet. It is general enough to be combined with both supervised and self-supervised forecasting formats. Thanks to the core idea of respecting time series properties, RTNet shows clearly superior forecasting performance in either format compared with dozens of other SOTA time series forecasting baselines on three real-world benchmark datasets. By and large, it also requires less time and memory while achieving better forecasting accuracy. The source code is available at https://github.com/OrigamiSL/RTNet.
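The abstract names normalization of the input series as one of the mechanisms it analyzes. As a generic, self-contained illustration of that idea only (not a description of RTNet's actual procedure; the function names, window shapes and the per-window z-score scheme below are assumptions made for this sketch), the following Python snippet normalizes each variate of an input window and later restores a forecast to the original scale:

```python
import numpy as np

def normalize_window(x, eps=1e-5):
    """Z-score normalize each variate of an input window independently.

    x: array of shape (seq_len, n_vars). Returns the normalized window
    together with the per-variate statistics needed to restore the scale.
    """
    mean = x.mean(axis=0, keepdims=True)
    std = x.std(axis=0, keepdims=True) + eps
    return (x - mean) / std, mean, std

def denormalize_forecast(y_hat, mean, std):
    """Map model outputs back to the original scale of the series."""
    return y_hat * std + mean

# Toy usage: a 96-step bivariate input window and a dummy "forecast".
window = np.random.randn(96, 2).cumsum(axis=0)   # non-stationary toy series
x_norm, mu, sigma = normalize_window(window)
y_hat_norm = x_norm[-24:]                        # placeholder for a model's output
y_hat = denormalize_forecast(y_hat_norm, mu, sigma)
```

Per-window statistics let a model see inputs at a comparable scale even when the underlying series is non-stationary; the stored mean and standard deviation are then reused to map predictions back to the original range.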