Paper Title
SeqLink: A Robust Neural-ODE Architecture for Modelling Partially Observed Time Series
Paper Authors
Abstract
Ordinary Differential Equation (ODE) based models have become popular as foundation models for solving many time series problems. Combining neural ODEs with traditional RNN models has yielded some of the best representations for irregular time series. However, ODE-based models typically require the trajectory of hidden states to be defined based on either the initial observed value or the most recent observation, raising questions about their effectiveness when dealing with longer sequences and extended time intervals. In this article, we explore the behaviour of ODE models in the context of time series data with varying degrees of sparsity. We introduce SeqLink, an innovative neural architecture designed to enhance the robustness of sequence representation. Unlike traditional approaches that rely solely on the hidden state generated from the last observed value, SeqLink leverages ODE latent representations derived from multiple data samples, enabling it to generate robust data representations regardless of sequence length or data sparsity level. The core concept behind our model is the definition of hidden states for the unobserved values based on the relationships between samples (links between sequences). Through extensive experiments on partially observed synthetic and real-world datasets, we demonstrate that SeqLink improves the modelling of intermittent time series, consistently outperforming state-of-the-art approaches.
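To illustrate the ODE-RNN pattern the abstract builds on, here is a minimal, hypothetical sketch in pure Python: between observation times the hidden state follows a learned continuous dynamics function (integrated here with simple explicit Euler steps), and at each observation a discrete RNN-style update folds in the new value. The function names `ode_rnn_hidden_states`, `dynamics`, and `update`, and the scalar hidden state, are illustrative assumptions, not the paper's actual implementation (which operates on learned latent vectors and, in SeqLink, additionally conditions on representations from related sequences).

```python
def ode_rnn_hidden_states(observations, dynamics, update, h0=0.0, dt=0.1):
    """Toy ODE-RNN sketch (illustrative, not the paper's model).

    Between observations the hidden state h evolves continuously as
    dh/dt = dynamics(h), integrated with explicit Euler steps of size dt;
    at each observation time an RNN-style update folds in the value.

    observations: list of (time, value) pairs sorted by time.
    Returns the hidden state recorded after each observation.
    """
    h, t = h0, 0.0
    states = []
    for t_obs, x in observations:
        # Continuous evolution from t up to the observation time t_obs.
        while t + dt <= t_obs:
            h = h + dt * dynamics(h)
            t += dt
        h = h + (t_obs - t) * dynamics(h)  # partial final step
        t = t_obs
        # Discrete update at the observation itself.
        h = update(h, x)
        states.append(h)
    return states

# Toy instantiation: the hidden state decays between observations
# (dh/dt = -h) and blends in each newly observed value on arrival.
states = ode_rnn_hidden_states(
    [(1.0, 1.0), (3.0, 0.0)],
    dynamics=lambda h: -h,
    update=lambda h, x: 0.5 * h + 0.5 * x,
)
```

The sketch makes the abstract's concern concrete: the state at any unobserved time depends only on the trajectory extrapolated from the last observation, so long gaps degrade the representation, which is the failure mode SeqLink addresses by drawing on latent representations from related sequences.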