Paper Title
A Time Series Approach to Explainability for Neural Nets with Applications to Risk-Management and Fraud Detection
Paper Authors
Paper Abstract
Artificial intelligence is driving one of the biggest revolutions across technology-driven application fields. For the finance sector, it offers many opportunities for significant market innovation, yet broad adoption of AI systems relies heavily on our trust in their outputs. Trust in technology is enabled by understanding the rationale behind the predictions made. To this end, the concept of eXplainable AI (XAI) emerged, introducing a suite of techniques that attempt to explain to users how complex models arrive at a certain decision. For cross-sectional data, classical XAI approaches can yield valuable insights into a model's inner workings, but these techniques generally cannot cope well with longitudinal data (time series) in the presence of dependence structure and non-stationarity. We here propose a novel XAI technique for deep learning methods which preserves and exploits the natural time ordering of the data.