Title
Neurosymbolic AI for Situated Language Understanding
Authors
Abstract
In recent years, data-intensive AI, particularly the domain of natural language processing and understanding, has seen significant progress driven by the advent of large datasets and deep neural networks that have sidelined more classic AI approaches to the field. These systems can apparently demonstrate sophisticated linguistic understanding or generation capabilities, but often fail to transfer their skills to situations they have not encountered before. We argue that computational situated grounding provides a solution to some of these learning challenges by creating situational representations that both serve as a formal model of the salient phenomena, and contain rich amounts of exploitable, task-appropriate data for training new, flexible computational models. Our model reincorporates some ideas of classic AI into a framework of neurosymbolic intelligence, using multimodal contextual modeling of interactive situations, events, and object properties. We discuss how situated grounding provides diverse data and multiple levels of modeling for a variety of AI learning challenges, including learning how to interact with object affordances, learning semantics for novel structures and configurations, and transferring such learned knowledge to new objects and situations.