Paper Title
Semantic Abstraction: Open-World 3D Scene Understanding from 2D Vision-Language Models
Paper Authors
Paper Abstract
We study open-world 3D scene understanding, a family of tasks that require agents to reason about their 3D environment with an open-set vocabulary and out-of-domain visual inputs - a critical skill for robots operating in the unstructured 3D world. Towards this end, we propose Semantic Abstraction (SemAbs), a framework that equips 2D Vision-Language Models (VLMs) with new 3D spatial capabilities while maintaining their zero-shot robustness. We achieve this abstraction using relevancy maps extracted from CLIP, and learn 3D spatial and geometric reasoning skills on top of those abstractions in a semantic-agnostic manner. We demonstrate the usefulness of SemAbs on two open-world 3D scene understanding tasks: 1) completing partially observed objects and 2) localizing hidden objects from language descriptions. Experiments show that SemAbs can generalize to novel vocabulary, materials/lighting, classes, and domains (i.e., real-world scans) despite being trained only on limited 3D synthetic data. Code and data are available at https://semantic-abstraction.cs.columbia.edu/.
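
To make the "relevancy map" ingredient concrete, the sketch below shows one simple way to obtain a per-query relevancy heatmap from CLIP via input-gradient saliency. This is an illustrative stand-in, not the paper's exact extractor (the abstract only states that relevancy maps are extracted from CLIP); it assumes PyTorch and the openai/CLIP package, and relevancy_map is a hypothetical helper name introduced here.

# Minimal sketch: a per-query relevancy heatmap from CLIP via input-gradient
# saliency. Illustrative stand-in only, NOT the paper's actual relevancy
# extractor. Assumes: pip install torch pillow git+https://github.com/openai/CLIP
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def relevancy_map(image: Image.Image, query: str) -> torch.Tensor:
    """Return an HxW map of how relevant each pixel is to `query`."""
    # Make the preprocessed image a leaf tensor so gradients accumulate on it.
    x = preprocess(image).unsqueeze(0).to(device).requires_grad_(True)
    text = clip.tokenize([query]).to(device)

    image_feat = model.encode_image(x)
    text_feat = model.encode_text(text)
    # Cosine similarity between the image embedding and the text query.
    sim = torch.cosine_similarity(image_feat, text_feat).sum()
    sim.backward()

    # Per-pixel gradient magnitude, averaged over RGB, as a crude relevancy map.
    saliency = x.grad.abs().mean(dim=1).squeeze(0)  # shape: (224, 224)
    return saliency / (saliency.max() + 1e-8)

# Usage: heat = relevancy_map(Image.open("scene.png"), "a red mug on the table")

The key property this illustrates is that once extracted, the heatmap carries no class labels - it is just a spatial saliency signal - which is what allows a downstream 3D reasoning module to be trained in a semantic-agnostic manner.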