Paper Title
GL-CLeF: A Global-Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding
Paper Authors
Paper Abstract
Due to the high data demands of current methods, attention to zero-shot cross-lingual spoken language understanding (SLU) has grown, as such approaches greatly reduce human annotation effort. However, existing models rely solely on shared parameters, which can only perform implicit alignment across languages. We present the Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming. Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, and then encourage their representations to be more similar than negative example pairs, which explicitly aligns representations of similar sentences across languages. In addition, a key step in GL-CLeF is the proposed Local and Global components, which achieve fine-grained cross-lingual transfer (i.e., sentence-level Local intent transfer, token-level Local slot transfer, and semantic-level Global transfer across intent and slot). Experiments on MultiATIS++ show that GL-CLeF achieves the best performance and successfully pulls representations of similar sentences across languages closer.
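To make the contrastive idea in the abstract concrete, below is a minimal sketch of a sentence-level objective under the stated setup: a bilingual dictionary builds a multilingual (code-switched) view of each utterance, and an InfoNCE-style loss pulls an utterance's representation toward its own view while pushing it away from other utterances in the batch. The helper names (code_switch, sentence_contrastive_loss), the substitution probability, and the exact loss formulation are illustrative assumptions, not the authors' released implementation, which additionally includes the token-level Local slot and semantic-level Global terms.

# Minimal sketch (assumed PyTorch setup); not the official GL-CLeF code.
import random
import torch
import torch.nn.functional as F

def code_switch(tokens, bilingual_dict, p=0.5):
    """Build a multilingual view of an utterance by replacing words with
    bilingual-dictionary translations (random choice when several exist)."""
    switched = []
    for tok in tokens:
        translations = bilingual_dict.get(tok)
        if translations and random.random() < p:
            switched.append(random.choice(translations))
        else:
            switched.append(tok)
    return switched

def sentence_contrastive_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style loss: each original sentence embedding (anchor) should be
    closer to its code-switched view (positive) than to the other sentences
    in the batch, which serve as negatives.

    anchor, positive: [batch, hidden] sentence representations from a shared encoder.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature                  # [batch, batch] similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)   # diagonal entries are positives
    return F.cross_entropy(logits, labels)

In this sketch, both the original and code-switched utterances would be fed through the same encoder, so minimizing the loss explicitly aligns cross-lingual representations rather than relying only on shared parameters, which is the shortcoming the abstract highlights.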