Paper Title

Structured Prompting: Scaling In-Context Learning to 1,000 Examples

Authors

Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, Furu Wei

Abstract

Large language models have exhibited intriguing in-context learning capability, achieving promising zero- and few-shot performance without updating the parameters. However, conventional in-context learning is usually restricted by length constraints, rendering it ineffective to absorb supervision from a large number of examples. In order to go beyond few shots, we introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples. Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism. So we can scale the number of exemplars with linear complexity instead of quadratic complexity with respect to length. Experimental results on a diverse set of tasks show that our approach improves end-task performance and reduces evaluation variance over conventional in-context learning as the number of demonstration examples increases. Code has been released at https://aka.ms/structured-prompting.
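
The abstract compresses the mechanism into two sentences: demonstration groups are encoded independently (so encoding cost grows linearly with the number of groups), and the test example then attends to all of them through a rescaled attention. The sketch below is a minimal illustration of how those pieces could fit together; it is not the authors' released code, and the function name, tensor shapes, and in particular the exact rescaling rule (up-weighting the test example's own scores by the number of demonstration groups M before a joint softmax) are assumptions made for illustration.

```python
# Hypothetical sketch of structured prompting's joint attention step.
# Assumption: each demonstration group has already been encoded on its own
# (reusing the same position ids), yielding per-group key/value caches.
import torch

def rescaled_attention(q, demo_keys, demo_values, test_keys, test_values):
    """q: (Lq, d) queries of the test example.
    demo_keys / demo_values: lists of M tensors, one (Li, d) pair per
    independently encoded demonstration group.
    test_keys / test_values: (Lt, d) keys/values of the test example itself."""
    d = q.size(-1)
    m = len(demo_keys)
    k_demo = torch.cat(demo_keys, dim=0)        # (L_demo, d)
    v_demo = torch.cat(demo_values, dim=0)

    s_demo = q @ k_demo.t() / d ** 0.5          # scores against all groups
    s_test = q @ test_keys.t() / d ** 0.5       # scores against the test input

    # Joint softmax with the test-side scores up-weighted by M, so the M
    # demonstration groups do not swamp the attention mass (assumed form of
    # the rescaling described in the abstract).
    e_demo = s_demo.exp()
    e_test = m * s_test.exp()
    z = e_demo.sum(-1, keepdim=True) + e_test.sum(-1, keepdim=True)
    return (e_demo / z) @ v_demo + (e_test / z) @ test_values
```

Because every group is encoded separately, the total encoding cost scales with the number of groups rather than with the square of the concatenated prompt length, which is the linear-versus-quadratic claim made in the abstract.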
