Paper Title
Conditional Prompt Learning for Vision-Language Models
Paper Authors
Paper Abstract
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning -- a recent trend in NLP -- to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.
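The abstract describes the core mechanism at a high level: a lightweight network maps each image's feature to an input-conditional token that is combined with the shared learnable context vectors before the class-specific prompts are formed. The following is a minimal PyTorch sketch of that idea; the module name, network width, and tensor dimensions are illustrative assumptions rather than the authors' exact implementation (see the linked repository for the reference code).

```python
# Minimal sketch of conditional prompt learning as described in the abstract:
# an input-conditional token (vector) is added to shared learnable context vectors.
# Names and dimensions are illustrative assumptions, not the authors' exact code;
# see https://github.com/KaiyangZhou/CoOp for the reference implementation.
import torch
import torch.nn as nn


class ConditionalPromptLearner(nn.Module):
    def __init__(self, n_ctx=4, ctx_dim=512, img_feat_dim=512, n_classes=10):
        super().__init__()
        # Shared learnable context vectors (static prompts, as in CoOp).
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Lightweight network: maps an image feature to a per-instance bias token.
        self.meta_net = nn.Sequential(
            nn.Linear(img_feat_dim, img_feat_dim // 16),
            nn.ReLU(inplace=True),
            nn.Linear(img_feat_dim // 16, ctx_dim),
        )
        # Placeholder class-name token embeddings (one token per class here).
        self.class_embed = nn.Parameter(torch.randn(n_classes, 1, ctx_dim) * 0.02)

    def forward(self, image_features):
        # image_features: (batch, img_feat_dim), e.g. from a frozen image encoder.
        bias = self.meta_net(image_features)             # (batch, ctx_dim)
        ctx = self.ctx.unsqueeze(0) + bias.unsqueeze(1)  # (batch, n_ctx, ctx_dim)
        # Build one prompt per (image, class) pair: [conditioned context | class tokens].
        batch, n_cls = ctx.size(0), self.class_embed.size(0)
        ctx = ctx.unsqueeze(1).expand(-1, n_cls, -1, -1)           # (batch, n_cls, n_ctx, ctx_dim)
        cls = self.class_embed.unsqueeze(0).expand(batch, -1, -1, -1)
        return torch.cat([ctx, cls], dim=2)                        # (batch, n_cls, n_ctx+1, ctx_dim)


if __name__ == "__main__":
    learner = ConditionalPromptLearner()
    feats = torch.randn(2, 512)  # stand-in for CLIP image features
    prompts = learner(feats)
    print(prompts.shape)  # torch.Size([2, 10, 5, 512])
```

In this sketch the static part corresponds to CoOp's shared context, while the per-image bias produced by the small network makes the prompt instance-conditional, which is the property the abstract credits for better generalization to unseen classes.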