Paper Title
Localized Latent Updates for Fine-Tuning Vision-Language Models
Paper Authors
Paper Abstract
Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities across many tasks, it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as often happens with classical fine-tuning approaches. In this work we propose a lightweight adapter that only updates the model's predictions close to seen data points. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results on classes both seen and unseen during training are comparable with or improve on the state of the art.
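To make the core idea of a localized update concrete, the sketch below shows one simple way such an adapter could behave: corrections to the model's predictions are weighted by a kernel over the distance between a query embedding and the stored (seen) support embeddings, so predictions far from all seen data points are left essentially unchanged. This is an illustrative assumption, not the paper's actual method; the function name `localized_adapter`, the Gaussian kernel, and the bandwidth `sigma` are all hypothetical choices.

```python
import numpy as np

def localized_adapter(query_emb, support_embs, support_updates, sigma=0.1):
    """Illustrative locality-weighted update (not the paper's exact method).

    query_emb:       (d,)   embedding of the input to classify
    support_embs:    (k, d) embeddings of the seen (support) data points
    support_updates: (k, C) per-support correction to the C class logits
    Returns a (C,) correction that decays to ~0 far from all support points.
    """
    # Cosine distance between the query and each support embedding.
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    dists = 1.0 - s @ q                       # (k,) in [0, 2]
    # Gaussian kernel: weight ~1 at a seen point, ~0 far away.
    weights = np.exp(-dists / sigma)
    # Kernel-weighted combination of the stored logit corrections.
    return weights @ support_updates
```

The final prediction would then add this correction to the frozen model's zero-shot logits; because the weights vanish away from the support set, out-of-dataset behavior is largely preserved.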