4.7 Article

How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models?

Journal

International Journal of Computer Vision

Publisher

Springer
DOI: 10.1007/s11263-023-01895-7

Keywords

CLIP; OOD detection; Fine-tuning; Multi-modality; Vision-language models; Prompt learning; Few-shot learning; Adapter

Abstract

This paper investigates how fine-tuning affects the OOD detection performance of CLIP models. By framing OOD detection as multi-modal concept matching, it establishes a connection between fine-tuning methods and various OOD scores. The results suggest that choosing an appropriate OOD score is crucial for fine-tuned CLIP models, with prompt learning demonstrating state-of-the-art OOD detection performance.

Recent large vision-language models such as CLIP have shown remarkable out-of-distribution (OOD) detection and generalization performance. However, their zero-shot in-distribution (ID) accuracy is often limited on downstream datasets. Recent CLIP-based fine-tuning methods such as prompt learning have demonstrated significant improvements in ID classification and OOD generalization where OOD labels are available. Nonetheless, it remains unclear whether the model is reliable under semantic shifts when no OOD labels are available. In this paper, we aim to bridge this gap and present a comprehensive study of how fine-tuning impacts OOD detection for few-shot downstream tasks. By framing OOD detection as multi-modal concept matching, we establish a connection between fine-tuning methods and various OOD scores. Our results suggest that a proper choice of OOD score is essential for CLIP-based fine-tuning. In particular, the maximum concept matching (MCM) score consistently provides a promising solution. We also show that prompt learning achieves state-of-the-art OOD detection performance, surpassing the zero-shot counterpart.
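
As a concrete illustration of the multi-modal concept-matching view, the sketch below computes an MCM-style score: the image embedding is compared against the embedding of each ID class prompt by cosine similarity, the similarities are passed through a temperature-scaled softmax, and the maximum probability is taken as the score. This is a minimal sketch of the standard MCM formulation; the function name, the random stand-in features, the class count, and the temperature value are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mcm_score(image_emb: np.ndarray, concept_embs: np.ndarray, tau: float = 0.01) -> float:
    """Maximum concept matching (MCM) score for OOD detection.

    image_emb:    (d,)   embedding of the input image (e.g., from CLIP's image encoder).
    concept_embs: (K, d) embeddings of the K ID class prompts
                  (e.g., "a photo of a <class>") from the text encoder.
    tau:          softmax temperature; a small value sharpens the distribution.

    Higher scores indicate the input is more likely in-distribution.
    """
    # Cosine similarity between the image and each concept prototype.
    img = image_emb / np.linalg.norm(image_emb)
    cpt = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    sims = cpt @ img                              # shape (K,)

    # Temperature-scaled softmax over the K concepts; MCM is its maximum entry.
    logits = sims / tau
    probs = np.exp(logits - logits.max())         # subtract max for numerical stability
    probs /= probs.sum()
    return float(probs.max())

# Toy usage: random features stand in for real CLIP embeddings.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
concept_embs = rng.normal(size=(10, 512))         # 10 hypothetical ID classes
score = mcm_score(image_emb, concept_embs)
# Inputs whose score falls below a threshold tuned on ID validation data are flagged as OOD.
```

The small temperature sharpens the softmax over concepts, which the MCM line of work reports as important for separating ID from OOD inputs; under a fine-tuned model, only the image and prompt embeddings change while the score itself is computed the same way.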

Authors

Yifei Ming, Yixuan Li
