Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling
This is the official code release for the paper 'Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling'.
Introduction
Tip-Adapter provides faster convergence and better performance than CLIP-Adapter by initializing the adapter with a key-value cache model built from the few-shot training set, rather than training it from random initialization.
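The core computation is simple enough to sketch. Below is a minimal, illustrative version of the cache-model logits, assuming L2-normalized CLIP features; the function and tensor names are ours rather than this repo's API, and alpha (residual ratio) and beta (sharpness) are the paper's hyperparameters with placeholder values:

import torch

def tip_adapter_logits(test_feat, cache_keys, cache_values, clip_weights,
                       alpha=1.0, beta=5.5):
    # test_feat:    (B, C)  L2-normalized CLIP image features of test images
    # cache_keys:   (NK, C) L2-normalized features of the N-way K-shot training set
    # cache_values: (NK, N) one-hot labels of the cached training images
    # clip_weights: (N, C)  L2-normalized CLIP text features of the class prompts
    affinity = test_feat @ cache_keys.t()                              # (B, NK) cosine similarities
    cache_logits = torch.exp(-beta * (1.0 - affinity)) @ cache_values  # (B, N)  cache-model prediction
    clip_logits = 100.0 * test_feat @ clip_weights.t()                 # (B, N)  CLIP zero-shot logits
    return clip_logits + alpha * cache_logits                          # residual blend of the two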
Implementation
Put tip_adapter_ImageNet.py into the folder of the official CLIP codebase and run

python tip_adapter_ImageNet.py

which should reach 65.51% top-1 accuracy on the ImageNet validation set.
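The script builds on the official CLIP package for feature extraction. For reference, a hedged example of computing the L2-normalized image features it works with (the model name and image path below are placeholders):

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load a CLIP backbone; ResNet-50 is assumed here for illustration.
model, preprocess = clip.load("RN50", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    feat = model.encode_image(image)
feat = feat / feat.norm(dim=-1, keepdim=True)  # L2-normalize, matching CLIP's logit computation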
This repo will be completed in a few days.
Contributors
Peng Gao, Renrui Zhang
Acknowledgment
CLIP, CoOp and CLIP-Adapter