Does anyone know how to partially fine-tune the CLIP model? For example:

- only the attention layers, e.g. fine-tuning the query and key projections while keeping the value projections frozen;
- only a few attention layers, while all other layers stay frozen.

Some works implement this on top of the original CLIP code, but OpenCLIP uses timm models, so help would be appreciated.
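A minimal sketch of one generic way to do this with plain PyTorch, assuming the default OpenCLIP ViT backbone; the parameter names below are an assumption and will differ for timm backbones, so print `model.named_parameters()` to check them first:

```python
import torch
import open_clip

# Load a model; the architecture/pretrained tag here is just an example.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)

# Freeze every parameter first.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the parameters whose names match the blocks you want to train.
# Assumption: the native OpenCLIP ViT naming, where visual transformer blocks
# live under visual.transformer.resblocks.{i}.attn.
trainable_patterns = [
    "visual.transformer.resblocks.10.attn",
    "visual.transformer.resblocks.11.attn",
]
for name, p in model.named_parameters():
    if any(pat in name for pat in trainable_patterns):
        p.requires_grad = True

# Give the optimizer only the trainable parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)

# Sanity check: list what will actually be updated.
for name, p in model.named_parameters():
    if p.requires_grad:
        print(name, tuple(p.shape))
```

Note that in both `nn.MultiheadAttention` and timm's attention blocks the query, key, and value projections are usually fused into a single weight, so freezing only the value projection cannot be done at the parameter level; it would need gradient masking (e.g. zeroing the corresponding rows of the gradient in a hook) rather than `requires_grad`.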