Hi, thank you for sharing this PLIP model! It is great work! However, I am a newbie at model training and would like to ask for some help.
I am confused about what PC_DEFAULT_BACKBONE in the config.env file should be. I assumed it should be a pre-trained model, so I assigned it the path to the model, but it seems I am wrong. I am trying to run fine_tune_training.py, but now I do not know how to proceed. Could you give me some assistance? I would appreciate it! :)
That error means the keys of the model you initialized do not match the keys in the state dictionary you are loading. You can either create a mapping between the two sets of keys by manual inspection, or load the model through the Hugging Face API instead:
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

# Load the PLIP model and its processor from the Hugging Face Hub
model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("images/image1.jpg")
inputs = processor(text=["a photo of label 1", "a photo of label 2"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Image-text similarity scores; softmax gives per-label probabilities
probs = outputs.logits_per_image.softmax(dim=1)
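If you prefer the manual-mapping route instead, a minimal sketch of renaming checkpoint keys before loading might look like the following. The key names and the `"module."` prefix here are hypothetical examples (that prefix commonly appears when a checkpoint was saved from a `DataParallel`-wrapped model); inspect your own checkpoint's keys to build the right mapping:

```python
import torch


def remap_state_dict(state_dict, prefix_map):
    """Rename checkpoint keys so they match the names the model expects."""
    remapped = {}
    for key, value in state_dict.items():
        new_key = key
        for old_prefix, new_prefix in prefix_map.items():
            if key.startswith(old_prefix):
                new_key = new_prefix + key[len(old_prefix):]
                break
        remapped[new_key] = value
    return remapped


# Hypothetical checkpoint whose keys carry a "module." prefix the model
# does not expect; strip it so load_state_dict finds matching names.
checkpoint = {"module.visual.proj.weight": torch.zeros(2, 2)}
fixed = remap_state_dict(checkpoint, {"module.": ""})
# model.load_state_dict(fixed)  # then load with the matching keys
```

You can print `model.state_dict().keys()` next to the checkpoint's keys to see exactly which prefixes or names differ before writing the mapping.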