Backbone path? #24

Open
clyy21 opened this issue Mar 26, 2024 · 1 comment


@clyy21 commented Mar 26, 2024

Hi, thank you for sharing this PLIP model! It is great work! However, I am a newbie at model training and would like to ask for some help.

I am confused about what PC_DEFAULT_BACKBONE in the config.env file should be set to. I assumed it should be a pre-trained model, so I set it to the path of a pre-trained model, but that seems to be wrong. I am trying to run fine_tune_training.py but do not know how to proceed. Could you give me some assistance? I would appreciate it! :)

[screenshot of the error attached]

@Himanshunitrr commented

That means the keys of the model you initialized do not match the keys in the state dictionary you loaded. You can either build a key mapping by manual inspection, or load the model through the Hugging Face API instead, for example:

    from PIL import Image
    from transformers import CLIPProcessor, CLIPModel

    # Load the PLIP weights and the matching processor from the Hugging Face Hub.
    model = CLIPModel.from_pretrained("vinid/plip")
    processor = CLIPProcessor.from_pretrained("vinid/plip")

    image = Image.open("images/image1.jpg")

    # Preprocess the candidate labels and the image into model inputs.
    inputs = processor(text=["a photo of label 1", "a photo of label 2"],
                       images=image, return_tensors="pt", padding=True)

    outputs = model(**inputs)
    # Image-text similarity scores; softmax gives per-label probabilities.
    probs = outputs.logits_per_image.softmax(dim=1)
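
If you still want to load a local checkpoint yourself, a minimal sketch of the manual-mapping route might look like the following. The architecture, the checkpoint path, and the "module." prefix here are all hypothetical examples; print both key sets first and build whatever mapping your checkpoint actually needs.

    import torch
    from transformers import CLIPModel

    # Initialize the architecture you intend to fine-tune (any CLIP variant).
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

    # Hypothetical local checkpoint path.
    state_dict = torch.load("path/to/backbone.pt", map_location="cpu")

    # Compare key sets to see how the names differ.
    print(set(state_dict) - set(model.state_dict()))

    # Hypothetical remapping: strip a "module." prefix left by DataParallel.
    remapped = {k.removeprefix("module."): v for k, v in state_dict.items()}
    missing, unexpected = model.load_state_dict(remapped, strict=False)
    print("missing:", missing, "unexpected:", unexpected)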
