
Finetune model inference error #124

Open · thisurawz1 opened this issue Nov 20, 2024 · 3 comments

@thisurawz1

When I try to run inference after fine-tuning, I get the error below.

```python
import sys
sys.path.append('./')
from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference():
    disable_torch_init()

    # Video Inference
    modal = 'video'
    modal_path = 'assets/cat_and_chicken.mp4'
    instruct = 'What animals are in the video, what are they doing, and how does the video feel?'
    # Reply:
    # The video features a kitten and a baby chick playing together. The kitten is seen laying on the floor while the baby chick hops around. The two animals interact playfully with each other, and the video has a cute and heartwarming feel to it.

    # Image Inference (note: these assignments override the video settings above)
    modal = 'image'
    modal_path = 'assets/sora.png'
    instruct = 'What is the woman wearing, what is she doing, and how does the image feel?'
    # Reply:
    # The woman in the image is wearing a black coat and sunglasses, and she is walking down a rain-soaked city street. The image feels vibrant and lively, with the bright city lights reflecting off the wet pavement, creating a visually appealing atmosphere. The woman's presence adds a sense of style and confidence to the scene, as she navigates the bustling urban environment.

    model_path = '/home/bingxing2/home/scx6md1/sep20vl2/VideoLLaMA2/work_dirs/videollama2/finetune_oeqr128nosm1'
    # Base model inference (only need to replace model_path)
    # model_path = 'DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base'
    model, processor, tokenizer = model_init(model_path)
    output = mm_infer(processor[modal](modal_path), instruct, model=model, tokenizer=tokenizer, do_sample=False, modal=modal)

    print(output)


if __name__ == "__main__":
    inference()
```

Error: [screenshot of the traceback attached to the original issue]

@Danielement321

I've hit the same problem. If you fine-tuned with LoRA, try adding 'lora' to the name of the model checkpoint folder. The loading code infers the model type from the folder name.
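For context, this is the kind of name-based dispatch being described — a minimal sketch, not the actual VideoLLaMA2 loader; the function and variable names here are illustrative:

```python
import os

def detect_checkpoint_type(model_path: str) -> str:
    """Illustrative sketch: infer how to load a checkpoint from its folder name."""
    model_name = os.path.basename(model_path.rstrip('/')).lower()
    if 'lora' in model_name:
        # LoRA checkpoints hold only adapter weights, so the loader must
        # also fetch the base model and merge the adapters into it.
        return 'lora'
    # Otherwise the folder is assumed to contain full model weights.
    return 'full'

print(detect_checkpoint_type('work_dirs/videollama2/finetune_oeqr128nosm1'))       # full
print(detect_checkpoint_type('work_dirs/videollama2/finetune_oeqr128nosm1_lora'))  # lora
```

If the folder name lacks 'lora', a loader like this treats the adapter-only checkpoint as a full model, which is why inference fails.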

@thisurawz1 (Author)

> I've hit the same problem. If you fine-tuned with LoRA, try adding 'lora' to the name of the model checkpoint folder. The loading code infers the model type from the folder name.

Can you give an example?

@Danielement321

For example, change your model folder name from 'finetune_oeqr128nosml' to 'finetune_oeqr128nosml_lora' if it was trained with LoRA.
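Concretely, the rename can be done in Python before calling `model_init` — a hedged sketch, with paths taken from the snippet above; adjust to your setup:

```python
import os

# Hypothetical paths based on the snippet above.
old_path = 'work_dirs/videollama2/finetune_oeqr128nosm1'
new_path = old_path + '_lora'

# Rename the checkpoint folder so the loader sees 'lora' in its name.
if not os.path.isdir(new_path):
    os.rename(old_path, new_path)

model_path = new_path  # then pass this to model_init(model_path)
```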
