[Feature] Add minigpt4 gradio demo and training script #1758
Hi @hmtbgc, we'd like to express our appreciation for your valuable contributions to mmpretrain. Your efforts have significantly aided in enhancing the project's quality. If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + GitHub ID". Thanks again for your awesome contribution, and we're excited to have you as part of our community!
Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry; just make the pull request and seek help from maintainers.
Motivation
Implement a Gradio demo and a training script for MiniGPT-4.
Modification
- Add `minigpt4_demo.py` and an auxiliary file (`conversation.py`) under `projects/gradio_demo`.
- Modify `tools/train.py` to support DeepSpeed training.
- Add a new dataset (`mmpretrain/datasets/minigpt4_dataset.py`).
- Modify `minigpt4.py` so that it trains properly.
- Add a new config (`minigpt-4_baichuan-7b_caption.py`) for training MiniGPT-4. This config replaces vicuna with another LLM to improve Chinese-generation ability.
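As a rough usage sketch of the new demo (the actual command-line interface is defined in `minigpt4_demo.py` itself, so the invocation below is only an assumption):

```shell
# Hypothetical invocation: consult minigpt4_demo.py for its real
# arguments (checkpoint path, device, server port, etc.).
python projects/gradio_demo/minigpt4_demo.py
```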
BC-breaking (Optional)
Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
No
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
Training guide
1. Download the dataset from this url. There are two datasets, minigpt4 and llava; either can be used individually for training. Set line 37 of `configs/minigpt4/minigpt-4_baichuan-7b_caption.py` to the path of the minigpt4/llava dataset, and line 38 to `minigpt4_official_data.json` / `llava_official_data.json`.
2. Download the baichuan7b weights from this url and set lines 104 and 108 to the path of the baichuan7b weights.
3. Run the following command in the terminal:
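A sketch of the command, assuming mmpretrain's standard distributed launcher with the 4-GPU default described below (the exact flags for the DeepSpeed support added in this PR may differ):

```shell
# Sketch: launch distributed training on 4 GPUs with mmpretrain's
# standard script; DeepSpeed-specific flags may also be required.
bash tools/dist_train.sh configs/minigpt4/minigpt-4_baichuan-7b_caption.py 4
```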
Each GPU may use up to 28GB of memory, and training for 6 epochs takes about 34 minutes. 4 GPUs are used for training by default; if you only have one GPU, reduce the learning rate accordingly (e.g. from 1e-3 to 5e-4).
After training, you will get a file named `mp_rank_00_model_states.pt` under `work_dirs/minigpt-4_baichuan-7b_caption/epoch_6.pth`. Post-process `mp_rank_00_model_states.pt` with the following python code:
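A minimal sketch of such a conversion, assuming the DeepSpeed model-states file stores the raw weights under its usual `module` key (the output filename below is hypothetical):

```python
import torch

# Load the DeepSpeed model-states file produced by training.
ckpt = torch.load(
    'work_dirs/minigpt-4_baichuan-7b_caption/epoch_6.pth/mp_rank_00_model_states.pt',
    map_location='cpu')

# DeepSpeed typically keeps the model weights under the 'module' key;
# repack them into the plain {'state_dict': ...} layout that
# mmpretrain checkpoint loading expects.
state_dict = ckpt['module']
torch.save({'state_dict': state_dict}, 'minigpt4_baichuan7b.pth')  # hypothetical name
```

The resulting file can then be passed as a regular checkpoint wherever the testing or demo scripts expect one.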
Testing guide
Examples
Checklist
Before PR:
After PR: