
How to fine-tune LLaMA-Adapter-V2 for LLaMA-2 7B models #129

Open
Harsh-raj opened this issue Nov 11, 2023 · 1 comment

Comments

@Harsh-raj

I am trying to use LLaMA-Adapter-V2 with the LLaMA-2 7B model, but there must be differences in the fine-tuning process because of the changes from LLaMA to LLaMA-2. How do I run the fine-tuning with LLaMA-2 instead of the original LLaMA model?
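For reference, here is a minimal sketch of what I would expect the swap to involve: LLaMA-2 7B keeps the same transformer layout as LLaMA-1 7B (only the training data and context length changed), so the Meta-format checkpoints should line up key-for-key. The paths below are placeholders, and this is not the LLaMA-Adapter repo's own fine-tuning script, just a check I would run before pointing the existing recipe at LLaMA-2 weights.

```python
# Sketch (assumed local Meta-format checkpoints at the paths below; not the
# LLaMA-Adapter repo's API). If the two 7B state dicts match key-for-key,
# the existing adapter fine-tuning code should be able to load LLaMA-2
# weights with strict=False, since only the adapter tensors are new.
from pathlib import Path
import torch

V1_CKPT = Path("/path/to/llama-7b/consolidated.00.pth")    # assumed path
V2_CKPT = Path("/path/to/llama-2-7b/consolidated.00.pth")  # assumed path

sd_v1 = torch.load(V1_CKPT, map_location="cpu")
sd_v2 = torch.load(V2_CKPT, map_location="cpu")

# Compare parameter names and shapes between LLaMA-1 7B and LLaMA-2 7B.
only_in_v1 = set(sd_v1) - set(sd_v2)
only_in_v2 = set(sd_v2) - set(sd_v1)
shape_mismatch = {
    k: (tuple(sd_v1[k].shape), tuple(sd_v2[k].shape))
    for k in set(sd_v1) & set(sd_v2)
    if sd_v1[k].shape != sd_v2[k].shape
}
print("keys only in LLaMA-1:", only_in_v1)
print("keys only in LLaMA-2:", only_in_v2)
print("shape mismatches:", shape_mismatch)
```

The other obvious change is the maximum sequence length (LLaMA-2 supports a 4096-token context instead of 2048), but I am not sure what else needs adjusting in the fine-tuning configuration.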


gaopengpjlab commented Nov 11, 2023

Please try SPHINX, the upgraded LLaMA-Adapter-V2 model, at the following link:

https://github.com/Alpha-VLLM/LLaMA2-Accessory/tree/main/SPHINX

SPHINX uses 1440/288 visual tokens and LLaMA2-13B, while LLaMA-Adapter-V2 uses only 10 visual tokens and LLaMA-7B.
