Using convert.py with a fine-tuned phi-2 #5009
Comments
Having the same kind of issue with a fine-tuned Falcon 7B bf16 model.
Do you have reason to believe it is supported?
Same issue with phi-2.
I suggest using `convert-hf-to-gguf.py`. When new models come out on HuggingFace, their conversion support is usually added there first.
Works for me. Thanks a lot!
I have a similar issue: after fine-tuning, I get … Doing some more debugging in the … whereas … If it's mapping old keys to new ones, I'm happy to work on an MR, but there seems to be some info lost there.
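For readers unfamiliar with what "mapping old keys to new ones" means here: the HF-aware converter translates HuggingFace checkpoint tensor names into GGUF tensor names via a lookup table. The sketch below is illustrative only, with a hypothetical helper name and a deliberately tiny mapping; it is not the real table in gguf-py's `TensorNameMap`.

```python
import re

# Illustrative-only suffix mapping (a tiny subset, for the sketch):
SUFFIX_MAP = {
    "self_attn.q_proj.weight": "attn_q.weight",
    "self_attn.k_proj.weight": "attn_k.weight",
    "mlp.fc1.weight": "ffn_up.weight",
}

def hf_to_gguf_name(name):
    """Map an HF checkpoint tensor name to a GGUF-style name (sketch)."""
    # Per-layer tensors: "model.layers.<n>.<suffix>" -> "blk.<n>.<gguf suffix>"
    m = re.match(r"model\.layers\.(\d+)\.(.+)$", name)
    if m and m.group(2) in SUFFIX_MAP:
        return f"blk.{m.group(1)}.{SUFFIX_MAP[m.group(2)]}"
    if name == "model.embed_tokens.weight":
        return "token_embd.weight"
    return None  # unknown tensor: conversion scripts typically error out here
```

A checkpoint whose keys fall outside the expected naming (for example, one saved from an older remote-code phi-2) would hit the `None` branch, which is the kind of failure being discussed above.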
@tgalery The line numbers in the errors you got seem different from those in the latest commit. The fix you're describing is already implemented, but maybe your local checkout of llama.cpp is outdated. This is what you should see in `convert-hf-to-gguf.py`, lines 1287 to 1296 at commit 5cdb371.
If it still doesn't work with the latest version, please do tell. Hope this helps :)
Oh I see, llama.cpp was pulled inside a Python repo via …
@compilade thanks for the explanation. It works like a charm. Just a question: I'm working on a pipeline, and for some model types, say …
This issue was closed because it has been inactive for 14 days since being marked as stale. |
We are loading phi-2 from HF using `load_in_8bit=True` and `torch_dtype=torch.float16`, then we fine-tune it and finally we save it locally.
When running `convert.py ./phi-2` we get this error:
```
File "/content/convert.py", line 764, in convert
    data_type = SAFETENSORS_DATA_TYPES[info['dtype']]
KeyError: 'I8'
```
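The `KeyError: 'I8'` means the saved checkpoint contains 8-bit integer tensors (produced by `load_in_8bit=True`), and the converter's `SAFETENSORS_DATA_TYPES` table only knows float dtypes. You can confirm this without loading any weights by inspecting the safetensors header, which starts with a little-endian u64 header length followed by that many bytes of JSON. The helper name below is hypothetical, but the file layout it reads is the documented safetensors format.

```python
import json
import struct

def safetensors_dtypes(path):
    """Return {tensor_name: dtype} from a .safetensors file's JSON header,
    without loading any tensor data.

    Layout: 8 bytes little-endian u64 header length, then the JSON header
    describing each tensor's dtype, shape, and data offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {k: v["dtype"] for k, v in header.items() if k != "__metadata__"}
```

If the result contains entries like `"I8"`, the checkpoint was saved in quantized form and needs to be re-saved in a float dtype before conversion.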
If we try the same using `load_in_8bit=False`, then we get:
```
File "/content/convert.py", line 257, in loadHFTransformerJson
    f_norm_eps = config["rms_norm_eps"],
KeyError: 'rms_norm_eps'
```
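This second error happens because `convert.py` assumes a LLaMA-style `config.json` containing `rms_norm_eps`, while phi-2 uses LayerNorm and its config carries a layer-norm epsilon key instead. A tolerant lookup like the hypothetical helper below avoids the `KeyError`, though it would not by itself make `convert.py` understand the phi architecture; the HF-aware `convert-hf-to-gguf.py` has that support.

```python
def find_norm_eps(config):
    """Return the normalization epsilon from an HF config dict (sketch).

    phi-2 uses LayerNorm, so its config carries a layer-norm epsilon key
    rather than the LLaMA-style "rms_norm_eps" that convert.py expects.
    The exact key name varies between transformers versions, so several
    candidates are tried.
    """
    for key in ("rms_norm_eps", "layer_norm_eps", "layer_norm_epsilon"):
        if key in config:
            return config[key]
    raise KeyError("no norm-eps key found in config: " + ", ".join(config))
```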
How do we generate a GGUF from a fine-tuned phi-2? Many thanks!
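Putting the two errors together: the conversion scripts cannot read 8-bit (`I8`) tensors, so the fine-tuned model generally needs to be materialized in a float dtype before converting. Below is a hedged sketch: a small testable check for quantized dtypes, followed by commented re-save steps that assume `transformers` and `torch` are installed and use hypothetical local paths.

```python
def has_quantized_tensors(dtypes):
    """True if any tensor dtype is an 8-bit integer type, which checkpoints
    saved from load_in_8bit models contain and convert.py cannot read."""
    return any(d in {"I8", "U8"} for d in dtypes.values())

# If this returns True, one option (a sketch, not a guaranteed recipe) is to
# re-save the fine-tuned model in fp16 and convert that copy instead:
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   import torch
#   model = AutoModelForCausalLM.from_pretrained("./phi-2", torch_dtype=torch.float16)
#   model.save_pretrained("./phi-2-fp16", safe_serialization=True)
#   AutoTokenizer.from_pretrained("./phi-2").save_pretrained("./phi-2-fp16")
#
# and then run the HF-aware converter on the float copy:
#   python convert-hf-to-gguf.py ./phi-2-fp16
```

If the fine-tune was done with adapters (e.g. LoRA) on an 8-bit base, merging the adapters into a float-dtype base model before saving is the more reliable route than re-loading an already-quantized checkpoint.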