Error making gguf: KeyError: '<|user|>' #14
Comments
Hi! You can get the token id from the tokenizer:
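A minimal sketch of one way to do that with the transformers AutoTokenizer (the trust_remote_code flag and the convert_tokens_to_ids call are assumptions, not necessarily what was originally posted):

```python
# Sketch: look up the id of the special token '<|user|>' for LongWriter-glm4-9b.
# trust_remote_code=True is assumed because the GLM-4 tokenizer ships custom code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/LongWriter-glm4-9b", trust_remote_code=True
)
print(tokenizer.convert_tokens_to_ids("<|user|>"))
```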
Hi, how do I fix it? Thanks!
Have you updated to our most recent model files? Also, please use
@bys0318 thank you, it appears that the token id is:
Is this correct? In llama.cpp, I did this: found the lines starting at 3711 and replaced them.
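A hedged sketch of the kind of change being discussed: instead of indexing tokenizer.get_added_vocab() directly (which raises KeyError when '<|user|>' is not registered as an added token), resolve the id through the tokenizer itself. The helper below and its fallback are assumptions, not the actual convert_hf_to_gguf.py source:

```python
# Sketch only, not the real converter code: a lookup that tolerates '<|user|>'
# missing from the added-token map, which is where this KeyError comes from.
from transformers import AutoTokenizer

def special_token_id(tok, token: str) -> int:
    # Prefer the added-vocab entry when present, otherwise ask the tokenizer.
    added = tok.get_added_vocab()
    if token in added:
        return added[token]
    return tok.convert_tokens_to_ids(token)

tokenizer = AutoTokenizer.from_pretrained(
    "/home/david/llm/LongWriter-glm4-9b", trust_remote_code=True
)
print(special_token_id(tokenizer, "<|user|>"))  # id the converter would use for "eot"
```

If you patch the converter itself, the same idea applies: replace the direct dictionary access with a lookup that cannot raise KeyError.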
This is correct. Thanks for sharing!
Thank you very much, the format conversion was successful. |
Thanks for the detailed steps! |
System Info / 系統信息
transformers: 4.44.0
llama.cpp: latest
Hi, when I try to make a GGUF I get this error: KeyError: '<|user|>'.
Do you know how to fix this?
On Hugging Face, someone else has the same problem:
https://huggingface.co/THUDM/LongWriter-glm4-9b/discussions/1#66bc33eccd16fda66e7caa1f
But I don't know how to apply that solution.
Is the EOT even needed?
Thank you!
Who can help? / 谁可以帮助到您?
No response
Information / 问题信息
Reproduction / 复现过程
With llama.cpp:
python convert_hf_to_gguf.py /home/david/llm/LongWriter-glm4-9b --outtype f32
Here is the code:
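A hypothetical diagnostic sketch (not the converter's code) that checks the same lookup directly, using the model path from the command above:

```python
# Hypothetical diagnostic: check whether '<|user|>' is present in the
# added-token map that the conversion script indexes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "/home/david/llm/LongWriter-glm4-9b", trust_remote_code=True
)
added = tokenizer.get_added_vocab()
print("<|user|>" in added)                          # False would explain the KeyError
print(tokenizer.convert_tokens_to_ids("<|user|>"))  # the id is still resolvable this way
```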
Expected behavior / 期待表现
For the conversion to succeed so the model can be quantized.