Bug: exception while raising another exception in convert_llama_ggml_to_gguf script #8929
Labels
bug-unconfirmed
low severity
What happened?
When trying to convert this GGML model from Hugging Face to GGUF, the script hit an error in this function, but while raising the
ValueError
it encountered a second exception. How I called the Python script:
python convert_llama_ggml_to_gguf.py --input models/bigtrans-13b.ggmlv3.q6_K --output q6_K
As is evident, an input with the wrong data type (an int instead of GGMLQuantizationType) was passed to this function. I fixed this issue in #8928
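A minimal sketch of how this failure mode can arise (the enum and function names below are simplified stand-ins, not the actual code from convert_llama_ggml_to_gguf.py): if a plain int reaches code that builds a ValueError message via an enum attribute such as .name, the attribute lookup itself fails, so an AttributeError is raised while constructing the intended ValueError.

```python
from enum import IntEnum


class GGMLQuantizationType(IntEnum):
    # Hypothetical, simplified stand-in for the real quantization enum.
    Q6_K = 14


def check_quant(qtype):
    # If qtype is a plain int, the isinstance check fails, and then
    # qtype.name raises AttributeError -- an exception occurs while
    # raising the intended ValueError.
    if not isinstance(qtype, GGMLQuantizationType):
        raise ValueError(f'Unsupported quantization type: {qtype.name}')
    return qtype


def outcome(qtype):
    # Report which exception type actually escapes check_quant.
    try:
        check_quant(qtype)
        return 'ok'
    except AttributeError:
        return 'AttributeError'
    except ValueError:
        return 'ValueError'
```

With a raw int like 14, outcome() reports AttributeError rather than the intended ValueError; converting the int to the enum up front (e.g. GGMLQuantizationType(14)) avoids both exceptions, which is the general shape of the fix.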
Name and Version
version: 3535 (1e6f655)
What operating system are you seeing the problem on?
Linux
Relevant log output