
Bug: exception while raising another exception in convert_llama_ggml_to_gguf script #8929

Closed
farbodbj opened this issue Aug 8, 2024 · 1 comment
Labels
bug-unconfirmed low severity Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)

Comments

@farbodbj (Contributor)

farbodbj commented Aug 8, 2024

What happened?

When trying to convert this GGML model from Hugging Face to GGUF, the script encountered an error in this function, but while raising the ValueError it hit a second exception.

How I called the Python script:
python convert_llama_ggml_to_gguf.py --input models/bigtrans-13b.ggmlv3.q6_K --output q6_K

As the traceback shows, an input with the wrong data type (a plain int instead of a GGMLQuantizationType) was passed to this function. I fixed this issue in #8928.
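A minimal sketch of the failure mode and the style of fix: if the quantization type read from the GGML header arrives as a raw int, coercing it into the enum before use makes `.name` valid when the error message is built. The enum value, block size, and type size below are illustrative stand-ins for the real `gguf` constants, not the library's actual definitions.

```python
from enum import IntEnum

# Hypothetical stand-in for gguf's GGMLQuantizationType.
class GGMLQuantizationType(IntEnum):
    Q6_K = 14

QK_K = 256                                     # elements per quantized block (illustrative)
TYPE_SIZES = {GGMLQuantizationType.Q6_K: 210}  # bytes per Q6_K block (illustrative)

def quant_shape_from_byte_shape(shape: tuple, quant_type) -> tuple:
    # Coerce a raw int (as parsed from the GGML header) into the enum;
    # without this, quant_type.name raises AttributeError while the
    # ValueError below is being constructed.
    quant_type = GGMLQuantizationType(quant_type)
    type_size = TYPE_SIZES[quant_type]
    if shape[-1] % type_size != 0:
        raise ValueError(
            f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple "
            f"of {quant_type.name} type size ({type_size})"
        )
    # Convert the byte count of the last dimension back to an element count.
    return (*shape[:-1], shape[-1] // type_size * QK_K)
```

With the coercion in place, an invalid byte shape now raises the intended ValueError (naming the quant type) instead of a secondary AttributeError.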

Name and Version

version: 3535 (1e6f655)

What operating system are you seeing the problem on?

Linux

Relevant log output

line 22, in quant_shape_from_byte_shape
    raise ValueError(f"Quantized tensor bytes per row ({shape[-1]}) is not a multiple of {quant_type.name} type size ({type_size})")
                                                                                          ^^^^^^^^^^^^^^^
AttributeError: 'int' object has no attribute 'name'
@farbodbj added the bug-unconfirmed and low severity labels on Aug 8, 2024
@farbodbj (Contributor, Author)

The issue is now fixed, since #8928 has been merged.
