Describe the bug

My model won't load.

Is there an existing issue for this?

Reproduction

Select the TheBloke_Chronos-Hermes-13B-SuperHOT-8K-GPTQ model, choose llama.cpp as the model loader, and load the model. The load fails with the error below.

Screenshot

Logs

```
Traceback (most recent call last):
  File "F:\AI\Text-generation-webui-C\modules\ui_model_menu.py", line 232, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\AI\Text-generation-webui-C\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\AI\Text-generation-webui-C\modules\models.py", line 277, in llamacpp_loader
    model_file = sorted(Path(f'{shared.args.model_dir}/{model_name}').glob('*.gguf'))[0]
                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
```

System Info

OS: Windows 11
GPU: NVIDIA RTX 4080 (mobile)
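Judging from the traceback, the likely cause is a loader/format mismatch: the llama.cpp loader globs the model folder for `*.gguf` files and indexes the first match, but a GPTQ model folder contains no GGUF files, so the glob is empty and `[0]` raises an `IndexError`. Below is a minimal sketch of that failure mode; the folder and file names are hypothetical, for illustration only, not taken from the actual install.

```python
from pathlib import Path
import tempfile

# Reproduce the failing pattern from llamacpp_loader:
#   model_file = sorted(Path(...).glob('*.gguf'))[0]
# A GPTQ-format model folder has no .gguf files, so the glob is empty.
with tempfile.TemporaryDirectory() as model_dir:
    # Simulate a GPTQ model folder (hypothetical file name for illustration)
    (Path(model_dir) / "model.safetensors").touch()

    gguf_files = sorted(Path(model_dir).glob("*.gguf"))
    print(gguf_files)  # empty list: nothing matches *.gguf
    try:
        model_file = gguf_files[0]
    except IndexError:
        print("IndexError: the llama.cpp loader found no .gguf file to load")
```

If this is the cause, either a GGUF build of the model or a loader that matches the GPTQ format would avoid the empty glob.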