What is the problem? #6650

Open · 1 task done
xinghuan2hao opened this issue Jan 11, 2025 · 0 comments
Labels: bug (Something isn't working)

Describe the bug

My model won't load.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

After selecting the TheBloke_Chronos-Hermes-13B-SuperHOT-8K-GPTQ model and choosing llama.cpp as the model loader, loading the model produces an error.

Screenshot

(screenshot of the error in the web UI at 127.0.0.1, taken Jan 11, 2025)

Logs

Traceback (most recent call last):
  File "F:\AI\Text-generation-webui-C\modules\ui_model_menu.py", line 232, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\AI\Text-generation-webui-C\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\AI\Text-generation-webui-C\modules\models.py", line 277, in llamacpp_loader
    model_file = sorted(Path(f'{shared.args.model_dir}/{model_name}').glob('*.gguf'))[0]
                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
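
For context, the traceback points at an unguarded glob: the llama.cpp loader looks for *.gguf files in the selected model's folder, but a GPTQ download such as this one typically ships .safetensors weights and no GGUF file, so the glob returns an empty list and indexing [0] raises an IndexError. A minimal sketch of that failing logic with a guard added (find_gguf is a hypothetical helper for illustration, not actual webui code):

from pathlib import Path

def find_gguf(model_dir: str, model_name: str) -> Path:
    # Mirrors the failing line in modules/models.py: glob for GGUF
    # files in the model folder and take the first match.
    files = sorted(Path(f'{model_dir}/{model_name}').glob('*.gguf'))
    if not files:
        # A GPTQ folder holds .safetensors weights, not .gguf, so the
        # glob comes back empty and the original code's [0] raises
        # IndexError instead of a readable message.
        raise FileNotFoundError(
            f'No .gguf file found in {model_dir}/{model_name}; '
            'the llama.cpp loader only reads GGUF models.'
        )
    return files[0]

In other words, this looks less like a corrupted model and more like a loader mismatch: a GPTQ model selected with the llama.cpp loader, which expects GGUF.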

System Info

System: Windows 11
GPU: NVIDIA RTX 4080 (mobile)