Whatever I try, no model loads #65
Comments
You need to download the q4_1 file, not q4_0.
I used the following link: https://huggingface.co/Pi3141/alpaca-7b-native-enhanced/blob/main/ggml-model-q4_1.bin
Where exactly did you get the models from?
And you're using q4_1, right?
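One way to double-check which quantization a downloaded .bin actually contains is to read its header instead of trusting the file name. Below is a rough sketch, assuming the early single-file GGML layout used by llama.cpp at the time (a magic number, an optional version, then seven int32 hparams ending in an ftype field where 2 = q4_0 and 3 = q4_1); the field order and ftype values are assumptions, and newer file formats differ.

```python
# ggml_check.py - rough header check for early llama.cpp GGML files (assumptions noted above)
import struct
import sys

FTYPE_NAMES = {0: "f32", 1: "f16", 2: "q4_0", 3: "q4_1"}

MAGIC_GGML = 0x67676D6C  # unversioned GGML
MAGIC_GGMF = 0x67676D66  # versioned
MAGIC_GGJT = 0x67676A74  # versioned, mmap-able

def inspect(path):
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        if magic not in (MAGIC_GGML, MAGIC_GGMF, MAGIC_GGJT):
            print(f"Unrecognized magic 0x{magic:08x} - probably not a GGML model file")
            return
        if magic != MAGIC_GGML:
            (version,) = struct.unpack("<I", f.read(4))
            print("file version:", version)
        # Assumed hparams order: n_vocab, n_embd, n_mult, n_head, n_layer, n_rot, ftype
        hparams = struct.unpack("<7i", f.read(7 * 4))
        ftype = hparams[-1]
        print("ftype:", ftype, "->", FTYPE_NAMES.get(ftype, "unknown"))

if __name__ == "__main__":
    inspect(sys.argv[1])
```

Running it as `python ggml_check.py ggml-model-q4_1.bin` should print `q4_1` if the file really is the q4_1 quantization.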
Can you try Alpaca native enhanced?
Maybe you can show the terminal log if you are using macOS or Linux; that would make things clearer.
Actually I can't. Llama.cpp doesn't show how many tokens of the prompt have been processed. What I'll do to stop people reporting that the model is broken is make it a rule that people cannot open an issue unless they have waited at least 1 hour for a response from the model, to make sure it's not just their computer. If a model can't be loaded, the app will notify you; it only freezes in rare edge cases.
I tried all these models and none of them work; everything just says couldn't load model. How do I find the terminal logs? I am using the macOS arm64 build.
Yeah, good luck to them finding a different tool that's faster than llama.cpp. If it takes that long for llama.cpp to run for them, then their CPU spec is probably not good, so it would also make sense that they wouldn't have a GPU, or that the GPU wouldn't be powerful enough.
Where can I find the terminal logs on Mac?
Sorry, I didn't test it on Mac before. I just assumed that when we run the command in a terminal, it will display some info like this:
[screenshot of terminal output]
That's normal, it's loading the model. Give it some time. |
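For anyone who can't find the logs: one option is to run the underlying llama.cpp command-line binary directly from a terminal, so the model-loading messages are printed where you can see them (and can be saved for the issue). A minimal sketch, assuming a llama.cpp `main` binary and a model path; both paths and the log file name are placeholders to adapt to your setup.

```python
# run_and_log.py - launch llama.cpp and mirror its output to the terminal and a log file
import subprocess
import sys

CMD = [
    "./main",                              # llama.cpp binary (placeholder path)
    "-m", "./models/ggml-model-q4_1.bin",  # model file (placeholder path)
    "-p", "Hello",                         # short prompt just to force a model load
]

with open("llama-load.log", "w") as log:
    proc = subprocess.Popen(
        CMD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    for line in proc.stdout:
        sys.stdout.write(line)  # show it live
        log.write(line)         # keep a copy to attach to the issue
    proc.wait()
```

If the model file is the problem, the loader error should appear in both the terminal and llama-load.log.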
What's the difference between q4_1.bin, q4_2.bin, q4_3.bin, etc.?
I downloaded the models from the link provided on the version 1.05 release page, but whatever I try it always says couldn't load model. I use the ggml-model-q4_0.bin and ggml-model-q4_1.bin files but nothing loads. I tried Windows and Mac. It doesn't give me a proper error message; it just says couldn't load model.