Generation never starts: "context is null" #1
Comments
Same here.
Same issue. @maxime-guerin-biprep @ThibautLEAUX, any solution to this problem?
Did you use release 1.1 or 2.0?
Newest release, no permission pop-up, Android 13, One UI 5.1, LLaMA-7B.
Release 2.0.
Okay, we will check it today.
Hello, we changed the targetSdkVersion to 31; it should work now.
Now it crashes after some time when I enter something (maybe because of the 6 GB of RAM). Also, opening the model is very slow. It would be great to have a download progress indicator. Device: Pixel 4 XL.
We only succeeded in running it on an 8 GB device, so that might be why it crashes for you.
@maxime-guerin-biprep
We did get this error when we loaded an Alpaca model instead of a LLaMA one.
@maxime-guerin-biprep Is it possible to explain how you went about converting and getting the model, or to link the exact model you have been using? If you provide a temporary link, I'll download it and try it out.
We used Dalai, a Node.js implementation; here is a link.
@maxime-guerin-biprep Can you link the ggml-model.bin file here temporarily, if possible?
I'm downloading the gpt4all model right now; I will try to convert it later.
So I just tried the script convert-unversioned-ggml-to-ggml.py from the llama.cpp repo and succeeded in using an old Alpaca model after converting it with this command.
I just tried the script convert-gpt4all-to-ggml.py and it worked.
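For anyone following along, the gpt4all conversion run probably looked roughly like the sketch below. The script name comes from the comment above; the file paths and the tokenizer argument are assumptions, not something confirmed in this thread.

```python
# Sketch: converting a gpt4all .bin with llama.cpp's convert-gpt4all-to-ggml.py.
# Paths are placeholders; the tokenizer argument is an assumption about the
# script's usage at the time.
import subprocess

subprocess.run(
    [
        "python3",
        "convert-gpt4all-to-ggml.py",
        "models/gpt4all-lora-quantized.bin",  # hypothetical input model path
        "models/tokenizer.model",             # hypothetical SentencePiece tokenizer path
    ],
    check=True,  # raise if the conversion script exits with an error
)
```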
@maxime-guerin-biprep Could you upload the model .bin files to Hugging Face temporarily so that I can test them on my end?
@maxime-guerin-biprep I wrote this Colab for the conversion, but it doesn't work as intended. Could you have a look and see if there are any issues? https://colab.research.google.com/drive/1F7ITFw7MAqEsYUN7ce7sG6rN-eAd8mnd?usp=sharing
You need to put the .bin in a folder and pass that folder as the first argument; afterwards you will have a .bin and a .orig. Use the .bin.
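Based on that comment (the first argument is the folder containing the .bin, and the script leaves behind a converted .bin plus a .orig backup), a run for an old unversioned GGML model might look like this sketch. The folder name is a placeholder and the tokenizer argument is an assumption.

```python
# Sketch: converting an old/unversioned GGML model with
# convert-unversioned-ggml-to-ggml.py. Per the comment above, the first argument
# is the folder that contains the .bin; the tokenizer argument is an assumption.
from pathlib import Path
import subprocess

model_dir = Path("models/alpaca-7b")  # hypothetical folder holding the old .bin

subprocess.run(
    [
        "python3",
        "convert-unversioned-ggml-to-ggml.py",
        str(model_dir),              # folder containing the old ggml .bin
        "models/tokenizer.model",    # hypothetical tokenizer path
    ],
    check=True,
)

# Afterwards the folder should contain the converted .bin and a .orig backup;
# the converted .bin is the one to load in the app.
print([p.name for p in model_dir.iterdir()])
```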
After updating my models to get rid of the bothersome "invalid model files" error, I now seem to get close, but the app just crashes after a moment. The last log line is "trying main DONE Instance of 'llama_context_params'". I'm going to try using a different script to update my models; I'll let you know if that works.
@maxime-guerin-biprep The Alpaca model now works. Can you add a "Stop Generation" button to the UI? Also, do you have any idea why the output keeps on going, @maxime-guerin-biprep? It keeps generating and then crashes.
@GeorvityLabs The default preprompt is far from perfect; it was just a simple one. We hope that people will write better ones so that the model hallucinates less.
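As an illustration of what a "better" preprompt can look like, here is a sketch loosely modeled on the chat-style example prompts shipped with llama.cpp. Whether this app exposes the preprompt and a reverse prompt as settings is an assumption.

```python
# Sketch of a chat-style preprompt that tends to keep the model on topic,
# loosely modeled on llama.cpp's example chat prompts. How (or whether) the app
# lets you configure these values is an assumption.
PREPROMPT = (
    "Transcript of a dialog where the User asks questions and the Assistant "
    "answers them concisely and truthfully. The Assistant stops after each answer.\n\n"
    "User: Hello.\n"
    "Assistant: Hello! How can I help you today?\n"
)

REVERSE_PROMPT = "User:"  # generation should be cut off when this string reappears

def build_prompt(user_message: str) -> str:
    """Prepend the preprompt and frame the user's message as the next turn."""
    return f"{PREPROMPT}User: {user_message}\nAssistant:"
```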
@ThibautLEAUX Yes, it would be great to add a stop-generation button so that we could manually stop it when it starts to hallucinate.
@ThibautLEAUX @maxime-guerin-biprep I was also curious: is gpt4all working for you?
Did you use convert-gpt4all-to-ggml.py, or the same script as for Alpaca?
@maxime-guerin-biprep
We will also add an option to do the same as the instruct mode in llama.cpp's main.
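For context, instruct mode in llama.cpp's main example wraps each input in an Alpaca-style instruction template and uses the instruction prefix as a reverse prompt, which is also what makes generation stop after the answer. A minimal sketch follows; the template text is the commonly used Alpaca format, not something confirmed for this app.

```python
# Sketch of an Alpaca-style instruction template as used by instruct mode in
# llama.cpp's main example; generation is cut off when the instruction prefix
# shows up again in the output (acting as a reverse prompt).
INSTRUCTION_PREFIX = "### Instruction:\n\n"
RESPONSE_PREFIX = "### Response:\n\n"

def wrap_instruction(user_message: str) -> str:
    """Wrap a raw user message in the instruction/response template."""
    return f"{INSTRUCTION_PREFIX}{user_message}\n\n{RESPONSE_PREFIX}"

def truncate_at_reverse_prompt(generated: str) -> str:
    """Cut the model output where it starts a new instruction block."""
    return generated.split(INSTRUCTION_PREFIX.strip())[0].rstrip()
```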
@maxime-guerin-biprep That is good to hear! Looking forward to the update.
@maxime-guerin-biprep Do you know why the model doesn't stop soon after giving the answer? Any ideas on how to fix this issue?
@maxime-guerin-biprep @ThibautLEAUX Any updates on the stop-generation button?
We will try to do it this week.
@maxime-guerin-biprep Great, looking forward to testing it out.
@GeorvityLabs We released the stop button.
Cool, I'll do some testing and check how it's working.
Add GGUF support
Load model
Type hello
Press send
Open log
Log says:
[isolate 08:13:02] llama loaded
[isolate 08:13:02] main found: true
[isolate 08:13:02] trying main
[isolate 08:13:02] trying main DONE Instance of 'llama_context_params'
[isolate 08:13:02] context is null
Generation never starts
Samsung Z Flip4
8gb of ram
Snapdragon 8+ Gen1
The demo shows a OnePlus device, so it's probably an issue with Samsung phones; I'll check on a different device later.