use existing llama.cpp install #9

I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie question...

Comments
Just leave out step 2 of the installation. I don't think the Coqui engine runs in real time on a Mac, though.
I did leave out step 2, but then I get an error when I try to run:
If the Python import of llama_cpp fails, that means your environment does not have working Python bindings for llama.cpp.
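A quick sanity check for that situation is to try the import directly. This is a generic snippet, not part of the project, and the Metal install hint in the comment is an assumption based on llama-cpp-python's documented build flags:

```python
# Check whether the llama-cpp-python bindings import cleanly.
try:
    import llama_cpp
    print("llama_cpp imported OK, version:", llama_cpp.__version__)
except ImportError as exc:
    print("llama_cpp import failed:", exc)
    # On Apple Silicon, a Metal-enabled build has typically been installed with:
    #   CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
    # (check the llama-cpp-python docs for the current flag)
```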
Thank you. I did get it to work following your comment. Like the other M1 user, I do get stuttering. It's a shame, because the voice quality is excellent and the latency is rather short. I hope a future update solves this for us!
I managed to get this working with the Gemma 2 model. However, I am having trouble setting the parameters; it works, but it doesn't seem optimal. I see them in creation_params.json and also in coqui_engine.py. Would it be possible for LocalAIVoiceChat to use llama.cpp's server endpoint instead? Or would that require a lot of rewriting?
I like that idea; I'll have to look into it.
Great. It seems like the more standard approach these days. I'd be happy to test whatever you come up with. As mentioned above, I'm on an M1 Mac, so this isn't the fastest setup, but it's now working pretty well with no stuttering.
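For reference, a minimal sketch of what the server-endpoint idea discussed above could look like. It assumes a llama.cpp server is already running locally (started separately, e.g. with the llama-server binary); the URL, port, and sampling parameters are illustrative assumptions, not part of LocalAIVoiceChat:

```python
import requests

# Sketch: query a locally running llama.cpp server through its
# OpenAI-compatible chat endpoint. Assumes the server was started
# separately, e.g. `./llama-server -m model.gguf --port 8080`;
# the URL, port, and parameter values below are illustrative.
URL = "http://127.0.0.1:8080/v1/chat/completions"

payload = {
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "temperature": 0.7,  # sampling knobs would move here from creation_params.json
    "max_tokens": 64,
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

One appeal of this design is that model loading and sampling settings live with the server, so the chat script would only need an HTTP client instead of its own llama.cpp bindings.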