Support for open-source models like LLaMA #47
Comments
I've created PR #52, which allows using a custom API URL endpoint. For llama models, you can start the llama.cpp Python web server and then initialize the client with `ai = AIChat(api_key='None', api_url='http://localhost:8000/v1/chat/completions', console=False)`.
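For concreteness, a minimal sketch of that setup, assuming a local server started via llama-cpp-python (the model filename is a placeholder):

```python
# Start the server first, e.g.:
#   python -m llama_cpp.server --model ./models/llama-2-7b.Q4_K_M.gguf
# It listens on port 8000 by default and exposes an OpenAI-compatible API.
from simpleaichat import AIChat

ai = AIChat(
    api_key="None",  # the local server does not validate the key
    api_url="http://localhost:8000/v1/chat/completions",
    console=False,
)

# AIChat instances are callable; this sends a single chat message.
print(ai("Explain llama.cpp in one sentence."))
```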
I merged #52 since that is a fair fix for a bug, but I am uncertain how high a priority developing for open-source models like llama.cpp should be, particularly since they may have different APIs that are unique and don't play nice with each other. It is definitely within scope and on the roadmap, though.
Does this PR work with (local) GPT4All models too?
I don't think so, the GPT4All API server does not have an implementation for the `/v1/chat/completions` endpoint.
Actually, in the link you sent, L49 gives the `/v1/chat/completions` route. I also got the chance to test, and it seems that the GPT4All API server is compatible.
@Xoeseko but if you take a closer look at …
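If the GPT4All API server does prove compatible, the same custom-endpoint pattern should carry over. A hedged sketch — the port (4891 is GPT4All's usual local API server default) and path are assumptions to verify against your server:

```python
from simpleaichat import AIChat

# Assumed: a running GPT4All API server with an OpenAI-compatible chat route.
ai = AIChat(
    api_key="None",
    api_url="http://localhost:4891/v1/chat/completions",
    console=False,
)
print(ai("Hello from a local GPT4All model!"))
```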
Is it possible to configure open-source models like Dolly, LLaMA, etc. instead of OpenAI models in simpleaichat, and to do prompting as well?
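Based on the thread above, this should already work via the custom `api_url`; a minimal sketch combining it with a system prompt (the local endpoint is an assumption, any OpenAI-compatible server would do):

```python
from simpleaichat import AIChat

# System prompts travel in the standard OpenAI chat messages format,
# so they should work the same regardless of which backend serves the API.
ai = AIChat(
    system="You are a concise assistant that answers in one paragraph.",
    api_key="None",
    api_url="http://localhost:8000/v1/chat/completions",  # assumed local server
    console=False,
)
print(ai("What are the tradeoffs of running LLMs locally?"))
```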