
First test and feedback: please add the possibility to connect to an independently running Llama server URL #1

Open
ClaudeStabile opened this issue Nov 22, 2023 · 0 comments

Comments

@ClaudeStabile

Dele,

This is my first test with this plugin.
Please do not use it with Docker as-is!
Console issue (note: my console language is French):
[screenshot of the console output]

This is downloading a Hugging Face model. Please do not do that inside the Docker container, as it will kill the container. We need a separate volume so that large files are mounted outside the container.
This is CPU-based, so performance will not be great.
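To illustrate the volume idea: a minimal sketch of running the Openfire container with the model cache mounted from the host. The container path `/var/lib/openfire/models` is an assumption on my part (adjust it to wherever the plugin actually writes downloaded models), as is the image name.

```shell
# Hypothetical sketch: mount the model cache from the host so multi-GB
# model downloads live outside the container's writable layer.
# Paths and image name are assumptions -- adapt to the real plugin layout.
docker run -d \
  --name openfire \
  -p 9090:9090 -p 5222:5222 \
  -v /srv/openfire/models:/var/lib/openfire/models \
  openfire:latest
```

With this, re-creating or upgrading the container does not re-download the model files.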

### The perfect world I dream about:

  • What I expect is to just enter the URL of a private server, e.g. https://chatgpt.free-solutions.ch, rather than downloading the model locally (or keep the local download as an additional option).
  • Please try it out with my server https://chatgpt.free-solutions.ch. If we have this, we can plug Openfire into any AI server.
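To make the request concrete: a sketch of the kind of call the plugin could make against such a remote endpoint instead of loading a local model. This assumes the private server exposes an OpenAI-compatible chat API (as llama.cpp's built-in HTTP server does); the URL is the one from this issue, and the endpoint path is that assumption.

```shell
# Hypothetical sketch: send a chat completion request to a remote
# Llama server instead of running inference locally.
# Assumes an OpenAI-compatible /v1/chat/completions endpoint.
curl https://chatgpt.free-solutions.ch/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.7
      }'
```

If the plugin accepted a base URL like this as a setting, the same code path would work for any self-hosted or hosted backend that speaks the same API.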

FYI: I am also capable of building full-GPU Llama Docker containers and servers. For now I run this at home, but if we find serious sponsors we can run Llama servers in a datacenter. For the server cost, I quote 30-70 kCHF per 1U box with a 48-96 GB NVIDIA GPU. I have the supplier.

Thanks for doing this great job :)
I need the possibility to enter my server URL, e.g. https://chatgpt.free-solutions.ch or http://chatgpt.free-solutions.ch:8080.

Congrats
