
Feature Request: support for llama.cpp webserver multiple models #103

Open
userman2213 opened this issue Dec 18, 2024 · 0 comments
@userman2213

It would be great to have support for the llama.cpp webserver API, which is an OpenAI drop-in API, and also to support multiple models on llama.cpp.
I run llama.cpp on my local network and have configured it with more than one model, so it would be nice if these models could be switched easily from the GUI, the same way you can currently switch between OpenAI and Groq.
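
For context, here is a minimal sketch of what the integration would need to do, assuming the llama.cpp server exposes the OpenAI-compatible `/v1/models` and `/v1/chat/completions` endpoints; the host, port, and model names below are placeholders for a local-network setup:

```python
import requests

# Hypothetical llama.cpp server on the local network (adjust host/port as needed).
BASE_URL = "http://192.168.1.50:8080/v1"

# List the models the server advertises via the OpenAI-compatible endpoint.
models = requests.get(f"{BASE_URL}/models", timeout=10).json()
available = [m["id"] for m in models.get("data", [])]
print("Available models:", available)

# Send a chat completion, selecting one of the advertised models per request.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": available[0],  # switching models = changing this field per request
        "messages": [{"role": "user", "content": "Hello from the local network!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

In other words, the GUI could populate its model picker from `/v1/models` and pass the selected ID in the `model` field, just as it already does for OpenAI and Groq.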

@userman2213 userman2213 changed the title support for llama.cpp webserver Feature Request: support for llama.cpp webserver multiple models Dec 18, 2024