It would be great to have support for the llama.cpp webserver API, which is an OpenAI drop-in API, and also support for multiple models on llama.cpp.

I use llama.cpp on my local network and have configured it with more than one model, so it would be nice to be able to easily switch between these models from the GUI, the same way you can currently switch between OpenAI and Groq.
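For reference, here is a minimal sketch of what talking to the llama.cpp server through an OpenAI-compatible client looks like, which is roughly what the feature would build on. The server address and the model name below are just placeholders for my local setup, not anything the GUI ships with:

```python
# A minimal sketch, assuming a llama.cpp server is reachable at
# http://192.168.1.10:8080 (hypothetical LAN address) with its
# OpenAI-compatible endpoints enabled.
from openai import OpenAI

client = OpenAI(
    base_url="http://192.168.1.10:8080/v1",  # llama.cpp server instead of api.openai.com
    api_key="sk-no-key-required",            # llama.cpp ignores the key by default
)

# List the models the server reports (GET /v1/models).
for model in client.models.list():
    print(model.id)

# Send a chat completion, selecting a model by name via the
# standard OpenAI `model` field.
resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # hypothetical model name from my setup
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Note that depending on how the server is set up (a single llama-server instance versus a proxy that swaps models on demand), the `model` field may either route the request to a different model or be ignored; model switching in the GUI would need to account for that.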
userman2213 changed the title from "support for llama.cpp webserver" to "Feature Request: support for llama.cpp webserver multiple models" on Dec 18, 2024.