Ollama offers one-click deployment of local LLMs.
Decide which LLM you want to deploy (here's a list of supported LLMs), for example, mistral:
```bash
$ ollama run mistral
```
Or, if your Ollama instance runs inside a Docker container:
```bash
$ docker exec -it ollama ollama run mistral
```
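The Docker variant assumes a container named ollama is already running. If you have not started one yet, a typical invocation, per the Ollama Docker image documentation, looks like this:

```bash
# Start Ollama detached, persist downloaded models in a named volume,
# and expose the API on the default port 11434.
$ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```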
- Go to 'Settings > Model Providers > Models to be added > Ollama'.
- Base URL: Enter the base URL where the Ollama service is accessible, for example, http://<your-ollama-endpoint-domain>:11434 (see the reachability check after this list).
- Use Ollama models.
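Before saving the base URL, you can verify that it points at a live Ollama service. Ollama's /api/tags endpoint lists the models available on the server; a JSON response confirms the URL is correct (substitute your actual host):

```bash
# A JSON object with a "models" array indicates the endpoint is reachable.
$ curl http://<your-ollama-endpoint-domain>:11434/api/tags
```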
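Once configured, you can also exercise the deployed model directly against Ollama's HTTP generation API, independent of the application UI. The prompt below is purely illustrative:

```bash
# Send a single prompt to the mistral model; "stream": false returns
# one complete JSON response instead of streamed chunks.
$ curl http://<your-ollama-endpoint-domain>:11434/api/generate \
    -d '{"model": "mistral", "prompt": "Why is the sky blue?", "stream": false}'
```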