When using deepseekcoder:1.3b via Ollama, completions sometimes include "<|end▁of▁sentence|><|begin▁of▁sentence|>".
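For what it's worth, my guess is that Ollama is not treating DeepSeek's end-of-sentence token as a stop sequence. If so, wrapping the model in a Modelfile with an explicit stop parameter might work around it. This is an untested sketch, and the exact token string has to match the model's tokenizer:

```
# Untested sketch: derive a variant of the model that stops on the leaked token.
FROM deepseekcoder:1.3b
# Note: DeepSeek's special tokens use the fullwidth ｜ character, not an ASCII pipe.
PARAMETER stop "<｜end▁of▁sentence｜>"
```

Then `ollama create deepseekcoder-stop -f Modelfile` (the name deepseekcoder-stop is just an example) and point Tabby's model_name at that instead.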
All in all, using Ollama as the backend changes behaviour compared to running Tabby with --model DeepseekCoder-1.3B.
Some of this might be expected, since some settings differ, and it might be me having messed things up, but I just followed the https://tabby.tabbyml.com/docs/references/models-http-api/ollama/ guide to set it up.
It would be really nice if someone could help me get things working properly, since I need my VRAM for other things and don't want to start/stop Tabby all the time.
This is my config.toml, essentially the completion block from the guide:
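Roughly the following (from memory; the prompt_template is the DeepSeek Coder FIM format shown in the Tabby docs and the endpoint is Ollama's default, so treat both as assumptions):

```toml
[model.completion.http]
kind = "ollama/completion"
model_name = "deepseekcoder:1.3b"
api_endpoint = "http://localhost:11434"
# FIM template for DeepSeek Coder; the special-token strings must match exactly.
prompt_template = "<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>"
```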