Output sometimes weird with Ollama backend #19

Open
ruffi123456789 opened this issue Aug 20, 2024 · 1 comment

Comments

@ruffi123456789

When using deepseek-coder:1.3b via Ollama, the completion sometimes includes "<|end▁of▁sentence|><|begin▁of▁sentence|>".

All in all, using Ollama as the backend changes the behaviour compared to running Tabby with --model DeepseekCoder-1.3B.
Some of that might be expected since some settings differ, and it might be that I have misconfigured something, but I simply followed the https://tabby.tabbyml.com/docs/references/models-http-api/ollama/ guide to set it up.

It would be really nice if someone could help me get things working properly, since I need my VRAM but would rather not start/stop Tabby all the time.

This is my config.toml:

[model.completion.http]
kind = "ollama/completion"
model_name = "deepseek-coder:1.3b"
api_endpoint = "http://localhost:11434"
prompt_template = "<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>"

[model.embedding.http]
kind = "ollama/embedding"
model_name = "nomic-embed-text"
api_endpoint = "http://localhost:11434"
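For reference, here is a minimal sketch of how one could send the same FIM prompt straight to Ollama's documented /api/generate endpoint, to check whether the stray special tokens come from the model itself or from how Tabby handles the ollama/completion output. The prefix/suffix strings below are made-up examples, not taken from this issue.

# Minimal sketch: send the FIM prompt verbatim to Ollama and print the raw completion.
# Assumes Ollama is running on localhost:11434 and deepseek-coder:1.3b has been pulled.
import json
import urllib.request

prefix = "def add(a, b):\n    return "  # hypothetical example prefix
suffix = "\n\nprint(add(1, 2))\n"        # hypothetical example suffix

body = {
    "model": "deepseek-coder:1.3b",
    "prompt": f"<|fim▁begin|>{prefix}<|fim▁hole|>{suffix}<|fim▁end|>",
    "raw": True,       # bypass Ollama's prompt template so the FIM prompt is sent as-is
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])

If "<|end▁of▁sentence|>" already shows up in this raw response, the tokens are being emitted by the model/Ollama itself; if not, the difference likely lies in how the ollama/completion path post-processes the output.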
@icycodes
Member

Hi, @ruffi123456789.
This repo is for releasing vim-tabby. Please open this issue in the main Tabby repo: https://github.com/TabbyML/tabby.
