
Selected OpenRouter model not persisted #78

Open

MischaU8 opened this issue Jul 8, 2024 · 5 comments

Comments

@MischaU8
Contributor

MischaU8 commented Jul 8, 2024

When selecting OpenRouter as the AI provider, the selected model isn't persisted across reloads. After clicking "Yes" on the Custom Endpoint Reconnect modal, it always falls back to the default "mistralai/mistral-7b-instruct" model. The OpenRouter key is persisted, however, and I'd expect the same for the model choice.
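
For illustration, persisting the model choice alongside the key could look something like this sketch (assuming Lite keeps custom-endpoint settings in localStorage; the storage key and field names here are hypothetical, not Lite's actual schema):

```typescript
// Hypothetical sketch: persist the chosen OpenRouter model the same way
// the API key is persisted. Names are illustrative, not Lite's real ones.
const STORAGE_KEY = "openrouter_settings";

interface OpenRouterSettings {
  apiKey: string;
  model: string; // e.g. "mistralai/mistral-7b-instruct"
}

function saveSettings(settings: OpenRouterSettings): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(settings));
}

function loadSettings(): OpenRouterSettings | null {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as OpenRouterSettings) : null;
}
```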

@LostRuins
Owner

Currently, model selection for custom endpoints is not saved - in many backends, the model list is fetched from an external party and is subject to frequent changes.
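
One way to reconcile persistence with a changing model list would be to revalidate the saved model against the freshly fetched list and fall back to the default when it has disappeared. A minimal sketch, with hypothetical names:

```typescript
// Hypothetical sketch: keep the saved model only if it still exists in the
// freshly fetched model list; otherwise fall back to the default.
const DEFAULT_MODEL = "mistralai/mistral-7b-instruct";

function resolveModel(savedModel: string | null, fetchedModels: string[]): string {
  if (savedModel && fetchedModels.includes(savedModel)) {
    return savedModel;
  }
  return DEFAULT_MODEL;
}
```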

@LostRuins
Owner

It's a consideration for the future. For now, the model will need to be manually reselected.

@xloem

xloem commented Dec 2, 2024

I found this issue while wanting to share an OpenRouter chat I had with a therapist in a way they could access; I found lite.koboldai.net while looking for how to do this. It would be helpful if the AI backend and the model selection were persisted with saved JSON files.

Things that could make this easier would be:

  • a checkbox to enable or disable inclusion
  • defaulting to a different model in case of an error

Additionally, openrouter.ai has a "general" model, openrouter/auto, that will choose an available model based on the prompt content (although I have not tried this model), and models that are free end in the suffix :free.
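
For reference, a sketch of calling openrouter/auto through OpenRouter's OpenAI-style chat completions endpoint (untested on my end, as noted above):

```typescript
// Sketch of an OpenRouter chat completion request using the "openrouter/auto"
// router model, which picks a concrete model server-side based on the prompt.
async function autoChat(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openrouter/auto",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```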

As well, lite.koboldai.net looks like a better service for chatting than openrouter. It could be helpful to have a little onboarding information, for example to help people migrate to what is normal to use here instead of a system prompt. (Edit: all the new OpenAI-compatible services autoformat system, user, and assistant messages using per-model templates server-side, but the intent of this reply is to support letting users persist the AI backend settings they have selected.)
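
For context, the role-tagged message shape those OpenAI-compatible services accept looks like this; the server applies the model's own chat template to these roles:

```typescript
// Standard OpenAI-style role-tagged messages; the serving side formats
// these into the model's own prompt template.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
  { role: "assistant", content: "Hi, how can I help?" },
];
```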

@henk717
Collaborator

henk717 commented Dec 2, 2024

My two cents on the model thing: I think this would cause issues, especially if it caused the local KoboldCpp-bundled Lite to suddenly connect to cloud instances. I'm not opposed to it, but it would have to be opt-in.

As for the info: in our UI, the memory field in the context menu is raw text, so you can format things like your system instructions (if you need them; a lot of models don't for default behavior) and some example turns in there exactly as you would like.

Ours has placeholders, so {{[INPUT]}} and {{[OUTPUT]}} or even {{[SYSTEM]}} can be written there, and they will automatically be replaced with what you have in the settings. In our scenarios menu we have a lot of examples.
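
As a sketch of what that substitution amounts to (the function and tag-settings shape are hypothetical, not Lite's actual code):

```typescript
// Hypothetical sketch of the placeholder substitution described above:
// each {{[...]}} tag is swapped for the tag text configured in settings.
function applyPlaceholders(
  memory: string,
  tags: { input: string; output: string; system: string },
): string {
  return memory
    .replaceAll("{{[INPUT]}}", tags.input)
    .replaceAll("{{[OUTPUT]}}", tags.output)
    .replaceAll("{{[SYSTEM]}}", tags.system);
}

// Example: applyPlaceholders("{{[INPUT]}} Hello {{[OUTPUT]}}",
//   { input: "### Instruction:", output: "### Response:", system: "" });
```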

The exact formatting is task- and model-specific, so it's hard to write unified info for it.

@LostRuins
Owner

The AI backend is not automatically re-selected, but if you were previously connected to a custom endpoint, Lite will prompt you to reconnect.
[screenshot: Custom Endpoint Reconnect prompt]
Pressing Yes will bring you to the previously chosen backend. For now, the models will have to be re-selected, as they are fetched afresh and may change from time to time.
