What will genai do if there are the same model names in different LLM adapters? #31
Comments
Good point. So, here is the current thinking:
Recently I encountered a similar problem: I had a fine-tuned model on the OpenAI platform, and by default their names start with the prefix
@AdamStrojek Look at the latest version, v0.1.15; we now have
All, I am closing this issue as this was implemented with the
Bug description
In other LLM libraries you have a type for a specific adapter, and then you specify the model name as a string.
However, in genai there is only one parameter: the model name. While in practice this is probably not a big issue, in my opinion it is a bit wrong.
The main question that came to my mind is: what if both company X and company Y release a model under the same name A?
After all, there are two OpenAIs: one from the OpenAI company, the other from Azure...
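The collision above could be avoided by pairing the adapter with the model name explicitly, the way other libraries do. A minimal sketch of that idea — the `AdapterKind` variants and `ModelRef` type here are illustrative assumptions, not genai's actual API:

```rust
// Hypothetical sketch: an explicit (adapter, model) pair removes the
// ambiguity when two providers publish a model under the same name.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum AdapterKind {
    OpenAI,
    AzureOpenAI,
    Ollama,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct ModelRef {
    adapter: AdapterKind,
    name: String,
}

impl ModelRef {
    fn new(adapter: AdapterKind, name: &str) -> Self {
        Self { adapter, name: name.to_string() }
    }
}

fn main() {
    // Same model name "A" from two different providers: no collision,
    // because the adapter is part of the reference.
    let from_openai = ModelRef::new(AdapterKind::OpenAI, "A");
    let from_azure = ModelRef::new(AdapterKind::AzureOpenAI, "A");
    assert_ne!(from_openai, from_azure);
}
```

The trade-off is a slightly heavier call site: the caller must name the adapter even when the model name alone would have been unambiguous.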
The other very important thing: the two main applications for loading LLM models locally are ollama and GPT4All, and it's certainly possible to have the same model names in both.
I know you don't support GPT4All now, but I noticed in the algorithm for AdapterKind::from_model that if a model name doesn't match any criteria, it is treated as an Ollama model. This might introduce problems if you add GPT4All.
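One way such a fallback could be disambiguated is a namespace prefix inside the model name. The sketch below is an illustrative assumption, not genai's real implementation: the `namespace::` syntax, the `adapter_from_model` function, and the `Gpt4All` variant are all hypothetical.

```rust
// Illustrative sketch (not genai's actual code): resolve an adapter from a
// model name, preferring an explicit "namespace::" prefix over heuristics,
// so an unknown bare name errors out instead of silently falling back to Ollama.
#[derive(Debug, PartialEq, Eq)]
enum AdapterKind {
    OpenAI,
    Ollama,
    Gpt4All, // hypothetical future adapter
}

fn adapter_from_model(model: &str) -> Result<AdapterKind, String> {
    // An explicit namespace wins: "ollama::llama3", "gpt4all::llama3", ...
    if let Some((ns, _name)) = model.split_once("::") {
        return match ns {
            "openai" => Ok(AdapterKind::OpenAI),
            "ollama" => Ok(AdapterKind::Ollama),
            "gpt4all" => Ok(AdapterKind::Gpt4All),
            other => Err(format!("unknown namespace: {other}")),
        };
    }
    // Heuristics only for well-known name patterns; anything else is an
    // error rather than an implicit Ollama fallback.
    if model.starts_with("gpt-") {
        Ok(AdapterKind::OpenAI)
    } else {
        Err(format!("cannot infer adapter for model: {model}"))
    }
}

fn main() {
    assert_eq!(adapter_from_model("ollama::llama3"), Ok(AdapterKind::Ollama));
    assert_eq!(adapter_from_model("gpt-4o-mini"), Ok(AdapterKind::OpenAI));
    assert!(adapter_from_model("llama3").is_err());
}
```

With this shape, adding a GPT4All adapter later would not change the meaning of any existing prefixed model name; only the bare-name heuristics would need review.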