This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Let users choose between local/hosted inference & cloud APIs #129

Open · 3 of 6 tasks
frgfm opened this issue Mar 15, 2024 · 4 comments

Comments

frgfm (Member) commented Mar 15, 2024

It makes sense that some users won't have hardware capable of running LLMs locally. In that case, they might want to use external APIs instead.

It could be interesting to provide several options for this (tracked in the task list above).
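As a rough illustration of what such a provider switch could look like, here is a minimal Python sketch. Every name in it (`ProviderConfig`, `resolve_provider`, the `LLM_PROVIDER` / `*_API_KEY` environment variables) is a hypothetical placeholder, not an identifier from the actual codebase:

```python
import os
from dataclasses import dataclass


# Illustrative provider registry: "ollama" runs locally, the others are cloud APIs.
@dataclass
class ProviderConfig:
    name: str
    endpoint: str
    api_key: str | None = None  # local inference needs no key


def resolve_provider() -> ProviderConfig:
    """Pick an LLM provider from env settings, defaulting to local inference."""
    name = os.getenv("LLM_PROVIDER", "ollama").lower()
    if name == "ollama":
        # Local/hosted inference: only a reachable endpoint is required
        return ProviderConfig(name, os.getenv("OLLAMA_ENDPOINT", "http://localhost:11434"))
    if name in ("groq", "openai"):
        # Cloud APIs: an API key is mandatory
        endpoints = {
            "groq": "https://api.groq.com/openai/v1",
            "openai": "https://api.openai.com/v1",
        }
        return ProviderConfig(name, endpoints[name], os.environ[f"{name.upper()}_API_KEY"])
    raise ValueError(f"Unsupported LLM provider: {name}")
```

With a switch like this, adding a new provider is a registry entry plus credentials, rather than a new code path.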

bright258 commented

I agree.

frgfm (Member, Author) commented May 13, 2024

> I agree.

@bright258 which LLM provider would you most like to use? We now have full support for Groq & Ollama.

bright258 commented

OpenAI

frgfm (Member, Author) commented May 15, 2024

> OpenAI

@bright258 Done :) Just merged #163
