Enable support for other LLMs #36
https://github.com/deep-diver/LLM-As-Chatbot looks interesting. In particular, its https://github.com/deep-diver/PingPong mechanism isolates the differing prompt requirements of each model.
Note that if you use the openrouter.ai option, you can choose any model they support. To do so, add your openrouter_api_key: sk-or-v1.... To specify which model to use, pass AICODEBOT_MODEL as an environment variable; I'll work on adding that as a config option as well.
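A minimal sketch of the environment-variable route described above. The model slug is an illustrative assumption; any model OpenRouter lists should work, and the variable name is taken from the comment.

```shell
# AICODEBOT_MODEL is read from the environment, per the comment above.
# The model slug below is only an example; use any model OpenRouter supports.
export AICODEBOT_MODEL="meta-llama/llama-2-70b-chat"
echo "Using model: $AICODEBOT_MODEL"
```

Set this before launching aicodebot so the process inherits the variable.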
A few questions:
I think running the LLMs within the core AICodebot code is the wrong approach, though. I would suggest using NATS as a queue/service-discovery mechanism, along the lines of https://github.com/openbnet/troybot/blob/main/services/stt/nats_stt.py; it gives you both queuing and service discovery in one component.
We could do a docker compose to get the LLM and NATS running together.
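A minimal docker-compose sketch of that idea. The LLM service image and the environment-variable name are placeholders invented for illustration; only the nats image and its default client port are standard.

```yaml
# Sketch only, assuming a generic LLM worker that talks to NATS.
version: "3.8"
services:
  nats:
    image: nats:latest
    ports:
      - "4222:4222"            # default NATS client port
  llm:
    image: your-llm-server:latest   # placeholder image, not a real project
    depends_on:
      - nats
    environment:
      - NATS_URL=nats://nats:4222   # hypothetical variable the worker would read
```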
Using the default prompts, falcon-7b is totally off the mark; it seems to just echo the inputs back.
I understand and appreciate the importance of this now. @hanselke I just refactored the layout of the commands, so your current code isn't compatible - I'll look into adding support for other LLMs soon!
Disappointing results - nothing is as good as ChatGPT-4 so far.
Yep. I was going to try https://github.com/bigscience-workshop/petals to see if that works.
I have work this week I need to get done, though; hopefully I'll be able to find time next week.
On 14 Aug 2023, at 2:01 AM, Nick Sullivan wrote:
> Disappointing results - nothing is as good as ChatGPT-4 so far
@hanselke @gorillamania We built liteLLM to solve the problem mentioned in this issue. Here's how litellm calls work:

```python
import os
from litellm import completion

## set ENV variables
os.environ["OPENAI_API_KEY"] = "openai key"
os.environ["COHERE_API_KEY"] = "cohere key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```

If this is useful for this issue, I'd love to make a PR to help gorillamania (I noticed you're using
Claude 2 looks interesting