
Ability to select the gpt-3.5-turbo-16k model #102

Closed
anlek opened this issue Jul 27, 2023 · 4 comments
Labels
enhancement New feature or request

Comments


anlek commented Jul 27, 2023

Is this request related to a problem? Please describe.

I've only started playing with this extension, however, I seem to hit the max context length.

Error: This model's maximum context length is 4097 tokens. However, you requested 5003 tokens (3979 in the messages, 1024 in the completion). Please reduce the length of the messages or completion.
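The arithmetic behind this error can be sketched as a simple budget check. This is an assumed illustration, not Rubberduck's actual implementation; the context limits are the commonly cited sizes for these models.

```python
# Sketch of the token-budget check implied by the error above
# (hypothetical helper, not Rubberduck's actual code).
MODEL_CONTEXT_LIMITS = {
    "gpt-3.5-turbo": 4097,       # limit quoted in the error message
    "gpt-3.5-turbo-16k": 16385,  # commonly cited 16k context size
}

def fits_context(model: str, prompt_tokens: int, completion_tokens: int) -> bool:
    """Return True if prompt + completion fit in the model's context window."""
    return prompt_tokens + completion_tokens <= MODEL_CONTEXT_LIMITS[model]

# The request from the error: 3979 tokens in the messages + 1024 for the
# completion = 5003 tokens total.
print(fits_context("gpt-3.5-turbo", 3979, 1024))      # False: 5003 > 4097
print(fits_context("gpt-3.5-turbo-16k", 3979, 1024))  # True: 5003 <= 16385
```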

Describe the solution you'd like

I'd love to be able to use the 16k version (or support auto selecting the 16k if input is large)
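The auto-selection part of this request could look roughly like the sketch below: pick the smallest model whose context window fits the request. The function name and the fallback behavior are hypothetical; the limits are the commonly cited context sizes.

```python
def pick_model(prompt_tokens: int, completion_tokens: int) -> str:
    """Pick the smallest model whose context window fits the request.

    Hypothetical helper illustrating the auto-selection proposed in this
    issue; not part of the extension.
    """
    needed = prompt_tokens + completion_tokens
    if needed <= 4097:
        return "gpt-3.5-turbo"
    if needed <= 16385:
        return "gpt-3.5-turbo-16k"
    raise ValueError(f"request of {needed} tokens exceeds all known context windows")

# The failing request from above would fall through to the 16k model.
print(pick_model(3979, 1024))  # gpt-3.5-turbo-16k
```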

Additional context

I've tried to force the setting "rubberduck.model": "gpt-3.5-turbo-16k"; however, it errors, saying I'm not allowed to select a model that isn't on the provided list (gpt-3.5-turbo or gpt-4).

anlek added the enhancement label Jul 27, 2023

anlek commented Jul 27, 2023

This might fix the issue reported on issue #92.

wilrodriguez commented

> This might fix the issue reported on issue #92.

I don't think it's directly related. I believe #92 is caused by a race condition where the Rubberduck UI panel seems to need to be in focus for things to generate correctly. Additionally, I have only been using gpt-4 and gpt-3.5-turbo in my configuration.


micahnz commented Oct 4, 2023

I would also like to specify the exact model and use gpt-3.5-turbo-16k, but because the setting is a drop-down it doesn't seem to let me type my own in, even when I try to force it as above.

lgrammel (Contributor) commented Oct 4, 2023

v1.17 supports gpt-4-32k and gpt-3.5-turbo-16k now.
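With that release, selecting the larger model should reduce to a settings entry. A sketch of the relevant settings.json fragment, assuming the "rubberduck.model" key quoted earlier in this issue:

```json
{
  "rubberduck.model": "gpt-3.5-turbo-16k"
}
```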

lgrammel closed this as completed Oct 4, 2023
Projects
None yet
Development

No branches or pull requests

4 participants