Is this request related to a problem? Please describe.
I've only just started playing with this extension; however, I seem to hit the max context length.
Error: This model's maximum context length is 4097 tokens. However, you requested 5003 tokens (3979 in the messages, 1024 in the completion). Please reduce the length of the messages or completion.
Describe the solution you'd like
I'd love to be able to use the 16k version (or have the extension auto-select the 16k model when the input is large).
Additional context
I've tried forcing the setting `"rubberduck.model": "gpt-3.5-turbo-16k"`, but it errors, saying I'm not allowed to select a model that isn't on the provided list (`gpt-3.5-turbo` or `gpt-4`).
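The auto-selection idea could be sketched roughly like this (a minimal sketch, not Rubberduck's actual code; the `selectModel` helper and the hard-coded context-window sizes are assumptions for illustration):

```typescript
// Tokens reserved for the completion, matching the error message above.
const MAX_COMPLETION_TOKENS = 1024;

// Context-window sizes for the two model variants, smallest first.
const CONTEXT_WINDOWS: Array<[string, number]> = [
  ["gpt-3.5-turbo", 4097],
  ["gpt-3.5-turbo-16k", 16385],
];

// Pick the smallest model whose context window fits the prompt plus the
// reserved completion budget; fall back to the largest if nothing fits.
function selectModel(promptTokens: number): string {
  for (const [model, window] of CONTEXT_WINDOWS) {
    if (promptTokens + MAX_COMPLETION_TOKENS <= window) {
      return model;
    }
  }
  return CONTEXT_WINDOWS[CONTEXT_WINDOWS.length - 1][0];
}

// The failing request from the error: 3979 + 1024 = 5003 > 4097,
// so the 16k model would be chosen automatically.
console.log(selectModel(3979)); // "gpt-3.5-turbo-16k"
console.log(selectModel(2000)); // "gpt-3.5-turbo"
```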
I don't think it's directly related. I believe #92 is caused by a race condition where the Rubberduck UI panel seems to need to be in focus for things to generate correctly. Additionally, I have only been using gpt-4 and gpt-3.5-turbo in my configuration.
I would also like to specify the exact model and use gpt-3.5-turbo-16k, but because the setting is a dropdown, it doesn't let me type in my own value, even if I try to force it as above.