
Exceeding maximum context #10

Open
diguardiag opened this issue Apr 4, 2023 · 2 comments

Comments

@diguardiag
Ran the example, ingested the state_of_the_union, asked a few questions and every single time I was getting this error:

openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4231 tokens (3975 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.
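The arithmetic behind that message, as a quick sketch (numbers taken straight from the traceback; the model's context window is shared between prompt and completion):

```python
# Token budget from the error: 4097 tokens total for prompt + completion.
MAX_CONTEXT = 4097
COMPLETION_TOKENS = 256   # reserved for the answer
prompt_tokens = 3975      # what the retrieved chunks + question consumed

requested = prompt_tokens + COMPLETION_TOKENS
print(requested)                        # 4231, which exceeds 4097
print(requested - MAX_CONTEXT)          # 134 tokens over budget
print(MAX_CONTEXT - COMPLETION_TOKENS)  # 3841 = largest prompt that still fits
```

So either the prompt (the retrieved document chunks) has to shrink by at least 134 tokens, or the completion budget does.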


TychoML commented Apr 6, 2023

I don't know if it is really related, but for me the issue was resolved by specifying the OpenAI model to use in query_data.py:
llm=OpenAI(model_name="gpt-3.5-turbo", temperature=0)

Without that, it defaults to text-davinci, which is far more expensive. You can check which model is being used at https://platform.openai.com/account/usage
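A minimal sketch of that change, assuming query_data.py builds its chain with LangChain's `OpenAI` wrapper (the surrounding chain construction here is illustrative, not the repo's actual code):

```python
# Sketch: pass model_name explicitly instead of relying on the default
# (text-davinci-003 at the time), and keep the completion budget explicit
# so prompt + completion stay inside the model's context window.
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="gpt-3.5-turbo",  # per TychoML's comment; cheaper than text-davinci
    temperature=0,
    max_tokens=256,  # completion reserve; prompt must fit in (context - this)
)
```

Note that gpt-3.5-turbo has a similar ~4k context window, so if the error persists you would also need to retrieve fewer or smaller document chunks.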

@AssistMoli

Agree with TychoML. It was solved for me by substituting model_name="gpt-3.5-turbo".
