
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4153 tokens (233 in your prompt; 3920 for the completion). Please reduce your prompt; or completion length. #15

Open
YZJ2023 opened this issue Feb 13, 2023 · 0 comments

Comments


YZJ2023 commented Feb 13, 2023

Your input:
How do I set your stop sequences?
ChatGPT output:
Stop sequences can be set by providing a specific string to the GPT-3 language model. This can be a simple string or a more complex regular expression. When the GPT-3 model encounters this string, it stops generating text.
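The behavior described in the quoted answer can be illustrated with a small sketch. The `apply_stop` helper below is hypothetical and only mimics what happens server-side; in the actual OpenAI completions API, `stop` accepts up to four literal strings (not regular expressions, despite what the quoted output claims), and the returned text ends just before the first match.

```python
def apply_stop(text, stop_sequences):
    """Truncate generated text at the earliest stop sequence,
    mimicking the API's server-side stop behavior."""
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# Using the same stop sequence revChatGPT passes ("\n\n\n"):
print(apply_stop("Hello\n\n\nWorld", ["\n\n\n"]))  # -> Hello
```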

Your input:
geiwoyigelizi
Traceback (most recent call last):
  File "app.py", line 5, in <module>
    run()
  File "/data/EASYChatGPT/bbot.py", line 24, in run
    out = chatbot.ask(input_text)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/revChatGPT/Official.py", line 50, in ask
    stop=["\n\n\n"],
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 160, in create
    request_timeout=request_timeout,
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 623, in _interpret_response
    stream=False,
  File "/home/user/anaconda3/envs/EASYChatGPT/lib/python3.7/site-packages/openai/api_requestor.py", line 680, in _interpret_response_line
    rbody, rcode, resp.data, rheaders, stream_error=stream_error
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4153 tokens (233 in your prompt; 3920 for the completion). Please reduce your prompt; or completion length.
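The error occurs because the prompt tokens plus the requested completion length exceed the model's context window: 233 + 3920 = 4153 > 4097. A minimal sketch of a workaround, assuming you can count your prompt's tokens (e.g. with a tokenizer such as tiktoken), is to clamp the requested completion length to whatever room remains. The `clamp_max_tokens` helper below is hypothetical, not part of revChatGPT or the openai package:

```python
MODEL_CONTEXT_LIMIT = 4097  # context window reported in the error message

def clamp_max_tokens(prompt_tokens, requested, limit=MODEL_CONTEXT_LIMIT):
    """Return a completion length that fits in the model's context window."""
    available = limit - prompt_tokens
    if available <= 0:
        raise ValueError("prompt alone fills the context window; trim the prompt")
    return min(requested, available)

# Numbers from the traceback: 233 prompt tokens, 3920 requested.
# 4097 - 233 = 3864, so the completion budget is clamped to 3864.
print(clamp_max_tokens(233, 3920))  # -> 3864
```

Passing the clamped value as `max_tokens` (instead of a fixed 3920) avoids the `InvalidRequestError`; alternatively, shorten the accumulated conversation history so the prompt itself uses fewer tokens.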
