Timeout was reached #213
Comments
This serves as a workaround:
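The workaround snippet itself did not survive in this copy of the thread. As a general illustration only (the URL and the 300-second value are placeholders, not taken from this thread), raising the timeout on an httr2 request looks like this:

```r
library(httr2)

# Hypothetical sketch: build a request with a longer timeout than
# httr2's default. This is the kind of change discussed in this issue,
# not the actual workaround posted here.
req <- request("https://api.openai.com/v1/chat/completions") |>
  req_timeout(300)  # allow up to 300 seconds before erroring
```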
Do you have an example of what size prompt + what provider causes this problem?
I noticed this issue with the OpenAI API. However, it is not easy to replicate as it is transient in character, although it happens regularly. But being able to set custom / increased timeout limits in httr2 usually solves those problems.
I have the same issue with the OpenAI API. It happens very randomly; I have not yet been able to reproduce it.
I don't think this really helps, but this is the error I am running into on every 5th or 6th call to OpenAI.
It's weird that I've never seen this issue — I've seen a ton of other random problems because the elmer checks run pretty frequently, but not this one.
It is also weird that it never happens with the official Python API. It is quite hard to replicate, but if you try a long and complex context (e.g., instructions for the screening phase of a systematic review) and run it many times, maybe in parallel (e.g., on some hundreds or thousands of abstracts), it will almost always happen. If you repeat the query on the errored records only, sometimes that resolves it and sometimes the error remains. Anyway, it never happens using the Python API (not a single time in my experience, as I mentioned in irudnyts/openai#61 (comment)), which led me to add an option for that in gpteasyr (https://github.com/CorradoLanera/gpteasyr?tab=readme-ov-file#pythons-backend). Hacking timeouts (one or more of the many) sometimes solves the issue, but never consistently... sometimes the only solution I found was switching to the Python backend (e.g., with reticulate), which by the way always solves the issue immediately 🤔🤷
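For readers unfamiliar with the reticulate route mentioned above, a minimal sketch of calling the official Python client from R might look like the following. This is not gpteasyr's actual implementation; the model name and message are placeholders, and it assumes the openai Python package (version >= 1.0) is installed and OPENAI_API_KEY is set:

```r
library(reticulate)

# Hypothetical sketch of a Python backend via reticulate.
openai <- import("openai")  # official Python client
client <- openai$OpenAI()   # reads OPENAI_API_KEY from the environment
resp <- client$chat$completions$create(
  model = "gpt-4o-mini",    # placeholder model name
  messages = list(list(role = "user", content = "Hello"))
)
```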
Original issue description:
When the prompt and output are large and the LLM needs more time to generate the response, the $chat() method is prone to result in a "Timeout was reached" error. It would be good to have some way to set / increase httr2::req_timeout() in chat_perform_value:
https://github.com/tidyverse/elmer/blob/47ef8ecbb562fa208ba59176794bdaf58273a429/R/httr2.R#L31