For every fetch queue, the fetcher keeps a counter of the "exceptions" observed when fetching from the host (or domain or IP) bound to that queue.
To make the crawler more polite, this counter could be used to dynamically increase the fetch delay for hosts where requests fail repeatedly, either with exceptions or with HTTP status codes mapped to ProtocolStatus.EXCEPTION (HTTP 403 Forbidden, 429 Too Many Requests, 5xx server errors, etc.). This should, of course, be optional. The aim is to reduce the load on such hosts before the configured maximum number of exceptions (property fetcher.max.exceptions.per.queue) is reached.
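As a rough illustration of one possible backoff policy, here is a minimal, self-contained Java sketch that scales the per-queue delay exponentially with the exception count. The class and method names are hypothetical and not part of the fetcher's actual API; the base delay and cap are placeholder values.

```java
// Hypothetical sketch: double the per-queue fetch delay for each
// observed exception, up to a configurable upper bound.
public class AdaptiveDelay {

    /**
     * @param baseDelayMs    the configured fetch delay for the queue
     * @param exceptionCount exceptions observed for this queue so far
     * @param maxDelayMs     upper bound so a queue is never delayed indefinitely
     */
    public static long nextDelayMs(long baseDelayMs, int exceptionCount, long maxDelayMs) {
        if (exceptionCount <= 0) {
            return baseDelayMs;
        }
        // exponential backoff, capped to avoid overflow and unbounded delays
        long scaled = baseDelayMs << Math.min(exceptionCount, 16);
        return Math.min(scaled, maxDelayMs);
    }

    public static void main(String[] args) {
        // base delay of 1s, capped at 5 minutes
        for (int exceptions = 0; exceptions <= 6; exceptions++) {
            System.out.printf("exceptions=%d -> delay=%dms%n",
                    exceptions, nextDelayMs(1_000L, exceptions, 300_000L));
        }
    }
}
```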
Instead of delaying, which would increase latency, trigger timeouts, and fail the tuples, it would be better to assume fetch errors for the URLs in the queue and push them straight to the status stream.
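A minimal sketch of that fail-fast behaviour, assuming a per-queue exception threshold. The `Status` enum, `emitToStatusStream`, and the threshold constant are hypothetical stand-ins for the real FetcherBolt and status-stream machinery:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Behavioural sketch only: once a queue has seen too many exceptions,
// stop fetching and mark every remaining queued URL as a fetch error.
public class FailFastQueue {

    enum Status { FETCH_ERROR }

    static final int EXCEPTION_THRESHOLD = 3; // placeholder value

    private final Deque<String> urls = new ArrayDeque<>();
    private int exceptionCount = 0;

    void add(String url) { urls.add(url); }

    void recordException() { exceptionCount++; }

    /** Drain the queue as fetch errors once the threshold is reached. */
    void maybeDrainAsErrors() {
        if (exceptionCount < EXCEPTION_THRESHOLD) return;
        String url;
        while ((url = urls.poll()) != null) {
            emitToStatusStream(url, Status.FETCH_ERROR);
        }
    }

    void emitToStatusStream(String url, Status status) {
        // stand-in for emitting the tuple on the status stream
        System.out.println(url + " -> " + status);
    }

    public static void main(String[] args) {
        FailFastQueue q = new FailFastQueue();
        q.add("https://example.com/a");
        q.add("https://example.com/b");
        for (int i = 0; i < 3; i++) q.recordException();
        q.maybeDrainAsErrors(); // both URLs are failed without being fetched
    }
}
```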
An even better approach would be to implement #867 and send information at the queue level so that URLs from that queue are held back for a while. URLFrontier would be a good match for that.
See NUTCH-2946