We occasionally encounter errors under heavy load where all 5 tries are exhausted:
com/microsoft/azure/datalake/store/ADLFileInputStream.read:com.microsoft.azure.datalake.store.ADLException: Error reading from file [filename]
Operation OPEN failed with HTTP429 : ThrottledException
Last encountered exception thrown after 5 tries. [HTTP429(ThrottledException),HTTP429(ThrottledException),HTTP429(ThrottledException),HTTP429(ThrottledException),HTTP429(ThrottledException)]
[ServerRequestId: redacted]
com.microsoft.azure.datalake.store.ADLStoreClient.getExceptionFromResponse(ADLStoreClient.java:1179)
com.microsoft.azure.datalake.store.ADLFileInputStream.readRemote(ADLFileInputStream.java:252)
com.microsoft.azure.datalake.store.ADLFileInputStream.readInternal(ADLFileInputStream.java:221)
com.microsoft.azure.datalake.store.ADLFileInputStream.readFromService(ADLFileInputStream.java:132)
com.microsoft.azure.datalake.store.ADLFileInputStream.read(ADLFileInputStream.java:101)
The readRemote() method uses the default constructor, new ExponentialBackoffPolicy(), and there doesn't seem to be any way to specify more retries or a steeper backoff. In our use case, Hadoop tasks running in parallel apparently overwhelm the default backoff strategy.
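
Since the policy inside readRemote() can't be swapped out, the only mitigation we can see is to add another layer of backoff around the read call in the application itself. Below is a minimal sketch of that idea, not a fix for the SDK; the httpResponseCode field on ADLException is assumed to carry the HTTP status (429 for throttling) in the SDK version we use.

```java
import java.io.IOException;

import com.microsoft.azure.datalake.store.ADLException;
import com.microsoft.azure.datalake.store.ADLFileInputStream;

// Illustrative application-level workaround: wrap the read in an outer
// exponential backoff so that throttling (HTTP 429) that survives the SDK's
// five internal tries is retried again by the caller.
public final class ThrottleTolerantReader {

    public static int readWithBackoff(ADLFileInputStream in,
                                      byte[] buf,
                                      int maxAttempts,
                                      long initialDelayMs)
            throws IOException, InterruptedException {
        long delay = initialDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                return in.read(buf);  // the SDK still does its own 5 retries internally
            } catch (ADLException e) {
                // httpResponseCode == 429 is assumed to indicate ThrottledException
                boolean throttled = (e.httpResponseCode == 429);
                if (!throttled || attempt >= maxAttempts) {
                    throw e;          // not throttling, or outer attempts exhausted
                }
                Thread.sleep(delay);  // back off before the next outer attempt
                delay *= 2;           // steeper, application-controlled backoff
            }
        }
    }
}
```

This only papers over the problem, though; being able to pass a configured ExponentialBackoffPolicy (or any RetryPolicy) down to ADLFileInputStream would still be the preferred solution.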
wheezil changed the title from "The retry count of ExponentialBackoffPolicy is not settable" to "The retry count of ExponentialBackoffPolicy created by ADLFileInputStream is not configurable" on Sep 19, 2019