Allow BatchSpanProcessor to send early when a full batch is ready #4164
Which problem is this PR solving?
The `BatchSpanProcessor` waits `scheduledDelayMillis` (5000 by default) since the arrival of the first span, or since the last export, before exporting. Each export sends at most one batch of `maxExportBatchSize` spans (512 by default). If more than 512 spans are produced per 5 seconds, the surplus builds up in the queue until it overflows and spans start being dropped, which is observable through logs like: `Dropped 2576 spans because maxQueueSize reached`
Short description of the changes
In comparison, Java's `BatchSpanProcessor` also uses a configured delay and batch size, but sends a batch as soon as a sufficient number of spans is enqueued (https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk/trace/src/main/java/io/opentelemetry/sdk/trace/export/BatchSpanProcessor.java#L244). Therefore the delay doesn't limit the maximum throughput. This PR implements similar logic in JS's `BatchSpanProcessor`.
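To illustrate the idea, here is a simplified sketch of the early-export behavior. This is not the actual SDK internals; the class and member names below are hypothetical. When enqueuing a finished span completes a full batch, the processor exports immediately instead of waiting out the scheduled delay:

```ts
// Simplified sketch of early export on a full batch. Names and structure
// are illustrative only and do not match the SDK's private internals.
type Span = { name: string };

class BatchProcessorSketch {
  private queue: Span[] = [];
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private readonly exportBatch: (batch: Span[]) => void,
    private readonly maxExportBatchSize = 512,
    private readonly scheduledDelayMillis = 5000
  ) {}

  onEnd(span: Span): void {
    this.queue.push(span);
    if (this.queue.length >= this.maxExportBatchSize) {
      // A full batch is ready: export now instead of waiting for the timer.
      this.flush();
    } else if (this.timer === undefined) {
      // Otherwise keep the usual delayed export as a latency bound.
      this.timer = setTimeout(() => this.flush(), this.scheduledDelayMillis);
    }
  }

  private flush(): void {
    if (this.timer !== undefined) {
      clearTimeout(this.timer);
      this.timer = undefined;
    }
    const batch = this.queue.splice(0, this.maxExportBatchSize);
    if (batch.length > 0) {
      this.exportBatch(batch);
    }
    if (this.queue.length > 0) {
      // Re-arm the timer so leftover spans are not stranded in the queue.
      this.timer = setTimeout(() => this.flush(), this.scheduledDelayMillis);
    }
  }
}
```

With this behavior the scheduled delay only bounds the latency of partially filled batches, while a sustained high span rate drains the queue one full batch at a time rather than overflowing it.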
Type of change
How Has This Been Tested?
Checklist: