DirectSender.Flush() not performing a full flush #25
I would like to go ahead and make this change. I am thinking:
Additional thoughts, from talking with @priyaselvaganesan and looking at the equivalent code in the Java SDK: we wonder about the safety of performing a full flush. What if the SDK is sending to a proxy that is already overwhelmed? Will the proxy throttle those requests? Will it create backpressure? Would we want flush to block indefinitely if the proxy is making us wait? Or would this not happen because the SDK buffer can only be so big anyway? What if the flush takes a really long time? Should there be a timeout? Thinking about renaming
Another thought: maybe failed lines need a retry limit.
When `DirectSender.Flush()` is called, it calls `Flush()` on the underlying `LineHandlers`. `LineHandler.Flush()` only performs a flush of a single batch, leaving the rest of the buffer in memory. Typically, `Flush()` on something means that you'd block until all data is written to some backend storage. I suggest we change the code to call `FlushAll()` on the `LineHandlers` instead, since this would be more in line with the typical semantics of a `Flush()` call.

If we are worried about flooding Wavefront with points, we may want to offer a `FlushPartial()` or `FlushCurrentBatch()`, but, in my opinion, it's confusing to have `Flush()` do something that's not in line with what programmers typically expect.

Even if we don't change the behavior of `Flush()`, the senders need to offer a `FlushAll()`. This is crucial if you have a data collector that doesn't run continuously, e.g. something that's started from cron, does its job, and then dies. Such a program needs a way to ensure that everything is written to the backend before it exits.