Handle throttle return codes from send APIs #55
Merged
Conversation
Patches 4.4.0 and 10.2.7 are "improving" SendAddonMessage and its logged cousin by making them return a bit more information on failure, in the form of an enum rather than a boolean. This means we can now deal with cases where the client is actively throttling comms on a specific prefix.

The problem, however, is that alongside this comes a separate change that makes *all* traffic subject to the 10-message-slots-per-prefix system that had been applied to party and raid comms in recent patches. In cases where there are few addons actually sending data, the default tuning of Chomp's BPS and BURST values is stratospherically high and will result in messages being dropped by the API, so we need to handle the throttling.

Using the result, we'll check if the client has blocked a message due to the per-prefix or per-channel throttle, and if it has, we'll requeue it and redistribute bandwidth to the remainder of the messages in the same priority.

There is a bit of gnarly logic here though: the function that sends data out only terminates its loop if there's no data left to send, or if there aren't enough bytes left in the pool to send the remaining data with. This fights with the whole process of requeueing. As such, when requeueing messages we put them back into the queue and redistribute the bandwidth, but we don't requeue the queue itself until we're outside of the loop, instead deferring that operation by placing it into a 'blocked queues' table.

We also now don't destroy queues before sending data, but wait until afterwards; this is required because we only want to clean up the 'priority.byName' queue reference if a queue is in an empty and unblocked state post-send. These changes should hopefully keep much of the logic in the successful-send and unrecoverable-failure cases the same.
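A minimal sketch of that throttle check, assuming the new return value is an Enum.SendAddonMessageResult code with AddonMessageThrottle and ChannelThrottle members; the RequeueMessage and RedistributeBandwidth helpers are hypothetical stand-ins rather than Chomp's actual internals:

```lua
-- Illustrative sketch only; helper names are hypothetical, not Chomp's real internals.
local function TrySend(prefix, text, chatType, target)
	-- On 4.4.0 / 10.2.7 clients, SendAddonMessage returns an enum rather than a boolean.
	local result = C_ChatInfo.SendAddonMessage(prefix, text, chatType, target)

	if result == Enum.SendAddonMessageResult.Success then
		return true -- sent; invoke callbacks as usual
	elseif result == Enum.SendAddonMessageResult.AddonMessageThrottle
		or result == Enum.SendAddonMessageResult.ChannelThrottle then
		-- Client-side throttle hit: put the message back and hand its byte cost
		-- back to the other messages waiting in the same priority.
		RequeueMessage(prefix, text, chatType, target)   -- hypothetical helper
		RedistributeBandwidth()                          -- hypothetical helper
		return false
	else
		-- Any other failure is unrecoverable; treat it like the old boolean false.
		return false
	end
end
```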
The immediate-send functionality relied on the now-removed hasQueue boolean. Instead of re-adding it, though, we'll just make that state accessible via a function call, as sketched below.
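A sketch of what such an accessor might look like; the function name and the internal fields it reads (Internal.priorities, priority.queue) are assumptions for illustration:

```lua
-- Hypothetical accessor replacing the removed hasQueue boolean.
function Internal.HasQueuedData()
	-- Data is still pending if any priority has a non-empty queue.
	for _, priority in ipairs(Internal.priorities) do
		if #priority.queue > 0 then
			return true
		end
	end
	return false
end
```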
These need to be accessible elsewhere as well for the immediate-send logic.
There are a few naked calls to SendAddonMessage APIs that occur if there's no queued data. These could potentially return throttle error codes, so we need to handle those. If these calls indicate that the data the user wants to send is throttled, we won't return early and will instead fall through to the queue logic (see the sketch below). Note also that this code always seems to invoke the callback with a true 'didSend' parameter and never bothered to check for a false return previously; we won't rock the boat and fix that now, though.
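Roughly the shape of that change, under the same assumptions as the earlier sketches; everything except C_ChatInfo.SendAddonMessage and Enum.SendAddonMessageResult is hypothetical:

```lua
-- Sketch of the immediate-send path; EnqueueMessage and Internal.HasQueuedData
-- are hypothetical helpers, not Chomp's real API.
local function SendMessageImmediate(prefix, text, kind, target, callback, callbackArg)
	if not Internal.HasQueuedData() then
		local result = C_ChatInfo.SendAddonMessage(prefix, text, kind, target)
		local throttled = result == Enum.SendAddonMessageResult.AddonMessageThrottle
			or result == Enum.SendAddonMessageResult.ChannelThrottle

		if not throttled then
			-- Preserve the existing behaviour: the callback is always told the
			-- message was sent, without checking other failure codes.
			if callback then
				callback(callbackArg, true)
			end
			return
		end
		-- Throttled: fall through to the normal queue logic instead of returning.
	end
	EnqueueMessage(prefix, text, kind, target, callback, callbackArg) -- hypothetical
end
```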
The despooling logic is generic over which send function it executes: it simply calls a queued 'f' and unpacks a bunch of parameters into it. Not all 'f's are the same when it comes to return values, though; one f may return a boolean, another a result code, and another nothing whatsoever. In the nothing-whatsoever case, select will explode because there are no results on the stack, so we'll handle that by assuming that any f that returns absolutely nothing sent its data successfully.
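One way to express that safely is to capture whatever f returns into a table and treat an empty result set as success; InterpretSendResult below is a hypothetical stand-in for however the caller maps a boolean or result code onto 'didSend':

```lua
-- Sketch of the generic despool call; InterpretSendResult is hypothetical.
local function CallQueuedSend(f, ...)
	local results = { f(...) }
	if #results == 0 then
		-- Some send functions return nothing at all; treat that as success
		-- rather than letting a select() over an empty result list error out.
		return true
	end
	return InterpretSendResult(results[1])
end
```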
Solanya
approved these changes
Apr 29, 2024