
Handle throttle return codes from send APIs #55

Merged

Solanya merged 9 commits into main from feature/add-sam-result-support on Apr 29, 2024

Conversation

@Meorawr Meorawr commented Apr 29, 2024

Patches 4.4.0 and 10.2.7 are "improving" SendAddonMessage and its logged cousin by making them return a bit more information on failure, in the form of an enum rather than a boolean.
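
For illustration, a minimal sketch of consuming the new return value; treat the specific enum members named here as assumptions about how the client exposes the result:

```lua
-- Illustrative sketch: the send APIs now return a result code rather
-- than a boolean, so callers can tell throttling apart from
-- unrecoverable failures. Prefix/text/target values are placeholders.
local prefix, text, target = "Chomp", "example payload", UnitName("player")
local result = C_ChatInfo.SendAddonMessage(prefix, text, "WHISPER", target)

if result == Enum.SendAddonMessageResult.Success then
    -- The client accepted the message.
elseif result == Enum.SendAddonMessageResult.AddonMessageThrottle
    or result == Enum.SendAddonMessageResult.ChannelThrottle then
    -- Throttled: the message is recoverable and can be requeued.
else
    -- Anything else is treated as an unrecoverable failure.
end
```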

This means we can now deal with cases where the client is actively throttling comms on a specific prefix. The problem, however, is that this implementation arrives alongside a separate change: all traffic is now subject to the 10-message-slots-per-prefix system that had been applied to party and raid comms in recent patches.

This means that in cases where there are few addons actually sending data, the default tuning of Chomp's BPS and BURST values is stratospherically high and will result in messages being dropped by the API, so we need to handle the throttling.

Using the result, we'll check whether the client has blocked a message due to the per-prefix or per-channel throttle, and if so we'll requeue it and redistribute bandwidth to the remaining messages in the same priority.

There is a bit of gnarly logic here though: the function that sends data out only terminates its loop if there's no data left to send, or if there aren't enough bytes left in the pool to send the remaining data. This fights with the whole process of requeueing.

As such, when requeueing messages we put them back into the queue and redistribute the bandwidth, but we don't requeue the queue itself until we're outside of the loop, instead deferring that operation by placing it into a 'blocked queues' table.
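
A rough sketch of that shape; every helper name here is a hypothetical stand-in for Chomp's internals, not the actual implementation:

```lua
-- Hypothetical sketch of the deferred-requeue flow described above.
-- HasPendingData, CanAffordNextMessage, PopNextSendable,
-- SendQueuedMessage, IsThrottleResult, RedistributeBandwidth and
-- ScheduleQueue are all assumed stand-ins.
local blockedQueues = {}

while HasPendingData() and CanAffordNextMessage(pool) do
    local queue, message = PopNextSendable()
    local result = SendQueuedMessage(message)

    if IsThrottleResult(result) then
        -- Put the message back and spread its bandwidth over the
        -- remaining messages in the same priority...
        table.insert(queue, 1, message)
        RedistributeBandwidth(queue.priority)
        -- ...but defer rescheduling the queue itself; doing it inside
        -- the loop would fight the loop's termination conditions.
        blockedQueues[queue] = true
    end
end

-- Outside the loop it's now safe to reschedule the blocked queues.
for queue in pairs(blockedQueues) do
    ScheduleQueue(queue)
end
```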

We also no longer destroy queues before sending data, but wait until afterwards; this is required because we want to clean up the 'priority.byName' queue reference only if a queue is in an empty and unblocked state post-send.
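
In sketch form, reusing the hypothetical names from above (only 'priority.byName' comes from the actual change):

```lua
-- Hypothetical post-send cleanup: drop the 'priority.byName' reference
-- only once a queue is both empty and unblocked.
if #queue == 0 and not blockedQueues[queue] then
    priority.byName[queue.name] = nil
end
```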

These changes should hopefully keep much of the logic in the successful-send and unrecoverable-failure cases the same.

@Meorawr Meorawr added the enhancement New feature or request label Apr 29, 2024
@Meorawr Meorawr requested a review from Solanya April 29, 2024 12:29
@Meorawr Meorawr self-assigned this Apr 29, 2024
Meorawr added 8 commits April 29, 2024 13:32
The immediate send functionality relied on the now-removed hasQueue
boolean. Instead of re-adding it though, we'll just make it accessible
via a function call.
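
Something along these lines, as a sketch (the function name and queue shape are assumptions):

```lua
-- Hypothetical sketch: derive the answer from queue state on demand
-- instead of maintaining the removed hasQueue flag.
local function HasQueuedData(priority)
    return priority.queue[1] ~= nil
end
```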
These need to be accessible elsewhere as well for the immediate-send
logic.
There are a few naked calls to SendAddonMessage APIs that occur if there's
no queued data. These could potentially return throttle error codes, so
we need to handle those.

If these calls indicate that the data the user wants to send is
throttled, we won't return early and will instead trigger the queue
logic.

Note also that this code has always invoked the callback with a true
'didSend' parameter and never previously bothered to check for a false
return. We won't rock the boat by fixing that now though.
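
A hedged sketch of that flow (the helper names and callback signature are assumptions, not Chomp's actual code):

```lua
-- Hypothetical sketch: an immediate send that falls back to the queue
-- when the client reports a throttle, instead of returning early.
local result = C_ChatInfo.SendAddonMessage(prefix, text, kind, target)

local throttled = result == Enum.SendAddonMessageResult.AddonMessageThrottle
    or result == Enum.SendAddonMessageResult.ChannelThrottle

if not throttled then
    if callback then
        -- Preserving the existing quirk: didSend is always reported true.
        callback(callbackArg, true)
    end
    return
end

-- Throttled: don't return early; hand the message to the queue logic.
EnqueueMessage(prefix, text, kind, target, callback, callbackArg)
```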
The despooling logic is generic over which send function it executes -
it simply calls a queued 'f' and unpacks a bunch of parameters into it.

Not all 'f's are the same though when it comes to return values, and it
may be the case that one f returns a boolean, another f returns a result
code, and another f returns nothing whatsoever.

In the nothing whatsoever case, select will explode due to there being
no results on the stack - so we'll handle that by assuming that any f
that returns absolutely nothing sent its data successfully.
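
A sketch of how such a normalisation might look (the wrapper name is hypothetical; the select("#", ...) behaviour is standard Lua):

```lua
-- Hypothetical wrapper: normalise the mixed return conventions of the
-- queued send functions 'f'.
local function DidSendSucceed(...)
    if select("#", ...) == 0 then
        -- 'f' returned nothing whatsoever: assume a successful send.
        return true
    end

    local result = ...

    if type(result) == "number" then
        -- Result-code convention: only Success counts as sent.
        return result == Enum.SendAddonMessageResult.Success
    end

    -- Boolean convention.
    return result == true
end

-- Usage at the despool site might then look like:
--   local didSend = DidSendSucceed(f(unpack(args)))
```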
@Solanya Solanya merged commit 39700b5 into main Apr 29, 2024
1 check passed
@Solanya Solanya deleted the feature/add-sam-result-support branch April 29, 2024 21:30