This repository has been archived by the owner on Feb 12, 2024. It is now read-only.
The ability to send messages in batches was introduced in #14, which utilises the batch sending implementation of each messaging provider. Currently, Service Bus has a message size limit of 256 KB for Standard tier Service Bus Namespaces, and this limit also applies to the size of a batch that is sent. The generic `EnqueueAsync` methods serialise content into `byte[]` payloads in order to initialise `QueueMessage`s. This means that the size of the content of each message can be calculated, and therefore so can the size of a batch.
The proposed solution to this problem is to process batch send requests internally. Messages can be handled individually when adding them to a batch - with direct access to the size of each message - and once a batch has reached the size limit, subsequent messages can be added to a new batch. All batches can then be sent to the selected messaging provider internally, so users can call the batch sending endpoints safely regardless of total payload size.
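The splitting logic described above can be sketched roughly as follows. This is an illustrative example in Python (the library itself is .NET); the function name `split_into_batches` and the JSON serialisation are assumptions for the sketch, while the 256 KB limit and the serialised-payload sizing come from the issue description.

```python
import json

MAX_BATCH_BYTES = 256 * 1024  # Standard-tier Service Bus batch size limit

def split_into_batches(contents, max_bytes=MAX_BATCH_BYTES):
    """Serialise each message and group the payloads into batches,
    each of which stays within the size limit."""
    batches, current, current_size = [], [], 0
    for content in contents:
        payload = json.dumps(content).encode("utf-8")
        if len(payload) > max_bytes:
            # A single oversized message can never be sent; fail fast.
            raise ValueError("message exceeds the maximum batch size")
        if current and current_size + len(payload) > max_bytes:
            # Current batch is full: start a new one.
            batches.append(current)
            current, current_size = [], 0
        current.append(payload)
        current_size += len(payload)
    if current:
        batches.append(current)
    return batches
```

Each resulting batch would then be sent to the provider in turn, which is where the partial-failure concern raised below comes in.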
We do need to be mindful of any changes in behaviour if we introduce client-side batching of messages. Currently, when sending a batch of messages to Service Bus, either the entire batch of messages is enqueued, or none are. If we move to processing batches in the client, we might introduce situations where one batch is sent successfully but the next batch fails to be sent. We'll need to consider carefully whether, and how, to avoid this situation.
The current implementation will throw if a batch is too large. I think that is a safe default (we put the responsibility onto users of our library to construct messages that meet the requirements of the messaging implementation they are using). However, a downside of the current implementation is that behaviour differs between queue implementations, which our abstraction is supposed to hide as far as possible. At the very least, we should document this difference.