[finagle-memcached] apply offloading of responses after gathering response parts

# Problem
Finagle offloads responses from clients as soon as possible to free up netty threads.
If a multikey request to memcached is partitioned across memcached endpoints, the response of each sub-request is offloaded separately.
This adds inter-thread synchronisation and can delay sub-responses while they wait in the offload queue.
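
As a hypothetical illustration (the destination address and keys below are made up), a single multikey `get` against a sharded cluster fans out into one sub-request per endpoint, and previously each partial result crossed the offload boundary on its own:

```scala
import com.twitter.finagle.Memcached
import com.twitter.io.Buf
import com.twitter.util.Future

// One logical request, partitioned by key hash across two endpoints.
// Before this change, each endpoint's partial result was offloaded separately,
// before the partials were merged into the final map.
val client = Memcached.client.newRichClient("host1:11211,host2:11211")
val values: Future[Map[String, Buf]] = client.get(Seq("user:1", "user:2", "user:3"))
```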

# Solution
Offload the response only after all sub-responses have been collected.
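
Conceptually (a simplified sketch under assumptions, not the actual `OffloadFilter` internals), an offload hop shifts a future's continuations from the netty I/O thread onto a worker pool, so placing the filter beneath the partitioning service turns N pool hops into one:

```scala
import com.twitter.util.{Future, FuturePool}

// Simplified stand-in for an offload hop: continuations chained after this
// point run on the worker pool instead of the netty I/O thread.
def offload[T](f: Future[T], pool: FuturePool): Future[T] =
  f.flatMap(v => pool(v))

// Before: offload each of the N partitioned sub-responses, then merge them.
// After:  merge the N sub-responses first, then offload the single result once.
```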

JIRA Issues: STOR-8861

Differential Revision: https://phabricator.twitter.biz/D1184723
Anton Ivanov authored and jenkins committed Nov 22, 2024
1 parent d6872e1 commit 744e0ae
Showing 1 changed file with 7 additions and 0 deletions.
@@ -6,6 +6,7 @@ import com.twitter.hashing
import com.twitter.finagle.client._
import com.twitter.finagle.dispatch.SerialServerDispatcher
import com.twitter.finagle.dispatch.StalledPipelineTimeout
import com.twitter.finagle.filter.OffloadFilter
import com.twitter.finagle.liveness.FailureAccrualFactory
import com.twitter.finagle.liveness.FailureAccrualPolicy
import com.twitter.finagle.loadbalancer.Balancers
@@ -268,6 +269,12 @@ object Memcached extends finagle.Client[Command, Response] with finagle.Server[C
BindingFactory.role,
MemcachedPartitioningService.module
)
// We want offloading to happen after partitioning, i.e. after all responses are collected
// to reduce pressure on the offload pool
.remove(OffloadFilter.Role)
.insertBefore(
MemcachedPartitioningService.role,
OffloadFilter.client[Command, Response])
// We want this to go after the MemcachedPartitioningService so that we can get individual
// spans for fanout requests. It's currently at protoTracing, so we remove it to re-add below
.remove(MemcachedTracingFilter.memcachedTracingModule.role)
