feat: replica selector #692

Merged
merged 9 commits into from
Dec 13, 2024
Changes from 4 commits
38 changes: 36 additions & 2 deletions cluster.go
@@ -18,6 +18,8 @@ import (
var ErrNoSlot = errors.New("the slot has no redis node")
var ErrReplicaOnlyConflict = errors.New("ReplicaOnly conflicts with SendToReplicas option")
var ErrInvalidShardsRefreshInterval = errors.New("ShardsRefreshInterval must be greater than or equal to 0")
var ErrReplicaOnlyConflictWithReaderNodeSelector = errors.New("ReplicaOnly conflicts with ReaderNodeSelector option")
var ErrSendToReplicasConflictWithReaderNodeSelector = errors.New("SendToReplicas conflicts with ReaderNodeSelector option")

type clusterClient struct {
pslots [16384]conn
@@ -41,6 +43,14 @@ type connrole struct {
replica bool
}

var sendToReader = func(cmd Completed) bool {
return cmd.IsReadOnly()
}

var replicaOnlySelector = func(_ uint16, replicas []ReplicaInfo) int {
return util.FastRand(len(replicas))
}

func newClusterClient(opt *ClientOption, connFn connFn, retryer retryHandler) (*clusterClient, error) {
client := &clusterClient{
cmd: cmds.NewBuilder(cmds.InitSlot),
@@ -55,8 +65,20 @@ func newClusterClient(opt *ClientOption, connFn connFn, retryer retryHandler) (*
if opt.ReplicaOnly && opt.SendToReplicas != nil {
return nil, ErrReplicaOnlyConflict
}
if opt.ReplicaOnly && opt.ReaderNodeSelector != nil {
return nil, ErrReplicaOnlyConflictWithReaderNodeSelector
}
if opt.SendToReplicas != nil && opt.ReaderNodeSelector != nil {
return nil, ErrSendToReplicasConflictWithReaderNodeSelector
Collaborator
I feel that we should not hide SendToReplicas when setting ReaderNodeSelector. That looks too implicit. Actually, I think we should ask users to provide SendToReplicas when ReaderNodeSelector is set:

Instead of simply this (ReaderNodeSelector alone makes it hard to reason about what it actually does):

	client, err := rueidis.NewClient(rueidis.ClientOption{
		ReaderNodeSelector: func(slot uint16, replicas []rueidis.ReplicaInfo) int {
			return rand.IntN(len(replicas))
		},
	})

I prefer making it more explicit:

	client, err := rueidis.NewClient(rueidis.ClientOption{
		SendToReplicas: func(cmd rueidis.Completed) bool {
			return cmd.IsReadOnly()
		},
		ReplicaSelector: func(slot uint16, replicas []rueidis.ReplicaInfo) int {
			return rand.IntN(len(replicas))
		},
	})

I believe the latter gives users a better sense of control over which replicas execute which commands.

Also, I think changing the name of ReaderNodeSelector to ReplicaSelector is more consistent with the existing naming.

Contributor Author

OK, using both SendToReplicas and ReplicaSelector makes sense to me. Even though the user can still select the primary via ReplicaSelector, it is configured explicitly, so it is clear to me.
90387d2
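A side note on the reply above: under the routing rule later in this diff, a selector can route a slot back to the primary by returning any out-of-range index such as -1. A minimal, self-contained sketch of that rule (resolveReader and the addresses are illustrative, not rueidis API):

```go
package main

import "fmt"

// ReplicaInfo mirrors the rueidis callback type: the address of one replica.
type ReplicaInfo struct{ Addr string }

// resolveReader applies the routing rule from this diff: a selector result
// inside [0, len(replicas)) picks that replica; anything else (e.g. -1)
// falls back to the primary.
func resolveReader(selector func(slot uint16, replicas []ReplicaInfo) int,
	slot uint16, primary string, replicas []ReplicaInfo) string {
	if i := selector(slot, replicas); i >= 0 && i < len(replicas) {
		return replicas[i].Addr
	}
	return primary
}

func main() {
	replicas := []ReplicaInfo{{Addr: "replica-1:6379"}, {Addr: "replica-2:6379"}}
	// A selector that pins even slots to the first replica and sends odd
	// slots back to the primary by returning -1.
	selector := func(slot uint16, _ []ReplicaInfo) int {
		if slot%2 == 0 {
			return 0
		}
		return -1
	}
	fmt.Println(resolveReader(selector, 4, "primary:6379", replicas)) // replica-1:6379
	fmt.Println(resolveReader(selector, 5, "primary:6379", replicas)) // primary:6379
}
```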

}

if opt.SendToReplicas != nil {
opt.ReaderNodeSelector = replicaOnlySelector
} else if opt.ReaderNodeSelector != nil {
opt.SendToReplicas = sendToReader
}

if opt.SendToReplicas != nil || opt.ReaderNodeSelector != nil {
rOpt := *opt
rOpt.ReplicaOnly = true
client.rOpt = &rOpt
@@ -236,15 +258,27 @@ func (c *clusterClient) _refresh() (err error) {
pslots[i] = conns[g.nodes[1+util.FastRand(nodesCount-1)]].conn
}
}
case c.rOpt != nil: // implies c.opt.SendToReplicas != nil
case c.rOpt != nil:
if len(rslots) == 0 { // lazy init
rslots = make([]conn, 16384)
}
if len(g.nodes) > 1 {
n := len(g.nodes) - 1
replicas := make([]ReplicaInfo, 0, n)
for _, addr := range g.nodes[1:] {
Collaborator
@rueian rueian Dec 12, 2024

Can we avoid this allocation by changing g.nodes from []string to []ReplicaInfo?

Contributor Author

I already considered it, but decided against it, because:

  1. g.nodes contains the primary node too, so a ReplicaInfo slice would be the wrong type and could mislead contributors.
  2. Users can change the values in each ReplicaInfo. Changing a ReplicaInfo is not a problem for rueidis itself, but it could cause side effects on the user side.

Because of the duplicated allocation, I also considered reusing the ReplicaInfo slice. The problem is that if a user holds on to the ReplicaInfo slice inside the ReplicaSelector, that could break.

If we need to reduce allocation, I prefer reusing the ReplicaInfo slice. In that case, we should add a comment saying that users must not store a reference to the ReplicaInfo slice.
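The reuse idea described here could look roughly like this (replicasOf and scratch are hypothetical helper names, not rueidis code, and a single-goroutine refresh is assumed); the doc comment carries the caveat that callers must not retain the slice:

```go
package main

import "fmt"

// ReplicaInfo mirrors the rueidis callback type.
type ReplicaInfo struct{ Addr string }

// scratch is reused across refreshes to avoid re-allocating the
// ReplicaInfo slice for every shard.
var scratch []ReplicaInfo

// replicasOf fills the reused slice with the replica addresses of one shard.
// Callers (i.e. the selector) must NOT retain the returned slice: the next
// call overwrites it. This is exactly the caveat raised in the thread.
func replicasOf(nodes []string) []ReplicaInfo {
	scratch = scratch[:0]
	for _, addr := range nodes[1:] { // nodes[0] is the primary
		scratch = append(scratch, ReplicaInfo{Addr: addr})
	}
	return scratch
}

func main() {
	r := replicasOf([]string{"primary:6379", "replica-1:6379", "replica-2:6379"})
	fmt.Println(len(r), r[0].Addr) // 2 replica-1:6379
	// The next refresh reuses the same backing array:
	r = replicasOf([]string{"primary:6379", "replica-3:6379"})
	fmt.Println(len(r), r[0].Addr) // 1 replica-3:6379
}
```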

Collaborator

Hi @proost,

Great considerations!

  1. If naming is the concern, we can use some sort of type alias: expose only ReplicaInfo to users but use another name internally.
  2. We can add a comment on the selector function telling users they should not change the values in the slice.

Contributor Author

50fad71

OK, I changed it.
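The type-alias idea agreed on above can be sketched as follows (nodeInfo and shard are hypothetical internal names used only for illustration): the internal alias lets g.nodes hold the primary without the misleading "replica" name, while the replica view becomes a plain sub-slice with no per-refresh allocation.

```go
package main

import "fmt"

// ReplicaInfo is the exported type users see in the selector callback.
type ReplicaInfo struct{ Addr string }

// nodeInfo is an internal alias: identical to ReplicaInfo, but the name
// no longer misleads contributors when the slice also holds the primary.
type nodeInfo = ReplicaInfo

type shard struct {
	nodes []nodeInfo // nodes[0] is the primary; the rest are replicas
}

func main() {
	g := shard{nodes: []nodeInfo{
		{Addr: "primary:6379"},
		{Addr: "replica-1:6379"},
	}}
	// No conversion or copy needed: the replica view is just a sub-slice
	// of the same backing array, so the extra allocation disappears.
	replicas := g.nodes[1:]
	fmt.Println(replicas[0].Addr) // replica-1:6379
}
```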

replicas = append(replicas, ReplicaInfo{Addr: addr})
}

for _, slot := range g.slots {
for i := slot[0]; i <= slot[1]; i++ {
pslots[i] = conns[master].conn
rslots[i] = conns[g.nodes[1+util.FastRand(len(g.nodes)-1)]].conn

rIndex := c.opt.ReaderNodeSelector(uint16(i), replicas)
if rIndex >= 0 && rIndex < n {
rslots[i] = conns[g.nodes[1+rIndex]].conn
} else {
rslots[i] = conns[master].conn
}
}
}
} else {