Tweaks to benchmark client configuration #259

Merged 1 commit on Mar 10, 2024
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
16 changes: 15 additions & 1 deletion cmd/river/riverbench/river_bench.go
@@ -88,9 +88,23 @@ func (b *Benchmarker[TTx]) Run(ctx context.Context) error {
 	river.AddWorker(workers, &BenchmarkWorker{})
 
 	client, err := river.NewClient(b.driver, &river.Config{
+		// When benchmarking to maximize job throughput, these numbers have an
+		// outsized effect on results. The ones chosen here could possibly be
+		// optimized further, but based on my tests of throwing a lot of random
+		// values against the wall, they perform quite well. Much better than
+		// the client's default values, at any rate.
+		FetchCooldown:     2 * time.Millisecond,
+		FetchPollInterval: 5 * time.Millisecond,
+
 		Logger: slog.New(slog.NewTextHandler(os.Stdout, &slog.HandlerOptions{Level: slog.LevelWarn})),
 		Queues: map[string]river.QueueConfig{
-			river.QueueDefault: {MaxWorkers: river.QueueNumWorkersMax},
+			// This could probably use more refinement, but in my quick and
+			// dirty tests I found that roughly 1,000 workers performed best.
+			// 500 and 2,000 performed a little more poorly, and jumping up to
+			// 10,000 performed considerably worse (scheduler contention?).
+			// There may be a more optimal number than 1,000, but it seems
+			// close enough to target for now.
+			river.QueueDefault: {MaxWorkers: 1_000},
 		},
 		Workers: workers,
 	})
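
Not part of this change, but since the 500 / 2,000 / 10,000 comparisons described in the comment above presumably meant editing the value and rerunning the benchmark, here is a minimal sketch of how the worker count could be made overridable for that kind of experimentation. The RIVER_BENCH_MAX_WORKERS environment variable and the benchMaxWorkers helper are hypothetical names introduced only for illustration, and the package name is assumed to match the riverbench directory:

package riverbench

import (
	"os"
	"strconv"
)

// benchMaxWorkers returns the MaxWorkers value to benchmark with. It defaults
// to the 1,000 settled on in this change, but can be overridden through the
// hypothetical RIVER_BENCH_MAX_WORKERS environment variable so that values
// like 500, 2,000, or 10,000 can be retried without editing the source.
func benchMaxWorkers() int {
	if s := os.Getenv("RIVER_BENCH_MAX_WORKERS"); s != "" {
		if n, err := strconv.Atoi(s); err == nil && n > 0 {
			return n
		}
	}
	return 1_000
}

The queue configuration would then use river.QueueDefault: {MaxWorkers: benchMaxWorkers()} in place of the hardcoded 1_000.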