Replies: 2 comments 2 replies
-
Yes, benchmarks would certainly be nice. To focus the benchmarks, I would suggest we start by collecting some typical HTML documents and putting them into version control as a stable base for comparisons. I am not sure about running them in a noisy environment like GitHub Actions, but just having them in the repository and ready to run should make this step trivial in any case.
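To make the idea concrete, here is a minimal, std-only sketch of what running a checked-in corpus through a timing loop could look like. The `benches/corpus` directory and the `parse` function are assumptions for illustration; the real benchmark would call the crate's actual parsing entry point (and would more likely be built on a proper harness such as Criterion).

```rust
use std::fs;
use std::hint::black_box;
use std::time::Instant;

/// Placeholder for the crate's real parsing entry point; swap in the
/// actual function when wiring this up. Here it just counts `<` bytes.
fn parse(html: &str) -> usize {
    html.bytes().filter(|&b| b == b'<').count()
}

/// Time `iterations` runs of `parse` over one document and return the
/// average duration in nanoseconds.
fn bench_document(html: &str, iterations: u32) -> u128 {
    let start = Instant::now();
    for _ in 0..iterations {
        // `black_box` keeps the optimizer from eliding the call.
        black_box(parse(html));
    }
    start.elapsed().as_nanos() / u128::from(iterations.max(1))
}

fn main() {
    // Hypothetical corpus directory checked into version control.
    let corpus = "benches/corpus";
    match fs::read_dir(corpus) {
        Ok(entries) => {
            for entry in entries.flatten() {
                let html = fs::read_to_string(entry.path()).unwrap_or_default();
                let avg = bench_document(&html, 100);
                println!("{}: {} ns/iter", entry.path().display(), avg);
            }
        }
        Err(_) => eprintln!("corpus directory `{corpus}` not found"),
    }
}
```

Keeping the corpus in the repository means anyone can run this locally and compare numbers against the same inputs, even if CI timings stay too noisy to trust.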
-
To be honest, I think we should start simpler: with micro-benchmarks targeting the algorithmic efficiency of the crate. Large-scale benchmarks involving tasks/threads are significantly harder to make reproducible and consistent.
I am sceptical of making this purely computational code async, as that generally only worsens its throughput. If the parsing takes too long and blocks your worker threads from handling incoming traffic, maybe putting it into something like `tokio::task::spawn_blocking` would help.
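The offloading idea above can be sketched without any async runtime at all: move the CPU-bound parse onto a dedicated thread and hand the result back over a channel. This is a std-only illustration of the pattern (under tokio, `task::spawn_blocking` plays the same role); the `parse` function is a stand-in, not the crate's real API.

```rust
use std::sync::mpsc;
use std::thread;

/// Stand-in for the expensive, purely computational parse.
fn parse(html: String) -> usize {
    html.bytes().filter(|&b| b == b'<').count()
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Offload the CPU-bound parse to a dedicated thread so the caller
    // (e.g. an async worker) stays free to handle incoming traffic.
    let handle = thread::spawn(move || {
        let result = parse("<html><body>hello</body></html>".to_string());
        tx.send(result).expect("receiver dropped");
    });

    // The caller can keep doing other work here, then collect the result.
    let tags = rx.recv().expect("sender dropped");
    handle.join().expect("parser thread panicked");
    println!("counted {tags} opening brackets");
}
```

The point is that the parser itself stays plain synchronous code; only the call site decides whether to move it off the hot path.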
-
@j-mendez and I have had a discussion about what should be done to improve processing speed for the `select` operation.
We think that, as we are aiming for maximum performance, the following could be useful:
@j-mendez also proposed testing some variations of our code he's been using in production since they made a difference for intense workloads.
Also, huge thanks to @adamreichold!