Hello there,

I think it would be worthwhile to separate the examples into two categories. The second of these would be benchmarks where the difference is minimal, depends heavily on the specific Ruby implementation and version, and where the slow and fast variants might switch regularly.
I think this second category deserves a clear warning that those results were measured on a particular version of CRuby, may no longer apply, and likely do not apply to other Ruby implementations.
For fun, @gogainda ran these benchmarks on TruffleRuby at https://github.com/gogainda/fast-truffleruby
From a quick look, many of the differences seen on MRI don't exist on TruffleRuby (e.g., Sequential vs. Parallel Assignment).
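As an illustration of this kind of comparison, here is a minimal sketch of the sequential-vs-parallel-assignment microbenchmark using only the stdlib Benchmark module (the fast-ruby benchmarks themselves use benchmark-ips; the numbers, and even which variant wins, will vary by Ruby implementation and version):

```ruby
require "benchmark"

N = 1_000_000

# Time N iterations of sequential assignment (a = 1; b = 2)
seq = Benchmark.realtime do
  N.times { a = 1; b = 2; [a, b] }
end

# Time N iterations of parallel assignment (a, b = 1, 2)
par = Benchmark.realtime do
  N.times { a, b = 1, 2; [a, b] }
end

puts format("sequential: %.4fs, parallel: %.4fs", seq, par)
```

On an optimizing implementation like TruffleRuby, both loops may be compiled down to the point where the difference disappears entirely.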
Also, many of these microbenchmarks optimize away entirely (>1 billion i/s); in other words, doing that operation alone costs basically nothing, likely under 10 CPU cycles. I read this as a useful word of caution about microbenchmarks: they might test something real code never would, and might show differences that don't matter in practice.
In general, I'd recommend benchmarking in the setup of your app or program, on the machine where the performance will matter. For example, a variant might be 25% faster in a microbenchmark yet yield a 0% speedup on the full app, and therefore be of limited value.
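A quick back-of-the-envelope sketch of why a microbenchmark win can vanish at the app level (the function name and numbers here are illustrative, not from any benchmark; this is just Amdahl's law applied to a local speedup):

```ruby
# Overall speedup when an operation accounting for `fraction` of total
# runtime gets `local_speedup` times faster (Amdahl's law).
def overall_speedup(fraction, local_speedup)
  1.0 / ((1.0 - fraction) + fraction / local_speedup)
end

# Operation is 25% faster in isolation (1.25x), but only 1% of app runtime:
puts overall_speedup(0.01, 1.25)  # ≈ 1.002, i.e. about 0.2% faster overall
```

So unless the operation dominates the profile, the microbenchmark difference is noise at the application level.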