Add basic performance testing to the gate #1450
Just a side note: I think this is both a great initiative 👍 and, at the same time, a tricky task, as Travis instances are known for rather wild fluctuations in throughput. (See also travis-ci/travis-ci#352 -- not sure whether the problem is still relevant to the same extent today, as the thread is a bit old; the discussion there has some interesting ideas, btw.)
That's true. We'd probably have to spin up our own boxes.
@njsmith has kindly pointed me to this article: https://pythonspeed.com/articles/consistent-benchmarking-in-ci/. We could establish a handful of basic metrics with relatively high precision and add one or more CI gates to track them, using sheer instruction counts as reported by the tooling described in the article.

We could probably start with CPython and CPython+Cython checks. The PyPy JIT was not fluctuating as much as I feared either, but our micro-optimizations are largely geared towards CPython anyway, so PyPy checks could be added as a further improvement.
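A minimal sketch of what such a CI gate could look like, independent of how the metric is collected (the metric name, baseline file, and tolerance below are illustrative assumptions, not part of any existing setup):

```python
# Hypothetical CI gate: compare a measured metric (e.g. an instruction or
# call count) against a baseline committed to the repository, and fail the
# build if it regresses beyond a small tolerance.
import json

TOLERANCE = 0.02  # fail if the metric regresses by more than 2%


def check_against_baseline(metric_name, measured, baseline_path="perf_baseline.json"):
    """Raise SystemExit if `measured` exceeds the stored baseline by more
    than TOLERANCE. Deterministic metrics make a tight tolerance feasible
    even on noisy shared CI machines."""
    with open(baseline_path) as f:
        baseline = json.load(f)
    allowed = baseline[metric_name] * (1 + TOLERANCE)
    if measured > allowed:
        raise SystemExit(
            f"Performance regression: {metric_name}={measured} exceeds "
            f"baseline {baseline[metric_name]} by more than {TOLERANCE:.0%}"
        )
```

The baseline file would be updated deliberately (e.g. in the same PR that knowingly changes performance), so an unexpected regression always fails the gate.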
The sqlalchemy test suite contains something along these lines, focused only on function call counts. I can point to the relevant bits if that would be useful.
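For illustration, a call-count assertion in that spirit might look like the sketch below (the workload and the ceiling are made-up examples, not taken from SQLAlchemy's suite):

```python
# Sketch of a call-count gate: profile a workload with cProfile and assert
# the total number of Python function calls stays under a fixed ceiling.
# Unlike wall-clock timings, call counts do not fluctuate with CI load.
import cProfile
import pstats


def count_calls(func, *args, **kwargs):
    """Return the total number of function calls made while running func."""
    profiler = cProfile.Profile()
    profiler.runcall(func, *args, **kwargs)
    return pstats.Stats(profiler).total_calls


def test_workload_stays_lean():
    # Hypothetical workload; in practice this would exercise a hot code path.
    calls = count_calls(sorted, range(1000))
    assert calls < 50, f"call count regressed: {calls}"
```

A regression that adds extra function calls on the hot path then fails deterministically, without needing dedicated benchmark hardware.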
Create a basic performance test based on historical trends and add it as a Travis job. We should consider failing the build in the face of a performance regression.
Our first pass at this can be very basic; we can always improve it over time.