Moved benchmarking project to GitHub Actions #2129
Conversation
As discussed with @nirinchev, it would also be possible to make it run on PRs, but there is no way of constraining the trigger on PR labels. We can still do this check inside the workflow, but the workflow will run nevertheless, and we can make it fail or, in the best case, cancel.
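For illustration, a minimal sketch of that in-workflow check, assuming a hypothetical run-benchmarks label (not taken from this PR); a job-level if skips the job (the "cancel" option) rather than failing it:

    # Sketch only: the 'run-benchmarks' label name is hypothetical.
    on:
      pull_request:
        types: [opened, synchronize, labeled]

    jobs:
      benchmark:
        # The workflow still triggers on every PR event, but the job is
        # skipped rather than failed when the label is missing.
        if: contains(github.event.pull_request.labels.*.name, 'run-benchmarks')
        runs-on: windows-latest
        steps:
          - uses: actions/checkout@v2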
.github/workflows/benchmark-test.yml (outdated):

    with:
      tool: 'benchmarkdotnet'
      # Where the output from the benchmark tool is stored
      output-file-path: .\BenchmarkDotNet.Artifacts\results\PerformanceTests.QueryTests-report-full-compressed.json
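The tool: 'benchmarkdotnet' input suggests this step uses the github-action-benchmark action; a sketch of what the complete step might look like under that assumption (the step name, token, and push settings are illustrative, not taken from this PR):

    - name: Store benchmark result
      uses: rhysd/github-action-benchmark@v1  # assumption: github-action-benchmark
      with:
        tool: 'benchmarkdotnet'
        output-file-path: .\BenchmarkDotNet.Artifacts\results\PerformanceTests.QueryTests-report-full-compressed.json
        github-token: ${{ secrets.GITHUB_TOKEN }}
        auto-push: true  # publishes the charts to GitHub Pages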
Should this include the other tests as well?
Yes, it should; I was testing on a subset and forgot :)
Tests/PerformanceTests/Program.cs (outdated), in Main(string[] args):

    .WithArtifactsPath(defaultConfig.ArtifactsPath)
    .AddDiagnoser(MemoryDiagnoser.Default)
    .AddJob(Job.ShortRun.WithToolchain(InProcessEmitToolchain.Instance))
TODO: check if these parameters make sense - I used them to get quick results locally, but maybe on CI we'd want a default/long job. The toolchain is also something I used as a bit of a workaround and we may want to adjust later. The problem I hit was that the weaver didn't appear to run for the generated assembly. I'm guessing there are ways around that, but I didn't feel investigating it was justified. We can keep it for now and revisit in another PR.
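As an illustration of the trade-off mentioned above, a sketch (not this PR's code) that keeps the in-process toolchain workaround but picks a longer job on CI, using the CI environment variable that GitHub Actions sets:

    using System;
    using BenchmarkDotNet.Configs;
    using BenchmarkDotNet.Diagnosers;
    using BenchmarkDotNet.Jobs;
    using BenchmarkDotNet.Toolchains.InProcess.Emit;

    // Sketch only: ShortRun gives quick local feedback, while Job.Default
    // runs enough iterations for more trustworthy CI numbers.
    var onCi = Environment.GetEnvironmentVariable("CI") == "true";
    var job = (onCi ? Job.Default : Job.ShortRun)
        .WithToolchain(InProcessEmitToolchain.Instance); // weaver workaround kept

    var config = ManualConfig.Create(DefaultConfig.Instance)
        .AddDiagnoser(MemoryDiagnoser.Default)
        .AddJob(job);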
@nirinchev Regarding making the graphs for the different test sets, it's not possible at the moment with that action. Only one file can be specified as the benchmark output, and if I try to run that step multiple times with different …
Wouldn't using …
You're correct, I didn't find it in the documentation for …
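The tail of this exchange is cut off in the page, but a likely direction is BenchmarkDotNet's join feature, which merges every benchmark class into a single summary and therefore a single report file; a sketch under that assumption:

    using BenchmarkDotNet.Configs;
    using BenchmarkDotNet.Exporters.Json;
    using BenchmarkDotNet.Running;

    // Sketch only, inside Main(string[] args): JoinSummary (equivalently the
    // --join console argument) produces one joined report instead of one file
    // per benchmark class, so the action's single output-file-path suffices.
    var config = ManualConfig.Create(DefaultConfig.Instance)
        .WithOptions(ConfigOptions.JoinSummary)
        .AddExporter(JsonExporter.FullCompressed);

    BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args, config);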
Looks good; we should remove the Jenkins job at some point, but let's keep it for now to be able to easily compare results.
@nirinchev Regarding the Jenkins job, I removed the exporter for that from the benchmark. Do you think I should put it back to keep both?
Hm, good point - I guess it will now fail, so let's remove it.
Co-authored-by: Nikola Irinchev <[email protected]>
Description
Moved the benchmarking project to GitHub Actions. The workflow takes care of building the wrappers and caches the results to speed up subsequent runs; if the wrappers have been modified, they are rebuilt.
For now the benchmarks run only on Windows, and a series of graphs is published on GitHub Pages.
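A sketch of how the wrapper caching might look in the workflow (the paths, cache key, and build script are assumptions for illustration, not taken from the actual workflow file):

    # Sketch only: 'wrappers' paths and the cache key are assumptions.
    - name: Cache wrappers
      id: cache-wrappers
      uses: actions/cache@v2
      with:
        path: wrappers/build
        key: wrappers-windows-${{ hashFiles('wrappers/**') }}

    # Rebuild only when the wrapper sources changed (cache miss).
    - name: Build wrappers
      if: steps.cache-wrappers.outputs.cache-hit != 'true'
      run: .\wrappers\build.ps1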
Fixes #2119