
Add benchmarking suite with trackable performance #147

Merged

17 commits merged into main on Dec 2, 2024

Conversation

@raar1 (Collaborator) commented Dec 1, 2024

Fixes #148

This PR adds a benchmarking suite, with plots of performance changes over time (per commit hash) stored in the gh-pages branch using the https://github.com/benchmark-action/github-action-benchmark GitHub Action.
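For context, the workflow is roughly shaped like this (a sketch only: the file name, job layout, and benchmark script path below are illustrative assumptions, not the exact files in this PR; the action name and its inputs come from the github-action-benchmark docs):

```yaml
# Hypothetical .github/workflows/benchmark.yml (illustrative sketch)
name: Benchmarks
on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v2
      - name: Run benchmarks
        # Assumed script location; in this PR the suite lives in GalerkinToolkitExamples
        run: julia --project=GalerkinToolkitExamples GalerkinToolkitExamples/benchmarks/run.jl
      - name: Store benchmark results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: GalerkinToolkit benchmarks
          tool: 'julia'                 # parser for BenchmarkTools JSON output
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true               # push the updated data to the gh-pages branch
```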

Currently, the benchmarks live in their own directory in GalerkinToolkitExamples, so that we can benchmark example code (such as the hand-written Poisson example), which we will optimize first.

However, we may later choose to split off the benchmarks into their own subpackage if that is cleaner (or if we have benchmarks that are not related to the examples). We can decide together what we want to be in there.
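For reference, a minimal sketch of the shape such a suite can take with BenchmarkTools.jl (the file path and the `poisson_hand` stand-in below are hypothetical, not the actual example code):

```julia
# Hypothetical benchmarks/run.jl — a sketch of the suite's general shape
using BenchmarkTools

# Stand-in for the real hand-written Poisson example entry point
poisson_hand(n) = sum(abs2, rand(n, n))  # placeholder workload

suite = BenchmarkGroup()
suite["poisson-hand/n=10"] = @benchmarkable poisson_hand(10)

tune!(suite)
results = run(suite; verbose = true)

# github-action-benchmark's `tool: 'julia'` reads BenchmarkTools JSON output
BenchmarkTools.save("output.json", median(results))
```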

The benchmark results are collected and stored in the gh-pages branch, in dev/bench/. You can see the generated graphs of performance changes here: https://galerkintoolkit.github.io/GalerkinToolkit.jl/dev/bench/

You might need to scroll down a bit to see the poisson-hand/n=10 benchmark, which has more data points; the others were earlier tests and have only one data point each.

I have added some developer documentation about the benchmarking.

@raar1 changed the title from “Add benchmarking suite for the handwritten poisson example” to “Add benchmarking suite with trackable performance” Dec 1, 2024
@raar1 marked this pull request as ready for review December 1, 2024 23:50
@raar1 requested a review from fverdugo December 1, 2024 23:50
@fverdugo (Collaborator) commented Dec 2, 2024

Thanks @raar1. It looks good! Two minor questions:

  • Is there a way of removing "old benchmarks"?
  • Is there a way of using seconds instead of nanoseconds on the y-axis?

@raar1 (Collaborator, Author) commented Dec 2, 2024

@fverdugo

Yes, good questions; I wondered about these too. It seems that the way to remove old benchmarks currently is simply to delete those entries from the latest dev/bench/data.js; after any subsequent benchmark runs, those plots will be gone. (I tested this just now, so only the plot for one benchmark remains.) Perhaps it's not optimal, but there doesn't appear to be another way yet, though the action seems to be under active development.

Regarding the units, this is a bit tricky. BenchmarkTools.jl always stores times in ns internally, and when you save the results this is the unit that is used (it does not appear to be configurable). The charts built by the GitHub action then simply plot whatever units are in the results JSON. I think they handle units for some other languages, but the Julia handler doesn't seem to do it. We could perhaps open an issue on their repo?
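To illustrate the point (a minimal example, not code from this PR):

```julia
using BenchmarkTools

t = @benchmark sum(rand(1_000_000))

# Estimates are always reported in nanoseconds; there is no option to change the stored unit.
time(median(t))  # e.g. ~1.0e6 (ns), even though ~1 ms would read more naturally
```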

@fverdugo (Collaborator) left a comment

Thanks @raar1 !

Let's merge this first version and improve later if needed.

@fverdugo merged commit c6fa31e into main Dec 2, 2024
4 checks passed
@fverdugo (Collaborator) commented Dec 2, 2024

@raar1 For some reason I can no longer see the performance plots at

https://galerkintoolkit.github.io/GalerkinToolkit.jl/dev/bench/

@raar1 (Collaborator, Author) commented Dec 2, 2024

@fverdugo Ah... this was my worry before. It seems that building the docs (which only happens on merge) erases the bench directory (probably via a force-push or similar).

Let me look into whether we can use another branch. Alternatively, the action can be configured to use another repository, so at worst we could create a performance_results repository to store these results and link to the graphs from the documentation as before.

First I will check whether we can just use another branch in the same repo, or whether gh-pages is hardcoded for some reason.

Seems others have had the same problem: benchmark-action/github-action-benchmark#85

@raar1 (Collaborator, Author) commented Dec 2, 2024

OK. So in the documentation I see:

gh-repository

  • Type: String

URL of an optional different repository to store benchmark results (e.g. github.com/benchmark-action/github-action-benchmark-results).

NOTE: if you want to auto-push to a different repository you need to use a separate Personal Access Token that has write access to the specified repository. If you are not using the auto-push option, you can avoid passing the gh-token when your data repository is public.

benchmark-data-dir-path (Required)

  • Type: String
  • Default: "dev/bench"

Path to a directory that contains benchmark files on the GitHub Pages branch. For example, when this value is set to "path/to/bench", https://you.github.io/repo-name/path/to/bench will be available as the benchmarks dashboard page. If it does not contain index.html, this action automatically generates it at the first run. The path can be relative to the repository root.

So it looks like the current approach for dealing with this is to configure the benchmark results to be written either to the docs/ directory on the main branch (e.g. docs/bench) or to their own top-level directory (e.g. bench/). Do you have a preference @fverdugo ?

@fverdugo (Collaborator) commented Dec 2, 2024

docs/bench on the main branch would be my preference, but is this going to work? Will the CI be triggered again when the benchmark action pushes to main? I am afraid of some kind of infinite loop.

@raar1 (Collaborator, Author) commented Dec 2, 2024

Yes, it's indeed a bit unclean. I propose we instead create a benchmarks branch (still in the GalerkinToolkit repo) and configure all benchmark results to go there.

I can do that by configuring the action's gh-pages-branch input (with benchmark-data-dir-path controlling the directory within that branch).
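Concretely, the relevant action inputs would look something like this (branch and directory names follow the proposal above and are not final):

```yaml
- uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'julia'
    output-file-path: output.json
    gh-pages-branch: benchmarks      # documented input; defaults to "gh-pages"
    benchmark-data-dir-path: bench   # assumed directory name within that branch
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: true
```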

@Cmurilochem mentioned this pull request Dec 6, 2024
Successfully merging this pull request may close: Add performance tests with (visual) tracking (#148)