Add benchmarking suite with trackable performance #147

Merged (17 commits, Dec 2, 2024)
60 changes: 60 additions & 0 deletions .github/workflows/benchmarks.yml
@@ -0,0 +1,60 @@
name: Performance Regression Tests
on:
  push:
    branches:
      - main

permissions:
  contents: write
  deployments: write

jobs:
  benchmark:
    name: Run julia benchmark example
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        version:
          - '1.10'
        os:
          - ubuntu-latest
        arch:
          - x64
    steps:
      - uses: actions/checkout@v4
      - uses: julia-actions/setup-julia@v1
        with:
          version: ${{ matrix.version }}
          arch: ${{ matrix.arch }}
      - uses: actions/cache@v4
        env:
          cache-name: cache-artifacts
        with:
          path: ~/.julia/artifacts
          key: ${{ runner.os }}-test-${{ env.cache-name }}-${{ hashFiles('**/Project.toml') }}
          restore-keys: |
            ${{ runner.os }}-test-${{ env.cache-name }}-
            ${{ runner.os }}-test-
            ${{ runner.os }}-
      - name: Run benchmark
        run: |
          julia --project=GalerkinToolkitExamples -e '
            using Pkg
            Pkg.develop(path=".")
            include("GalerkinToolkitExamples/benchmarks/run_benchmarks.jl")'

      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Julia benchmark result
          tool: 'julia'
          output-file-path: output.json
          # Use personal access token instead of GITHUB_TOKEN due to https://github.community/t/github-action-not-triggering-gh-pages-upon-push/16096
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
          # Show alert with commit comment on detecting possible performance regression
          alert-threshold: '200%'
          comment-on-alert: true
          fail-on-alert: true
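The `Run benchmark` step above can also be reproduced locally. A sketch of the equivalent commands, assuming Julia 1.10+ is on the PATH and they are run from the repository root:

```shell
# Reproduce the CI benchmark step locally (assumes Julia 1.10+ on PATH,
# run from the repository root of GalerkinToolkit.jl).
julia --project=GalerkinToolkitExamples -e '
    using Pkg
    Pkg.develop(path=".")
    include("GalerkinToolkitExamples/benchmarks/run_benchmarks.jl")'
# The script writes its results to output.json in the working directory,
# the same file the github-action-benchmark step consumes in CI.
```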
1 change: 1 addition & 0 deletions GalerkinToolkitExamples/Project.toml
@@ -20,6 +20,7 @@ SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
WriteVTK = "64499a7a-5c06-52f2-abe2-ccb03c286192"
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"

[extras]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
41 changes: 41 additions & 0 deletions GalerkinToolkitExamples/benchmarks/run_benchmarks.jl
@@ -0,0 +1,41 @@
module GalerkinToolkitBenchmarkTests

using BenchmarkTools

import GalerkinToolkit as GT
using GalerkinToolkitExamples: Poisson


"""
Run the hand-written Poisson example code for a 3D mesh
of dimensions n x n x n.
"""
function handwritten_poisson(n)
    mesh = GT.cartesian_mesh((0,2,0,2,0,2), (n,n,n))

    params = Dict{Symbol,Any}()
    params[:implementation] = :hand_written
    params[:mesh] = mesh
    params[:dirichlet_tags] = ["1-face-1","1-face-2","1-face-3","1-face-4"]
    params[:discretization_method] = :continuous_galerkin
    params[:dirichlet_method] = :strong
    params[:integration_degree] = 2

    Poisson.main(params)
end


# Build a benchmark suite for the Poisson example
suite = BenchmarkGroup()
suite["poisson-hand"] = BenchmarkGroup(["Poisson", "handwritten", "3D"])
suite["poisson-hand"]["n=10"] = @benchmarkable handwritten_poisson(10)

# Run all benchmarks
tune!(suite)
results = run(suite, verbose = true)

# Save benchmark results for tracking and visualization
BenchmarkTools.save("output.json", median(results))

end # module
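Because `BenchmarkTools.save` writes the median estimates to `output.json`, two local runs can also be compared directly with `BenchmarkTools.judge`. A sketch, where the file names `before.json` and `after.json` are hypothetical copies of `output.json` from two different commits:

```julia
using BenchmarkTools

# BenchmarkTools.load returns a vector of the saved objects; each file here
# holds one BenchmarkGroup of median TrialEstimates.
before = BenchmarkTools.load("before.json")[1]  # hypothetical baseline results
after  = BenchmarkTools.load("after.json")[1]   # hypothetical candidate results

# judge compares each benchmark pairwise and classifies the change as
# :improvement, :regression, or :invariant using the default tolerances.
comparison = judge(after, before)
println(comparison)
```

This mirrors what the GitHub Action does per commit, but without needing the `gh-pages` history.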
7 changes: 7 additions & 0 deletions docs/src/developers_guide.md
@@ -1,3 +1,10 @@
# Developers guide

## Performance Benchmarks
There is a benchmark suite defined in `GalerkinToolkitExamples/benchmarks`. This uses [BenchmarkTools.jl](https://github.com/JuliaCI/BenchmarkTools.jl) to perform the timings
and [github-action-benchmark](https://github.com/benchmark-action/github-action-benchmark) to collect the results and store them in the `gh-pages` branch. Graphs of performance
changes over time (per commit hash) can be viewed here: <https://galerkintoolkit.github.io/GalerkinToolkit.jl/dev/bench/>.

The GitHub Action can be configured (in `.github/workflows/benchmarks.yml`) to fail when a performance change exceeds a given threshold. Look for the `alert-threshold:` and `fail-on-alert:` keys.

More benchmarks can be added (or existing ones modified) in `GalerkinToolkitExamples/benchmarks/run_benchmarks.jl`.
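For instance, a new case could be registered in `run_benchmarks.jl` alongside the existing one. A sketch, where the `n=20` label is illustrative and not part of the current suite:

```julia
# Inside run_benchmarks.jl, after the existing suite entries are created:
# register a larger problem size under the same benchmark group.
suite["poisson-hand"]["n=20"] = @benchmarkable handwritten_poisson(20)
```

Entries added this way are picked up automatically by `tune!(suite)` and `run(suite)`, and their medians end up in `output.json` with the rest.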