Describe your changes
This is a second attempt at adding a test coverage report, follow up to #225
The problem with the previous report is that it relied on a bot commenting on PRs; however, permissions for this are disabled on project-kessel repos. Instead, the comment step is skipped and a markdown summary report is added to the build and test job. An example in my fork here.
I experimented with several similar GitHub Actions; unfortunately, the vast majority require write access to the repo in order to surface stats. This approach was the best low-impact one I could come up with.
The HTML report is not used by the CI job and is only included for local testing convenience.
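For illustration, here is a minimal sketch of how a job can surface coverage without repo write access, using the `$GITHUB_STEP_SUMMARY` file that GitHub Actions provides to every step. The step names and the `go tool cover` invocation are assumptions for the example; the actual workflow in this PR may use different tooling:

```yaml
# Hypothetical sketch only -- markdown appended to $GITHUB_STEP_SUMMARY
# renders on the job's summary page, so no bot comment (and no write
# permission on the repo) is needed.
- name: Test with coverage
  run: go test ./... -coverprofile=coverage.out

- name: Coverage summary
  run: |
    total=$(go tool cover -func=coverage.out | tail -n 1)
    echo "### Test coverage" >> "$GITHUB_STEP_SUMMARY"
    echo "$total" >> "$GITHUB_STEP_SUMMARY"
```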
Important!
This change needs to merge in two stages: first this PR, then a follow-up PR that uncomments and enables the coverage summary report. The second stage will fail if the first stage has not already been merged and run on the main branch, so I will create the second PR after this one merges.
Ticket reference (if applicable)
Fixes #RHCLOUD-35837
Checklist
Are the agreed upon acceptance criteria fulfilled?
Was the 4-eye-principle applied? (async PR review, pairing, ensembling)
Do your changes have passing automated tests and sufficient observability?
Are the work steps you introduced repeatable by others, either through automation or documentation?
The changes were automatically built, tested, and, if needed behind a feature flag, deployed to our production environment. (Please check this once the new deployment is done and you have verified it.)
Are the agreed upon coding/architectural practices applied?
Are security needs fulfilled? (e.g. no internal URLs)
Is the corresponding ticket in the right state? (It should be in "review" now; move it to "done" when this change reaches production.)
For changes to the public API / code dependencies: was the whole team (or a sufficient number of people) able to review?