Integration tests seem to be non-deterministic and regularly fail on main #1108
Comments
#1106

#### Description

Pin python kubernetes version to fix recent breakage in jenkins tests. The latest update to the python kubernetes library (v31, 3 days ago) breaks the Jenkins `github-check-merge-juju-python-libjuju` test due to a failure to build a new dependency (durationpy). I thought this might be the fix for issue #1088, but that's been open since August 9.

Should we switch to stricter dependency versioning across the board here to avoid breakages of this nature? In setup.py, 4 dependencies now specify both minimum and maximum versions, 5 only specify a minimum, and 2 have no version specification. In tox.ini, only 1 dependency (kubernetes) specifies a (maximum) version. tox.ini should probably have the same version constraints as setup.py.

#### QA Steps

All tests pass, except for integration tests, which are flaky (see issue #1108).
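As a rough sketch of the stricter pinning discussed above (not python-libjuju's actual constraints; the package names and bounds below are illustrative), a setup.py can give each runtime dependency both a lower and an upper bound, and tox.ini can then mirror the same ranges:

```python
# setup.py -- illustrative sketch only; the packages and version bounds shown
# here are examples of min/max pinning, not python-libjuju's real constraints.
from setuptools import setup, find_packages

setup(
    name="example-project",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # Lower bound for required features, upper bound to avoid surprise
        # breakage from a new major release (e.g. the kubernetes v31 issue).
        "kubernetes>=12.0.1,<31.0.0",
        "pyyaml>=5.1.2,<7.0",
        "websockets>=8.1,<14.0",
    ],
)
```

The trade-off is that upper bounds have to be bumped deliberately, but they stop a new major release (like kubernetes v31) from breaking CI unannounced.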
…remove-eol-schema #1113 Remove `3.2.X` schemas. Delete `_client*.py` and run `make client`.

#### Description

Moving towards the goal of having schemas and generated code in python-libjuju only for the latest supported Juju versions (`3.1.9`, `3.3.6`, `3.5.4`, `3.5.3`), and using client-only schemas (see #1099), this PR removes the schemas for EOL Juju 3.2 and reruns code generation, removing the `_client*.py` files and then running `make client`.

#### QA Steps

CI steps should all continue to pass, except for integration testing, which should continue to fail with the usual suspects (see #1108 for a non-exhaustive table of tests that sometimes fail on `main`).

#### Notes

To hopefully simplify the diffs, this is the first PR in a planned sequence of PRs that will depend on each other. Subsequent PRs will:

1. replace the current schemas with the latest-release client-only schemas (`3.1.9`, `3.3.6`) and regenerate code
2. add a client-only schema for `3.4.5` and regenerate code
3. add a client-only schema for `3.5.3` and regenerate code
I'm running into a new test that fails frequently (or always); maybe it deserves to be added to the list.
commit=50b42d013aee01536416e6334f99443f2b4f1e4c

How many failing tests does each job have? (table not preserved)
How many tests fail once, twice, etc.? (table not preserved)

Previous tables of 30 jobs preserved below for reference: commit=50b42d013aee01536416e6334f99443f2b4f1e4c

How many failing tests does each job have? (table not preserved)
How many tests fail once, twice, etc.? (table not preserved)
Running integration tests serially seems to have helped a lot, but we still get some intermittent failures. For example, the quarantined integration tests now even pass regularly, but just now we had this failure:
https://github.com/juju/python-libjuju/actions/runs/11319443801/job/31475510517?pr=1158
and again in a second run:
https://github.com/juju/python-libjuju/actions/runs/11319443801/job/31476578410
with the third run passing.
EDIT: See my comment below for a table of current test failures on main.
Description
When trying to fix an issue with test failures on PRs (e.g. #1088 identifies a breakage with `/merge`), I had multiple integration test failures from a relatively simple PR against main (#1106, which pins a dependency to the version before a recent release). To troubleshoot this, I made an even simpler PR against main (#1107, which only edits CONTRIBUTORS), and it also has integration test failures.
Here is a table showing the number of times each test failed over 4 runs of the integration tests on these two PRs.
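Not from the original issue, but as a sketch of how such a failure-count table could be produced: if each run is invoked with `pytest --junitxml=reports/run-N.xml` (the `reports/` layout here is an assumption, not part of python-libjuju's CI), the per-test failure counts can be tallied from the XML reports:

```python
# Tally how often each test failed across several pytest runs, assuming each
# run wrote a JUnit XML report into reports/ (hypothetical directory layout).
from collections import Counter
from pathlib import Path
import xml.etree.ElementTree as ET

failures = Counter()
for report in sorted(Path("reports").glob("run-*.xml")):
    root = ET.parse(report).getroot()
    for case in root.iter("testcase"):
        # A <testcase> with a <failure> or <error> child did not pass.
        if case.find("failure") is not None or case.find("error") is not None:
            failures[f"{case.get('classname')}::{case.get('name')}"] += 1

for test, count in failures.most_common():
    print(f"{count}x {test}")
```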
It would be ideal if the tests were deterministic.
Short of fixing the tests themselves, it would be good if the current state of the tests were prominently documented in the contributing guidelines -- which test failures are likely to just be the tests being flaky and should not block a merge. A separate issue can then be opened against that list to fix the individual tests.
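Not part of the original issue, and not python-libjuju's actual test setup, but one way a "known flaky" list could be encoded directly in the test suite is a shared pytest xfail marker; the marker name and example test below are hypothetical:

```python
# Sketch only: quarantine known-flaky tests with xfail(strict=False) so an
# intermittent failure is reported as XFAIL instead of failing the job, while
# a pass still shows up as XPASS and stays visible in the report.
import pytest

flaky_on_main = pytest.mark.xfail(
    reason="intermittent failure tracked in issue #1108",
    strict=False,
)

@flaky_on_main
def test_example_integration_behaviour():
    # Hypothetical placeholder for a flaky integration test.
    assert True
```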
Urgency
Blocker for our release
Python-libjuju version
main?
Juju version
whatever version the GitHub workflow installs (the Juju version doesn't appear to be printed during test setup)
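As an aside on that last point, a small autouse fixture could surface the Juju version in the logs. This is only a sketch, assuming the `juju` CLI is available on the runner's PATH; it is not something the current workflow does:

```python
# conftest.py sketch -- not part of python-libjuju; one way to record which
# Juju the integration tests ran against, assuming `juju` is on PATH.
import shutil
import subprocess

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_juju_version():
    # Print the client version once per test session so CI logs capture it.
    if shutil.which("juju"):
        result = subprocess.run(
            ["juju", "version"], capture_output=True, text=True, check=False
        )
        print(f"juju version: {result.stdout.strip() or 'unknown'}")
    else:
        print("juju CLI not found on PATH")
    yield
```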
Reproduce / Test
Run integration tests on `main`.