When rolling out a new controller version, it should not trigger a rolling update #240
Comments
/assign
For the short term, let's patch the current release with a special fix for the new field we added, NetworkConfig. Basically, if NetworkConfig is set to its default values, set it to nil in the controller when calculating the hash; this should avoid changing the hash value when the new controller is rolled out.
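A rough sketch of that idea, using simplified stand-in types (the real NetworkConfig type and hashing code live in the lws repo and may differ; SubdomainShared as the default value is an assumption taken from this thread): treat a NetworkConfig that only holds default values as nil before hashing, so the upgraded controller reproduces the old hash.

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the LWS API types; not the real definitions.
type SubdomainPolicy string

const SubdomainShared SubdomainPolicy = "Shared" // assumed default value

type NetworkConfig struct {
	SubdomainPolicy *SubdomainPolicy `json:"subdomainPolicy,omitempty"`
}

type LeaderWorkerTemplate struct {
	Size          int            `json:"size"`
	NetworkConfig *NetworkConfig `json:"networkConfig,omitempty"`
	// leader/worker pod templates elided
}

// normalizeForHash drops a NetworkConfig that only carries default values,
// so the hash matches what the older controller (which had no such field)
// would have produced for the same object.
func normalizeForHash(t LeaderWorkerTemplate) LeaderWorkerTemplate {
	if t.NetworkConfig != nil &&
		(t.NetworkConfig.SubdomainPolicy == nil ||
			*t.NetworkConfig.SubdomainPolicy == SubdomainShared) {
		t.NetworkConfig = nil
	}
	return t
}

func templateHash(t LeaderWorkerTemplate) string {
	b, _ := json.Marshal(normalizeForHash(t))
	return fmt.Sprintf("%x", sha256.Sum256(b))
}

func main() {
	shared := SubdomainShared
	oldTmpl := LeaderWorkerTemplate{Size: 4} // hashed by the older controller
	newTmpl := LeaderWorkerTemplate{Size: 4, // same object after defaulting
		NetworkConfig: &NetworkConfig{SubdomainPolicy: &shared}}
	fmt.Println(templateHash(oldTmpl) == templateHash(newTmpl)) // true: hash is stable
}
```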
Yep, this could be a short-term fix.
For the short-term fix, don't we already exclude NetworkConfig if it is nil (https://github.com/kubernetes-sigs/lws/blob/main/pkg/utils/utils.go#L44)? @ahg-g
The issue is that the NetworkConfig will not be nil since it gets defaulted, which means the hash for an existing deployment will differ from the one previously generated by the older operator version and stored as an annotation.
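For context, the rollout decision boils down to comparing the stored hash with a freshly computed one; the snippet below only illustrates that failure mode (the annotation key and helper name are made up, not the controller's actual code):

```go
package main

import "fmt"

// Hypothetical annotation key, for illustration only.
const templateHashAnnotation = "example.x-k8s.io/template-hash"

// needsRollingUpdate mirrors the described behavior: any difference between
// the stored hash and the recomputed one triggers a rollout.
func needsRollingUpdate(annotations map[string]string, currentHash string) bool {
	return annotations[templateHashAnnotation] != currentHash
}

func main() {
	sts := map[string]string{templateHashAnnotation: "abc123"} // written by the old controller

	fmt.Println(needsRollingUpdate(sts, "abc123")) // false: nothing changed
	fmt.Println(needsRollingUpdate(sts, "def456")) // true: defaulted NetworkConfig changed the hash
}
```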
I thought it isn't defaulted when upgrading the LWS controller #197 (comment), and that's why we added the nil check? |
It is getting defaulted:
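Roughly, the defaulting in question behaves like the sketch below (field and constant names are assumptions based on this thread, not copied from the lws source): any create or update that reaches the webhook fills in NetworkConfig.

```go
package main

import "fmt"

type SubdomainPolicy string

const SubdomainShared SubdomainPolicy = "Shared" // assumed default

type NetworkConfig struct {
	SubdomainPolicy *SubdomainPolicy
}

type LeaderWorkerSetSpec struct {
	NetworkConfig *NetworkConfig
}

// setDefaults fills in NetworkConfig the way a defaulting webhook would,
// which is why touching an existing object after the upgrade changes the
// spec (and therefore its template hash).
func setDefaults(spec *LeaderWorkerSetSpec) {
	if spec.NetworkConfig == nil {
		shared := SubdomainShared
		spec.NetworkConfig = &NetworkConfig{SubdomainPolicy: &shared}
	}
}

func main() {
	spec := LeaderWorkerSetSpec{} // created under the older controller, field unset
	setDefaults(&spec)
	fmt.Println(*spec.NetworkConfig.SubdomainPolicy) // Shared
}
```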
But can you test that on a real cluster?
I tested upgrading from 0.2.0 to 0.4.1, and the upgrade did not trigger a rolling update of the StatefulSets (so the template hash was not updated).
Check if the NetworkConfig gets defaulted; if not, try making an update to the LWS object that doesn't itself warrant a rolling update, so that the defaulting function gets called.
Upgraded from 0.2.0 to 0.4.1: NetworkConfig does not get defaulted. Updated the LWS object by removing a metadata label to trigger defaulting; NetworkConfig then gets defaulted and a rolling update is triggered.
What happened:
In the rolling-update triggering logic, we should store the template somewhere else, compare only the relevant fields against the original one, and trigger a rolling update only if those fields were updated. This would mitigate the issue where adding a new API field causes a rolling update when the new controller version is rolled out.
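A purely illustrative sketch of that proposal (the field names are made-up stand-ins, not the LWS API): keep the previously applied template and trigger a rolling update only when a field that actually affects the pods has changed, so a field that is merely added and defaulted by a newer controller is ignored.

```go
package main

import (
	"fmt"
	"reflect"
)

// TemplateSnapshot holds only the fields whose changes should force a
// rollout; newly added, defaulted API fields are deliberately excluded.
type TemplateSnapshot struct {
	LeaderImage string
	WorkerImage string
	WorkerSize  int
}

// needsRollingUpdate compares the stored snapshot with the current one
// field by field instead of comparing opaque hashes.
func needsRollingUpdate(stored, current TemplateSnapshot) bool {
	return !reflect.DeepEqual(stored, current)
}

func main() {
	stored := TemplateSnapshot{LeaderImage: "app:v1", WorkerImage: "app:v1", WorkerSize: 3}

	current := stored // controller upgraded, user-facing fields unchanged
	fmt.Println(needsRollingUpdate(stored, current)) // false: no rollout on controller upgrade

	current.WorkerImage = "app:v2" // a real template change
	fmt.Println(needsRollingUpdate(stored, current)) // true: rollout
}
```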
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- LWS version (use `git describe --tags --dirty --always`):
- OS (e.g., `cat /etc/os-release`):
- Kernel (e.g., `uname -a`):