What is missing?
Currently, if a user deploys something broken that prevents the first pod of a rack from starting, we require the user to manually set the "forceUpgradeRacks" property, which then overrides the current configuration even though the datacenter is not Ready from the previous change.
This process feels like something we could automate. If we're still processing the first pod (but the rest are up), we should allow the user to fix the configuration and then apply it to make the cluster healthy again. In other words, detect when "forceUpgradeRacks" is going to be required and simply apply that behavior in our internal process.
Even better would be to roll back to the previous settings, but we don't want to rewrite user-set CRDs ourselves, as that could interfere with tools like Argo.
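For context, the manual workaround today amounts to adding the affected rack to the CassandraDatacenter's "forceUpgradeRacks" list. Below is a minimal sketch of doing that with controller-runtime and an unstructured object, assuming the cassandra.datastax.com/v1beta1 CassandraDatacenter kind and that the field is a list of rack names under spec; the exact schema should be checked against the operator's API.

```go
// Sketch of the manual workaround: append the stuck rack to
// spec.forceUpgradeRacks on the CassandraDatacenter so the operator
// pushes the new configuration despite the datacenter not being Ready.
// The GVK and field layout are taken from the issue text and may need
// adjusting against the actual cass-operator API.
package example

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func forceUpgradeRack(ctx context.Context, c client.Client, namespace, dcName, rack string) error {
	dc := &unstructured.Unstructured{}
	dc.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "cassandra.datastax.com",
		Version: "v1beta1",
		Kind:    "CassandraDatacenter",
	})
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: dcName}, dc); err != nil {
		return err
	}

	// forceUpgradeRacks is assumed to be a list of rack names; add ours if missing.
	racks, _, err := unstructured.NestedStringSlice(dc.Object, "spec", "forceUpgradeRacks")
	if err != nil {
		return err
	}
	for _, r := range racks {
		if r == rack {
			return nil // already forced
		}
	}
	if err := unstructured.SetNestedStringSlice(dc.Object, append(racks, rack), "spec", "forceUpgradeRacks"); err != nil {
		return err
	}
	return c.Update(ctx, dc)
}
```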
Why is this needed?
Automatic recovery is not easy if it requires the user to modify the CRD and then cass-operator to modify the CRD again so the setting does not apply next time. We want to simplify the user experience by automatically detecting this issue, since these incorrect settings do appear every now and then (user mistakes happen).
┆Issue is synchronized with this Jira Story by Unito
┆Fix Versions: 2024-10
┆Issue Number: CASS-17
"If we're still processing the first pod (but the rest are up)"
Why would we need the rest of the pods to be up?
I'm thinking that if all pods are down then it's ok to apply the upgrade as well. I ran into this case where the serverImage provided was wrong and the image couldn't be pulled, which cannot be fixed unless you force the rack upgrade.
We need to be able to detect what counts as a failed state for a pod in the StatefulSet (it could be pending scheduling, an incorrect image reference, etc.).
If the change has been applied to only a single pod (the other pods are still on the previous revision) and that pod, the last one and the first to be updated, isn't in the "Running" state, then we allow a new update to the StatefulSet in order to fix what was preventing the pod from starting.
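A minimal sketch of that condition, using only the StatefulSet and Pod status fields from the core APIs; the set of "stuck" waiting reasons and how this would be wired into the reconcile loop are assumptions for illustration.

```go
// Sketch of the check described in the comment above: the rolling update has
// only reached one pod and that pod is stuck, so a corrected spec could be
// let through without requiring forceUpgradeRacks.
package recovery

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// stuckWaitingReasons are container states that usually will not resolve
// without a spec change (assumed list, not exhaustive).
var stuckWaitingReasons = map[string]bool{
	"ImagePullBackOff":           true,
	"ErrImagePull":               true,
	"CrashLoopBackOff":           true,
	"CreateContainerConfigError": true,
}

// allowRecoveryUpdate returns true when at most one pod of the rack's
// StatefulSet is on the new revision and that pod is not Running, i.e. the
// situation where forceUpgradeRacks would otherwise be required.
func allowRecoveryUpdate(sts *appsv1.StatefulSet, pods []corev1.Pod) bool {
	// No revision change in flight: nothing to recover from.
	if sts.Status.UpdateRevision == "" || sts.Status.UpdateRevision == sts.Status.CurrentRevision {
		return false
	}
	// More than one pod already updated: the broken change has spread, so
	// fall back to the existing manual forceUpgradeRacks path.
	if sts.Status.UpdatedReplicas > 1 {
		return false
	}
	for i := range pods {
		pod := &pods[i]
		// Only consider pods already on the new revision (StatefulSets update
		// from the highest ordinal down, so this is the "last" pod of the rack).
		if pod.Labels[appsv1.ControllerRevisionHashLabelKey] != sts.Status.UpdateRevision {
			continue
		}
		if pod.Status.Phase == corev1.PodPending {
			return true // e.g. unschedulable or the image cannot be pulled
		}
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil && stuckWaitingReasons[cs.State.Waiting.Reason] {
				return true
			}
		}
	}
	return false
}
```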