Replies: 6 comments
-
It would be a bit constraining to make people keep things in the same git repo for this to work -- on the one hand, just because A and B are in the same repo doesn't mean the dependency works that way (you might care only that A exists before B, and not care whether they update in a different order); and on the other, all else being equal, it's entirely possible that A and B live in different git repos. Having said that, I've tried before to come up with a design for supporting the more general case, and got stuck. You have to have some link between the versions being synced -- being the same revision of the git repo works for some situations, but if it's not that, what is it? It has to be stated explicitly somewhere, even if it's based on a convention (like being the same semver -- maybe matching semver would be enough).
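As a rough illustration only (not an actual design), the "same semver" convention could be as simple as treating a dependency as in sync when both ends advertise the same version. The sketch below assumes the Masterminds semver library and uses made-up function and parameter names:

```go
package main

import (
	"fmt"

	"github.com/Masterminds/semver/v3"
)

// dependencyInSync reports whether the dependency's published version matches
// the version the dependent was synced against -- i.e. the "same semver"
// convention mentioned above. Both names are hypothetical.
func dependencyInSync(dependencyVersion, dependentVersion string) (bool, error) {
	dep, err := semver.NewVersion(dependencyVersion)
	if err != nil {
		return false, fmt.Errorf("parsing dependency version: %w", err)
	}
	want, err := semver.NewVersion(dependentVersion)
	if err != nil {
		return false, fmt.Errorf("parsing dependent version: %w", err)
	}
	// The only link between the two syncs is that they carry an identical version.
	return dep.Equal(want), nil
}

func main() {
	ok, _ := dependencyInSync("1.4.0", "1.4.0")
	fmt.Println(ok) // true: safe to reconcile the dependent
}
```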
-
Please see:
-
We wrote a controller that deployed Helm charts and ran into this same issue. Our really hacky workaround was to sleep for a few seconds before checking a dependency. Dependency checks also involved comparing the state of the resource against the Helm release record, to ensure that the dependency was actually ready and that the release version was not outdated relative to its HelmRelease equivalent. For example:
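(The original snippet isn't reproduced here; the following is only a hypothetical Go sketch of that kind of check, with all type and field names invented.)

```go
package main

import (
	"fmt"
	"time"
)

// ReleaseRecord is a stand-in for the Helm release record stored in the cluster.
type ReleaseRecord struct {
	ChartVersion string
	Deployed     bool
}

// DesiredRelease is a stand-in for the HelmRelease-equivalent spec.
type DesiredRelease struct {
	ChartVersion string
}

// dependencyReady sleeps briefly (the hacky part), then checks that the
// release is deployed and that its chart version matches the desired one,
// i.e. the dependency is not merely "ready" but also not outdated.
func dependencyReady(record ReleaseRecord, desired DesiredRelease) bool {
	time.Sleep(5 * time.Second) // crude guard against racing a just-updated spec
	if !record.Deployed {
		return false
	}
	return record.ChartVersion == desired.ChartVersion
}

func main() {
	rec := ReleaseRecord{ChartVersion: "2.3.1", Deployed: true}
	want := DesiredRelease{ChartVersion: "2.3.1"}
	fmt.Println(dependencyReady(rec, want)) // true only if deployed and up to date
}
```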
-
We have an upgrade scenario with this issue. Any further thoughts? What would happen if the update changed the probes such that the dependency no longer appeared healthy until the upgrade happened? Effectively changing the probe from "are you there" to "are you running V2"?
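For illustration only, such a probe could be backed by an endpoint that refuses to report ready until the expected version is actually running; the path, environment variable, and version string below are assumptions, not anything Flux-specific:

```go
package main

import (
	"net/http"
	"os"
)

func main() {
	// In a real deployment the running version would come from the build
	// (e.g. injected via ldflags); an env var keeps the sketch simple.
	runningVersion := os.Getenv("APP_VERSION")
	requiredVersion := "v2" // what the dependent considers "healthy"

	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if runningVersion != requiredVersion {
			// "Are you running V2?" -- not ready until the upgrade has landed.
			http.Error(w, "still on "+runningVersion, http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	_ = http.ListenAndServe(":8080", nil)
}
```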
-
Why do we need this at all? It seems that the main problem is that Flux would crash on an error and stop reconciliation.
-
I'm encountering this problem after setting up pre/post jobs as described in the docs: https://fluxcd.io/flux/use-cases/running-jobs/. Are there any known workarounds? I currently have it so the "deploy" pods themselves check whether the jobs have run (e.g. wait until the database is fully migrated before becoming ready), but it would be nicer if the rollouts didn't even start until the jobs were finished. Even just being able to tell the controller to wait 30s before checking its dependencies would be enough (yeah, it's hacky, but it would be a decent workaround).
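As an illustration of that pattern (the table name, DSN, driver, and target version are all assumptions on my part), the deploy pod could block at startup until the migration job has caught up:

```go
package main

import (
	"database/sql"
	"log"
	"os"
	"time"

	_ "github.com/lib/pq" // Postgres driver; assumed, not from the original post
)

const requiredSchemaVersion = 42 // the schema version the new app code needs

func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Poll until the pre-deploy migration job has brought the schema up to date,
	// so the pod only becomes ready once the dependency has really finished.
	for {
		var v int
		err := db.QueryRow(`SELECT max(version) FROM schema_migrations`).Scan(&v)
		if err == nil && v >= requiredSchemaVersion {
			break
		}
		log.Printf("waiting for migrations (current=%d, err=%v)", v, err)
		time.Sleep(5 * time.Second)
	}

	log.Println("schema is current; starting the application")
	// ... start the real workload here ...
}
```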
-
The dependsOn feature in kustomize-controller and helm-controller works well for initialization: since the dependencies won't yet exist, the dependency wait happens as expected. However, for update scenarios the dependencies will already exist and be in a ready / latest-generation-observed state. Since Kubernetes only allows updating a single resource at a time, and the updates are not dependency-ordered by the kustomize-controller, the reconciliation of the updates will accordingly not be dependency ordered.

For the kustomize-controller I can imagine a solution where the dependency check, in addition to checking readiness / latest generation observed of dependencies, also checks that if a dependency has the same sourceRef, it also has the same status.appliedSourceRevision that is about to be applied to the dependent Kustomization.

For the helm-controller, the dependency check could use the labels added by the kustomize-controller to each resource it applies: if the dependency HelmRelease has the same Kustomization namespace/name labels as the dependent HelmRelease, then it should also have the same Kustomization checksum label. Another option for the user would be to put their HelmReleases in different Kustomizations and rely on the dependsOn of those.