pipelinesNamespace #604
base: master
Conversation
Can you take a look and see how this looks so far, thanks.

Moves the triggers Role out of the deploy script and into the orchestration, which gets applied along with the RoleBinding as it was. Kabanero.Spec.PipelinesNamespace is the desired pipelinesNamespace.
I tried to apply this code to my cluster and am stuck in a loop, where the kabanero controller creates the stacks in my stack index, but then on the next pass of the reconciler it deletes the stacks because the Stack CR instance has no pipelinesNamespace defined. And indeed it does not - but I put a Printf in the code where you add the pipelinesNamespace when the Stack CR instance is created. So, I'm really not sure what is going on here. Maybe I just have the code applied incorrectly to my cluster.
I think that we need to think about what happens when Champ is managing his Stack CR instances manually, and so changes the pipelinesNamespace in the Stack CR instance. I see that when the pipelinesNamespace changes in the Kabanero CR instance, we'll delete any Stack CR instances that have the old pipelinesNamespace, which is fine, but:
- we're not supposed to modify any Stack CR instances (versions) that have desiredState set, because these are manually managed by Champ
- I don't think we have code to handle the case when the namespace in the Stack CR instance changes. I think that's why the kabanero controller is doing a delete/create. I think we're going to have to handle the change case, either with this PR, or immediately after in a follow-on issue/PR.
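One way the change case could be handled is to update the Stack CR in place instead of the delete/create cycle, skipping anything that is manually managed. A minimal sketch using simplified stand-in types - the type names, field names, and `reconcileNamespace` helper are assumptions for illustration, not the real operator API:

```go
package main

import "fmt"

// Simplified stand-ins for the real CRD types (names assumed).
type StackVersion struct {
	Version      string
	DesiredState string // non-empty signals the version is manually managed
}

type Stack struct {
	Name               string
	PipelinesNamespace string
	Versions           []StackVersion
}

// manuallyManaged reports whether any version has desiredState set,
// the clue that Champ is managing this Stack CR himself.
func manuallyManaged(s Stack) bool {
	for _, v := range s.Versions {
		if v.DesiredState != "" {
			return true
		}
	}
	return false
}

// reconcileNamespace moves a stack to the desired pipelinesNamespace in
// place, rather than delete/create, and never touches a managed stack.
func reconcileNamespace(s *Stack, desired string) (changed bool) {
	if manuallyManaged(*s) {
		return false
	}
	if s.PipelinesNamespace != desired {
		s.PipelinesNamespace = desired
		return true
	}
	return false
}

func main() {
	auto := Stack{Name: "auto", PipelinesNamespace: "fred",
		Versions: []StackVersion{{Version: "0.2.3"}}}
	manual := Stack{Name: "manual", PipelinesNamespace: "fred",
		Versions: []StackVersion{{Version: "0.4.1", DesiredState: "active"}}}

	fmt.Println(reconcileNamespace(&auto, "kabanero"), auto.PipelinesNamespace)
	fmt.Println(reconcileNamespace(&manual, "kabanero"), manual.PipelinesNamespace)
}
```

In the real reconciler the in-place change would of course be followed by a client `Update` call rather than a struct mutation.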
Hi Dan - I fixed my dumb mistake and ran with the correct versions of the admission controller webhook and the stack controller, and I didn't get into that loop where the Stacks were cycling anymore.
I think that there are two things we need to fix:
- We can't delete a Stack, even if the pipelinesNamespace is wrong, when one or more of the versions has desiredState set to some non-nil value. There is other code that syncs up the Stack CRs with the contents of the stack hub index yaml, which checks desiredState. Hopefully you can use that as a model for what to do here. The Stacks with the wrong pipelinesNamespace will not work, but we can't delete them; the admin will not be expecting them to be deleted.
- I made a comment about how subsequent runs of the kabanero controller reconciler will delete the roles that let the stack controller create the pipelines in the pipelines namespace. I think we need to figure out another way to do book-keeping of what namespaces have roles/bindings that need to be deleted, and then delete them when we can. What we are doing now causes the current namespace's roles/bindings to be deleted if the reconciler runs again.
Once these are fixed we can get with Steve Kinder and find out how and when he wants to deliver this.
}
}
if canCleanup == true {
I see what you are doing here... I had not considered this before. Yes, we need to leave permissions around long enough for the stack controller to clean up the pipelines, etc., in the old namespace. However, when I tried switching the namespace on my cluster, this did not behave as I expected.

The old pipelinesNamespace was fred, the new was kabanero. First time through, it created stuff in kabanero and I could see things transitioning to kabanero. But then the kabanero controller reconcile was driven again, and this time it deleted all of the roles and bindings from kabanero. I think this code needs to be able to detect that there is cleanup work to do for a namespace, and then when it's done, put a mark on the wall or something so that it doesn't try to do it again.
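One possible shape for that "mark on the wall": record the namespaces that still need role/binding cleanup in an annotation on the Kabanero CR, and erase each entry once the stack controller finishes. This is only a sketch of the book-keeping idea - the annotation key and all helper names are assumptions, not the real operator code:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Hypothetical annotation key holding a comma-separated set of namespaces
// whose roles/bindings still need to be cleaned up.
const pendingCleanupAnnotation = "kabanero.io/pending-pipeline-cleanup"

// Minimal stand-in for the Kabanero CR (real type lives in the operator).
type Kabanero struct {
	Annotations map[string]string
}

// markPendingCleanup notes that oldNamespace still has work to do, so the
// reconciler knows to keep (not delete) the permissions it needs.
func markPendingCleanup(k *Kabanero, oldNamespace string) {
	if k.Annotations == nil {
		k.Annotations = map[string]string{}
	}
	set := splitSet(k.Annotations[pendingCleanupAnnotation])
	set[oldNamespace] = true
	k.Annotations[pendingCleanupAnnotation] = joinSet(set)
}

// completeCleanup erases the mark once cleanup finishes, so a later
// reconcile pass does not delete the current namespace's roles/bindings.
func completeCleanup(k *Kabanero, namespace string) {
	set := splitSet(k.Annotations[pendingCleanupAnnotation])
	delete(set, namespace)
	k.Annotations[pendingCleanupAnnotation] = joinSet(set)
}

func needsCleanup(k *Kabanero, namespace string) bool {
	return splitSet(k.Annotations[pendingCleanupAnnotation])[namespace]
}

func splitSet(s string) map[string]bool {
	set := map[string]bool{}
	for _, n := range strings.Split(s, ",") {
		if n != "" {
			set[n] = true
		}
	}
	return set
}

func joinSet(set map[string]bool) string {
	names := make([]string, 0, len(set))
	for n := range set {
		names = append(names, n)
	}
	sort.Strings(names)
	return strings.Join(names, ",")
}

func main() {
	k := &Kabanero{}
	markPendingCleanup(k, "fred") // namespace switched fred -> kabanero
	fmt.Println(needsCleanup(k, "fred"), needsCleanup(k, "kabanero"))
	completeCleanup(k, "fred")
	fmt.Println(needsCleanup(k, "fred"))
}
```

An annotation keeps the state on the CR itself, so it survives controller restarts; a status field or a finalizer could serve the same purpose.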
pipelinesNamespace := pipelinesNamespace(k)
for _, deployedStack := range deployedStacks.Items {
	if deployedStack.Spec.PipelinesNamespace != pipelinesNamespace {
		err := cl.Delete(ctx, &deployedStack)
Before we delete a stack with the incorrect pipeline namespace, we have to be sure that none of the versions have desiredState set to some value. The desiredState is a clue that the stack is being manually managed. I believe there is other code in here that checks the desiredState when attempting to sync up with the stack hub index yaml.
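The guard could look something like this sketch, which filters the deployed stacks down to the ones that are safe to delete: wrong pipelinesNamespace and no version with desiredState set. Simplified stand-in types again; the names are assumptions for illustration:

```go
package main

import "fmt"

// Simplified stand-ins for the real CRD types (names assumed).
type StackVersion struct {
	Version      string
	DesiredState string
}

type Stack struct {
	Name               string
	PipelinesNamespace string
	Versions           []StackVersion
}

// deletableStacks returns only stacks in the wrong pipelinesNamespace whose
// versions all have an empty desiredState, mirroring the desiredState check
// used when syncing with the stack hub index yaml.
func deletableStacks(deployed []Stack, pipelinesNamespace string) []Stack {
	var out []Stack
	for _, s := range deployed {
		if s.PipelinesNamespace == pipelinesNamespace {
			continue // already in the right namespace
		}
		managed := false
		for _, v := range s.Versions {
			if v.DesiredState != "" {
				managed = true // manually managed; must not delete
				break
			}
		}
		if !managed {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	deployed := []Stack{
		{Name: "auto", PipelinesNamespace: "fred",
			Versions: []StackVersion{{Version: "0.2.3"}}},
		{Name: "manual", PipelinesNamespace: "fred",
			Versions: []StackVersion{{Version: "0.4.1", DesiredState: "active"}}},
		{Name: "current", PipelinesNamespace: "kabanero",
			Versions: []StackVersion{{Version: "0.1.0"}}},
	}
	for _, s := range deletableStacks(deployed, "kabanero") {
		fmt.Println("would delete:", s.Name)
	}
}
```

The delete call in the quoted diff would then only run over this filtered list.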
In-Progress