Describe the bug
When draining the old node fails for some reason, that node is left behind detached from any autoscaling group.
To Reproduce
Create a node that has pods in the Pending state. This is one of the reasons why node draining fails; there are others, but the goal is simply a scenario in which draining fails.
Create a CycleNodeRequest to create a new node and detach the old one.
Create a CycleNodeStatus to drain the node.
Run both CycleNodeRequest and CycleNodeStatus.
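One way to satisfy step 1 is a pod whose container image can never be pulled: the scheduler binds it to the node, but the container never starts, so the pod stays in the Pending phase even though it is scheduled. A minimal sketch (the node name and image are placeholders):

```yaml
# Hypothetical pod that binds to the old node but never leaves Pending,
# because the image pull always fails (ErrImagePull / ImagePullBackOff).
apiVersion: v1
kind: Pod
metadata:
  name: stuck-pending
spec:
  nodeName: old-node          # placeholder: the node about to be cycled
  containers:
    - name: app
      image: registry.invalid/does-not-exist:latest  # unpullable image
```

The Pending phase covers the time spent downloading container images, so a pod like this reports Pending while already bound to the node.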
Current behavior
The CycleNodeRequest runs first, detaches the old node, and creates a new node. The CycleNodeStatus is then unable to drain the old node because it sees some pods in the Pending state.
As a result, there is an old node that is not attached to any autoscaling group but still has pods running on it. The new node comes up but only runs daemonset pods.
Expected behavior
The old node should be drained, and the new node should have all the pods from the old node.
There should not be any node that is not attached to an autoscaling group.
Kubernetes Cluster Version v1.25
Cyclops Version v1.7.0
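For reference, the CycleNodeRequest from step 2 of the repro might look something like this; the field names follow the examples in the Cyclops repository, and the group name, labels, and namespace are placeholders for the cluster's actual values:

```yaml
# Sketch of a CycleNodeRequest; exact schema may differ between Cyclops versions.
apiVersion: atlassian.com/v1
kind: CycleNodeRequest
metadata:
  name: cycle-old-nodes
  namespace: kube-system
spec:
  nodeGroupName: "my-node-group"   # placeholder autoscaling group name
  selector:
    matchLabels:
      role: node                   # placeholder label on the target nodes
  cycleSettings:
    method: Drain                  # drain pods off the old node before termination
    concurrency: 1
```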
@skaushal-splunk Are you able to clarify what you mean by "Create a node which has pods in the pending state"?
As far as I'm aware, this is unlikely to be possible - a pod can't be scheduled onto a node and still be pending? Is there another factor causing the pods to be in this state that you aren't mentioning?