Operator falsely errors and does not allow scaling MongoDB cluster replicas #1613
Comments
Hey @KarooolisZi, thanks for opening an issue! Looking at your applied sts and applied cr, one can see that you've increased the storage of the pvcClaim: the size in the mdbc differs from the one in the sts.
I suggest that you update your claim to be equal to the one in the sts and you should be fine.
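As an illustration of that suggestion, the storage request in the MongoDBCommunity CR's volume claim template would be set back to whatever the existing sts reports. This is only a hedged sketch: the claim name `data-volume` and the `10Gi` size are placeholders, not values from this issue.

```yaml
# In the MongoDBCommunity CR (mdbc): keep the claimed storage equal to
# what `kubectl get sts mongodb -oyaml` shows, since StatefulSet
# volumeClaimTemplates cannot be updated in place.
spec:
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume        # assumed claim name
          spec:
            resources:
              requests:
                storage: 10Gi        # assumed: must match the size in the sts
```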
@KarooolisZi you can try the following to have the operator use the new PVC sizes. Please note that this is a limitation of sts: you cannot resize the storage used by an sts. Read more here: kubernetes/enhancements@763b35f. Steps:
@nammn Yes, I am aware of the limitation; I was just thinking about a workaround.
Yes, since the PVCs have the same name, the sts will re-attach them, all assuming you didn't change the name. Closing this one, since the issue itself was a misconfiguration.
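For reference, the re-attach behaviour described above is what makes the usual orphan-delete workaround possible. The following is a hedged sketch only, not necessarily the exact steps the maintainer had in mind; the sts name `mongodb`, the PVC name `data-volume-mongodb-0`, and the `20Gi` size are assumed placeholders, and step 2 only works if the StorageClass allows volume expansion:

```shell
# 1. Delete the StatefulSet but orphan its pods and PVCs
kubectl delete sts mongodb --cascade=orphan

# 2. Resize each orphaned PVC (requires allowVolumeExpansion on the StorageClass)
kubectl patch pvc data-volume-mongodb-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# 3. Re-apply the CR with the matching storage size; the operator recreates
#    the sts, which re-attaches the same-named PVCs
kubectl apply -f database.yaml
```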
Yes, a great misconfiguration. I have definitely spent too much time on it...
@nammn After doing so, my sts shows as 0/0 with replicas 2 and does not recreate pods. I have one pod, but somehow it is not in the sts. It looks like adding an arbiter crashes the operator. I am trying to run 2 members and 1 arbiter; 2 members work perfectly. After getting that to work, I get 0/2 because the mongo agent is not ready after adding the arbiter. Strange behaviour.
@KarooolisZi please open a new issue and provide the required information as given in the issue template.
Hello,
I was trying to adjust the number of replicas in the MongoDB CR YAML manifest.
The only change to the current CR was changing 'replicas' from 2 to 3.
That is strange because, according to the operator, I should be able to do this. My statefulset is not scaling. I checked the last applied configuration and the last successful apply, and there were no differences. I also compared these configurations to my VCS configuration; no changes were detected.
Using MongoDB Community version 6.0.4.
Using operator version 0.7.8.
I tried removing the annotations on the existing CRD, which shows as 'failed' after apply, and reapplying again, with no result. Nothing was changed, so the previous setup is still online and working. However, the operator prompts errors without any apparent reason, even after applying the same configuration with 2 'replicas' again.
I have another environment with the same operator and MongoDB versions. There I was able to add a replica and even an arbiter to the spec; that was also the only change made to the MongoDB CR.
The error I get:
ERROR controllers/mongodb_status_options.go:104 Error deploying MongoDB ReplicaSet: error creating/updating StatefulSet: error creating/updating StatefulSet: StatefulSet.apps "mongodb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
What did you do to encounter the bug?
Steps to reproduce the behavior:
Change spec.members: 2 to spec.members: 3
kubectl apply -f database.yaml
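For illustration, the change above looks like this in the MongoDBCommunity manifest. This is a minimal sketch: the metadata name `mongodb` is taken from the error message, while the remaining fields around `members` are assumed boilerplate, not copied from the reporter's actual CR.

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb          # matches the sts name in the error message
spec:
  members: 3             # was 2; the only field changed
  type: ReplicaSet
  version: "6.0.4"       # MongoDB Community version from the report
```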
What did you expect?
I expected the operator to add an additional member to the existing MongoDB cluster's statefulset, making the number of members 3 instead of the existing 2.
What happened instead?
The MongoDB statefulset still had 2 members. The MongoDB operator threw the error pasted in the description.
Screenshots
If applicable, add screenshots to help explain your problem.
Operator Information
Kubernetes Cluster Information
If possible, please include:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
2024-09-04T07:11:25.133Z	INFO	controllers/replica_set_controller.go:137	Reconciling MongoDB	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z	DEBUG	controllers/replica_set_controller.go:139	Validating MongoDB.Spec	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z	DEBUG	controllers/replica_set_controller.go:148	Ensuring the service exists	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.134Z	DEBUG	agent/replica_set_port_manager.go:122	No port change required	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.142Z	INFO	controllers/replica_set_controller.go:463	Create/Update operation succeeded	{"ReplicaSet": "mongodb-<NAME>/mongodb", "operation": "updated"}
2024-09-04T07:11:25.142Z	DEBUG	controllers/replica_set_controller.go:409	Scaling up the ReplicaSet, the StatefulSet must be updated first	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.142Z	INFO	controllers/replica_set_controller.go:330	Creating/Updating StatefulSet	{"ReplicaSet": "mongodb-<NAME>/mongodb"}
2024-09-04T07:11:25.151Z	ERROR	controllers/mongodb_status_options.go:104	Error deploying MongoDB ReplicaSet: error creating/updating StatefulSet: error creating/updating StatefulSet: StatefulSet.apps "mongodb" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
github.com/mongodb/mongodb-kubernetes-operator/controllers.messageOption.ApplyOption
	/workspace/controllers/mongodb_status_options.go:104
github.com/mongodb/mongodb-kubernetes-operator/pkg/util/status.Update
	/workspace/pkg/util/status/status.go:25
github.com/mongodb/mongodb-kubernetes-operator/controllers.ReplicaSetReconciler.Reconcile
	/workspace/controllers/replica_set_controller.go:200
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:114
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:311
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
mongo-<>. For instance:
kubectl get mdbc -oyaml
kubectl get sts -oyaml
kubectl get pods -oyaml
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/lib/automation/config/cluster-config.json
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
kubectl exec -it mongo-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/automation-agent-verbose.log