
It is not possible to adjust arbiter's resource limits separately from cluster nodes #962

Open
anikievev opened this issue Apr 29, 2022 · 8 comments

@anikievev

We cannot adjust resource limits for the arbiter itself. Currently it just wastes resources, because the arbiter is only needed for elections.

Expected behavior:
We should be able to adjust the arbiter's resources separately from the data-bearing members.

Currently, we use a manifest like this:

```yaml
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: example-mongodb
spec:
  additionalMongodConfig:
    storage.wiredTiger.engineConfig.journalCompressor: zlib
  members: 2
  arbiters: 1
  security:
    authentication:
      ignoreUnknownUsers: true
      modes:
      - SCRAM
  type: ReplicaSet
  users:
  - db: admin
    name: user
    passwordSecretRef:
      name: user-password
    roles:
    - db: admin
      name: clusterAdmin
    - db: admin
      name: userAdminAnyDatabase
    scramCredentialsSecretName: my-scram
  statefulSet:
    spec:
      template:
        spec:
          # resources can be specified by applying an override
          # per container name.
          containers:
            - name: mongod
              resources:
                limits:
                  cpu: "8"
                  memory: 32G
                requests:
                  cpu: "8"
                  memory: 32G
  version: 4.4.13
```
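What we are asking for, roughly, is an arbiter-specific override alongside the existing `statefulSet` one. The sketch below is only an illustration of the requested shape: the `arbiterStatefulSet` field and the arbiter container name are invented here and do not exist in the operator today.

```yaml
# Hypothetical sketch only: the operator does not currently expose a
# separate override for the arbiter pods; `arbiterStatefulSet` and the
# container name are placeholders illustrating the requested feature.
spec:
  arbiterStatefulSet:
    spec:
      template:
        spec:
          containers:
            - name: mongod-arbiter   # hypothetical arbiter container name
              resources:
                limits:
                  cpu: "500m"
                  memory: 512M
                requests:
                  cpu: "500m"
                  memory: 512M
```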

@github-actions
Contributor

github-actions bot commented Jul 2, 2022

This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.

@github-actions github-actions bot added the stale label Jul 2, 2022
@siegenthalerroger

Not stale, still not possible

@github-actions github-actions bot removed the stale label Jul 11, 2022
@github-actions
Contributor

github-actions bot commented Sep 9, 2022

This issue is being marked stale because it has been open for 60 days with no activity. Please comment if this issue is still affecting you. If there is no change, this issue will be closed in 30 days.

@github-actions github-actions bot added the stale label Sep 9, 2022
@pdfrod

pdfrod commented Sep 10, 2022

Still relevant.

@github-actions github-actions bot removed the stale label Sep 11, 2022
@targs08

targs08 commented Oct 9, 2022

Still relevant

@dan-mckean
Collaborator

Hi, this isn't currently on our roadmap, but we do see the value in what's being asked.

We will do this at some point, but there is no current estimate for when.

@ajjaiii

ajjaiii commented Oct 24, 2024

Any update on this issue?

@dosmanak

dosmanak commented Nov 20, 2024

Please implement this feature. I need to set Guaranteed QoS on the mongo pods, but I can't reserve the same amount of CPU for the arbiter.

It should be enough to modify this part:
https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/controllers/replica_set_controller.go#L482
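For reference, Kubernetes assigns the Guaranteed QoS class to a pod only when every container in it has CPU and memory requests equal to its limits. A minimal resources block that would satisfy that for an arbiter container could look like the following (the values are placeholders, not recommendations):

```yaml
# Placeholder values; Guaranteed QoS requires requests == limits
# for both cpu and memory on every container in the pod.
resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 1Gi
```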
