
CCE : initial_node_count should not trigger scale down #975

Closed
gbrd opened this issue Aug 31, 2023 · 1 comment · Fixed by #1061
gbrd (Contributor) commented Aug 31, 2023

I use CCE together with the autoscaler.

Here is the scenario:

  • I set initial_node_count to 1 for my node pool in my Terraform config

  • later, the node pool scales (thanks to the autoscaler) to 2 nodes

  • if I apply my Terraform configuration again (for unrelated changes elsewhere), the node count in my node pool is forced back to 1.

How can I set the initial value (required by the Terraform provider) to 1 while telling the provider not to change the count later if the autoscaler has scaled it?

Expected Behavior

The node pool is not touched (initial_node_count should only be used at creation).

Actual Behavior

The node pool is forced back to count = initial_node_count.

Steps to Reproduce

Create a node pool with initial_node_count = 1 via Terraform
Add some pods so the autoscaler scales the pool to 2 nodes
Run terraform apply again
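The reproduction steps can be sketched as a minimal configuration. Apart from initial_node_count (taken from the report), the attribute names, variable, and values below are illustrative placeholders, not the provider's exact schema:

```hcl
# Hypothetical minimal node pool; only initial_node_count is taken from
# the report above — the remaining attributes are placeholders.
resource "flexibleengine_cce_node_pool_v3" "pool" {
  cluster_id         = var.cluster_id  # assumed variable
  name               = "example-pool"  # assumed name
  initial_node_count = 1               # autoscaler later scales this to 2
  # ...
}
```

After the autoscaler grows the pool to 2 nodes, a second terraform apply diffs the live count (2) against initial_node_count (1) and scales the pool back down, which is the bug reported here.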

@ShiChangkuo (Collaborator) commented:
Sorry for the delay!

The issue will be fixed in the next release, expected next week.

For now, you can add a lifecycle block to the node pool resource as a workaround:

resource "flexibleengine_cce_node_pool_v3" "pool" {
  # ...

  lifecycle {
    ignore_changes = [
      initial_node_count,
    ]
  }
}
