
[BUG] - Resource rancher2_cluster_v2 is marked as completed much too early #1430

Open
pwurbs opened this issue Oct 23, 2024 · 1 comment

pwurbs commented Oct 23, 2024

Rancher Server Setup

  • Rancher version: 2.9.1
  • Installation option (Docker install/Helm Chart): Helm
    • If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): RKE2

Information about the Cluster

  • Kubernetes version: v1.27.16+rke2r2
  • Cluster Type (Local/Downstream): Downstream
    • If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Custom

User Information

  • What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom): Admin

Provider Information

  • What is the version of the Rancher v2 Terraform Provider in use? v5.0.0
  • What is the version of Terraform in use? v1.5.7

Describe the bug

The creation of a rancher2_cluster_v2 resource is reported as completed much too early, after only 7 seconds.
The cluster is visible in Rancher, but it neither has the Active status nor has any nodes registered.

To Reproduce

  • Create the rancher2_cluster_v2 resource (a minimal definition is sketched after the Terraform output below)
  • Observe the output of Terraform
16:00:41  module.rancher-cluster.rancher2_cluster_v2.cluster: Creating...
16:00:48  module.rancher-cluster.rancher2_cluster_v2.cluster: Creation complete after 5s [id=fleet-default/reference-rke2]
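
The exact cluster definition is not included in this report; a minimal configuration along the following lines should be enough to reproduce the behaviour (name and kubernetes_version match the values above, everything else relies on provider defaults and is an assumption):

# Minimal RKE2 custom cluster definition (illustrative; the actual config may differ)
resource "rancher2_cluster_v2" "cluster" {
  name               = "reference-rke2"
  kubernetes_version = "v1.27.16+rke2r2"
}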

Actual Result

The creation of the rancher2_cluster_v2 resource is reported as completed after only 7 seconds, long before the cluster is actually provisioned.

Expected Result

The resource creation should only be reported as complete once the cluster has reached the "Active" status in Rancher.

pwurbs commented Oct 25, 2024

This is my workaround:

resource "null_resource" "wait_for_active_cluster" {
  triggers = {
    always_run = "${timestamp()}"  # Ensures it runs when needed
  }
  provisioner "local-exec" {
    command = <<EOT
      while true; do
        STATUS=$(curl -s -k -H "Authorization: Bearer ${var.rancher_token}" \
        "${var.rancher_url}/v3/clusters/${rancher2_cluster_v2.cluster.cluster_v1_id}" | grep -o '"state":"[^"]*' | sed 's/"state":"//')
        
        if [ "$STATUS" = "active" ]; then
          echo "Cluster is active!"
          exit 0
        fi
        
        echo "Cluster status: $STATUS. Waiting..."
        sleep 30
      done
    EOT
  }
  depends_on = [rancher2_cluster_v2.cluster]
}
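
Anything that needs a fully provisioned cluster can then be ordered after this null_resource via depends_on. A hypothetical example (the catalog resource and its values are illustrative, not part of the original setup):

# Hypothetical downstream resource; it is not created until the wait loop above exits
resource "rancher2_catalog_v2" "rancher_charts" {
  cluster_id = rancher2_cluster_v2.cluster.cluster_v1_id
  name       = "rancher-charts"
  url        = "https://charts.rancher.io"

  depends_on = [null_resource.wait_for_active_cluster]
}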
