[New ephemeral]: aws_eks_cluster_auth should be turned into an ephemeral resource #40343
Shouldn't you use the `exec` parameter for this?
Using the `exec` parameter does solve the issue. I imagine that this is what many do today, including my organization. It does add reliance on an additional executable, though, and therefore reduces portability. The executable is another component that needs to be maintained in CI environments where it might otherwise not be needed.
How do users (humans) access your clusters today (i.e., kubectl)? And what do you use for CI today?
At a different company, but we have a thin wrapper around kubectl and use both GitHub Actions and Jenkins to manage our clusters. We would love to see this supported, as we have had tokens expire multiple times during the first build of a heavy EKS cluster that takes hours to provision.
If you're using kubectl, are you using the AWS CLI to manage your kubeconfig?
@bryantbiggs Running the AWS CLI is not possible in managed services like Terraform Cloud.
Users do have access using kubectl and whatever else they like that works with the provided kubeconfig files. We're not using the AWS CLI to manage kubeconfig files, though not for technical reasons, IIRC. CI is mainly Atlantis plus some pipelines running on GitLab CI. Both could be made to work with additional tooling, but the simpler solution is preferable to me.
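For context, a kubeconfig-file-based setup of the Kubernetes provider (a rough sketch; the path and context name here are illustrative assumptions, not taken from this thread) looks something like:

```hcl
# Sketch: Kubernetes provider reading an externally provided kubeconfig,
# rather than generating credentials via the AWS CLI at runtime.
# The path and context values are placeholders.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "my-eks-cluster"
}
```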
Hi! Thank you for reporting this issue. Terraform (1.10) supports referencing ephemeral resource attributes directly in providers, and having an ephemeral variant of `aws_eks_cluster_auth` available would address this.

If I recall correctly, both the Helm and Kubernetes providers support an `exec` block:

```hcl
exec {
  api_version = "client.authentication.k8s.io/v1alpha1"
  args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
  command     = "aws"
}
```

However, this isn't very native to Terraform, as CI/CD environments must manage an additional binary and ensure it's available at runtime: one more thing to keep track of!

I'm not a subject matter expert on the Kubernetes and Helm providers, but FWIW, using an ephemeral resource:

```hcl
ephemeral "aws_eks_cluster_auth" "example" {
  name = data.aws_eks_cluster.example.id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
  token                  = ephemeral.aws_eks_cluster_auth.example.token
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.example.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
    token                  = ephemeral.aws_eks_cluster_auth.example.token
  }
}
```

Over in #40660 I'm prototyping this, and will provide an update soon. Thanks again!
In the respective providers, the implementation looks similar to what is done in the AWS CLI, but the risk is that the CLI implementation changes while the provider's method doesn't; the contract to users is that the token it returns keeps working with EKS.
That just sounds like you need to rethink your configs and maybe refactor a bit.
Negative. The configs are fine. It's just a lot of environments and customers. Please don't deflect.
So why are profiles causing an issue? You can specify a profile in the AWS provider, and you can pass a profile to the `exec` args. I'm not seeing the issue.
This doesn't make sense. The difference between the two is when the token is retrieved. To give a more concrete example: cluster upgrades. If you use the static token route, you are almost guaranteed to hit the expired token issue, because the control plane takes some time to update (8-10 minutes in the case of EKS), and then some additional time for node groups/etc. before you get down to the point where you start trying to use the token for Kubernetes/Helm resources. If you use the `exec` route, a fresh token is retrieved when the Kubernetes/Helm provider actually needs it.
That's the exact definition of the tight coupling I'm talking about. Everyone (and every CI service) that runs the config then needs the exact named profile defined in their shell config (and of course, as everyone else is complaining about, also needs the AWS CLI installed). It doesn't work for teams larger than a single person. All of that, rather than simply chaining from the AWS provider config, which already has loads of options for resolving an AWS credential.
It does make sense. The ephemeral data source resolves one credential at plan time, and then executes again at apply time and resolves another credential with a new expiration period. For execution models that save the plan and then run an apply from the saved plan at a future time, this is a very big deal.
So you said:

> Everyone (and every CI service) that runs the config then needs the exact named profile defined in their shell config
So you are already "tightly coupled" between your AWS profiles and the AWS provider, no? Because you have to specify the profile you want to use on the respective provider, such as:

```hcl
provider "aws" {
  profile = "foo"
  region  = "us-east-1"
}
```

With an exec provider that looks like:

```hcl
provider "kubernetes" {
  host                   = yyy
  cluster_ca_certificate = base64decode(xxx)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", "example", "--profile", "foo"]
    command     = "aws"
  }
}
```

Or if you are managing your profiles out of band (i.e., loading the respective creds into the environment using the profile), it's just:

```hcl
provider "aws" {
  region = "us-east-1"
}
```

Which means your exec is:

```hcl
provider "kubernetes" {
  host                   = yyy
  cluster_ca_certificate = base64decode(xxx)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", "example"]
    command     = "aws"
  }
}
```

So I don't see how the `exec` approach adds any coupling that isn't already there.
That again requires the AWS CLI to be present (simply setting the profile in the AWS provider does not require the AWS CLI), and specific named profiles to be configured on every system that executes that config. That's not reasonable. More typically, the resolution of the credential is up to the user or the executing environment. The "profile" is just an input variable, not hardcoded. Maybe it's set per environment or customer; maybe it isn't set at all.

I get the hesitation and push back on supporting options native to the cloud providers... The Kubernetes provider doesn't want to have to bundle SDKs for every cloud provider and maintain those code paths, and the Terraform community doesn't want to introduce cross-provider dependencies. But the limitation just makes the user experience kinda awful and limits the use cases. Regardless, that's all off topic. An ephemeral resource for `aws_eks_cluster_auth` would solve the actual problem here.
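A minimal sketch of the "input variable, not hardcoded" point (the `aws_profile` variable name is an assumption for illustration, not something from this thread):

```hcl
# Sketch: the profile is supplied per environment/CI system rather than
# hardcoded; when left unset (null), the AWS provider falls back to its
# normal credential chain (env vars, shared config, instance roles, etc.).
variable "aws_profile" {
  type    = string
  default = null
}

provider "aws" {
  profile = var.aws_profile
  region  = "us-east-1"
}
```

With that shape, each CI system or developer either exports its own credentials or sets the variable; nothing about a named profile is baked into the config itself.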
Description
The data source `aws_eks_cluster_auth` causes the token to be saved in the plan and potentially expire before the apply. This is discussed in #13189. The new ephemeral resources in Terraform 1.10 should address this perfectly.

Requested Resource(s) and/or Data Source(s)
Potential Terraform Configuration
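A sketch of how the requested ephemeral resource might be wired into the Kubernetes provider; the block type and attribute names mirror the existing `aws_eks_cluster_auth` data source and are assumptions until the resource actually ships:

```hcl
# Sketch: assumed shape of the requested ephemeral resource, mirroring the
# existing aws_eks_cluster_auth data source's arguments and attributes.
ephemeral "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = ephemeral.aws_eks_cluster_auth.this.token
}
```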
References
Original issue: #13189.
Documentation:
Example of other ephemeral resources in terraform-provider-aws: `aws_secretsmanager_secret_version` and `aws_kms_secrets` (#40009).

Would you like to implement a fix?
None