
# Amazon EKS Blueprints for Terraform


Welcome to Amazon EKS Blueprints for Terraform!

This repository contains a collection of Terraform modules that aim to make it easier and faster for customers to adopt Amazon EKS. It can be used by AWS customers, partners, and internal AWS teams to configure and manage complete EKS clusters that are fully bootstrapped with the operational software that is needed to deploy and operate workloads.

This project leverages the community terraform-aws-eks modules for deploying EKS clusters.

## Getting Started

The easiest way to get started with EKS Blueprints is to follow our Getting Started guide.

## Documentation

For complete project documentation, please visit our documentation site.

## Examples

To view examples for how you can leverage EKS Blueprints, please see the examples directory.

## Usage

The example below demonstrates how you can use EKS Blueprints to deploy an EKS cluster, a managed node group, and various Kubernetes add-ons.

```hcl
module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.0.2"

  # EKS CLUSTER
  cluster_version           = "1.21"
  vpc_id                    = "<vpcid>"                                      # Enter VPC ID
  private_subnet_ids        = ["<subnet-a>", "<subnet-b>", "<subnet-c>"]     # Enter Private Subnet IDs

  # EKS MANAGED NODE GROUPS
  managed_node_groups = {
    mg_m5 = {
      node_group_name = "managed-ondemand"
      instance_types  = ["m5.large"]
      subnet_ids      = ["<subnet-a>", "<subnet-b>", "<subnet-c>"]
    }
  }
}

module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.2"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  # EKS Addons
  enable_amazon_eks_vpc_cni            = true
  enable_amazon_eks_coredns            = true
  enable_amazon_eks_kube_proxy         = true
  enable_amazon_eks_aws_ebs_csi_driver = true

  # K8s Add-ons
  enable_argocd                       = true
  enable_aws_for_fluentbit            = true
  enable_aws_load_balancer_controller = true
  enable_cluster_autoscaler           = true
  enable_metrics_server               = true
  enable_prometheus                   = true
}
```

The code above will provision the following:

- ✅ A new EKS cluster with a managed node group.
- ✅ Amazon EKS add-ons vpc-cni, CoreDNS, kube-proxy, and aws-ebs-csi-driver.
- ✅ Cluster Autoscaler and Metrics Server for scaling your workloads.
- ✅ Fluent Bit for routing logs.
- ✅ AWS Load Balancer Controller for distributing traffic.
- ✅ Argo CD for declarative GitOps CD for Kubernetes.
- ✅ Prometheus for observability.
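To apply the example above, your root module also needs AWS, Kubernetes, and Helm provider configurations wired to the new cluster. The sketch below is one common way to do this, assuming the module outputs documented in the Outputs section; the region value and data source name are illustrative assumptions, not part of the example above:

```hcl
provider "aws" {
  region = "us-west-2" # illustrative region; adjust as needed
}

# Fetch a short-lived auth token for the cluster created above
data "aws_eks_cluster_auth" "this" {
  name = module.eks_blueprints.eks_cluster_id
}

# Point the Kubernetes provider at the new cluster's API server
provider "kubernetes" {
  host                   = module.eks_blueprints.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

# The kubernetes-addons module deploys Helm charts, so Helm needs the same wiring
provider "helm" {
  kubernetes {
    host                   = module.eks_blueprints.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
```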

## Add-ons

EKS Blueprints makes it easy to provision a wide range of popular Kubernetes add-ons into an EKS cluster. By default, the Terraform Helm provider is used to deploy add-ons from publicly available Helm charts. EKS Blueprints also supports self-hosted Helm charts.

For complete documentation on deploying add-ons, please visit our add-on documentation.
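For example, an add-on's Helm deployment can typically be customized by passing a `helm_config` block alongside the enable flag. A minimal sketch, assuming the `cluster_autoscaler_helm_config` input exposed by the kubernetes-addons module (the chart version and repository values are illustrative; consult the add-on documentation for the exact options each add-on accepts):

```hcl
module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.0.2"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  enable_cluster_autoscaler = true

  # Illustrative customization: pin the chart version and point at a chart
  # repository (replace with a self-hosted repository URL if required)
  cluster_autoscaler_helm_config = {
    version    = "9.15.0" # illustrative chart version
    repository = "https://kubernetes.github.io/autoscaler"
  }
}
```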

## Submodules

The root module calls several submodules that support deploying and integrating external AWS services that can be used in concert with Amazon EKS, including Amazon Managed Prometheus and EMR on EKS. For complete documentation on deploying external services, please visit our submodules documentation.
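These integrations are toggled through root-module inputs (see the Inputs section below). A minimal sketch; the team name and the keys inside `emr_on_eks_teams` are illustrative assumptions, so consult the submodules documentation for the exact schema:

```hcl
module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.0.2"

  cluster_version    = "1.21"
  vpc_id             = "<vpcid>"
  private_subnet_ids = ["<subnet-a>", "<subnet-b>", "<subnet-c>"]

  # Provision an Amazon Managed Prometheus workspace
  enable_amazon_prometheus = true

  # Register the cluster with EMR on EKS and create per-team resources
  enable_emr_on_eks = true
  emr_on_eks_teams = {
    emr-data-team = {                         # hypothetical team name
      namespace          = "emr-data-team"    # hypothetical namespace
      job_execution_role = "emr-eks-data-team"
    }
  }
}
```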

## Motivation

Kubernetes is a powerful and extensible container orchestration technology that allows you to deploy and manage containerized applications at scale. Its extensible nature also allows you to use a wide range of popular open-source tools, commonly referred to as add-ons, in Kubernetes clusters. With so many tools and design choices available, however, building a tailored EKS cluster that meets your application's specific needs can take a significant amount of time. It involves integrating a wide range of open-source tools and AWS services and requires deep expertise in both AWS and Kubernetes.

AWS customers have asked for examples that demonstrate how to integrate the landscape of Kubernetes tools and make it easy to provision complete, opinionated EKS clusters that meet specific application requirements. With EKS Blueprints, customers can configure and deploy purpose-built EKS clusters and start onboarding workloads in days rather than months.

## Support & Feedback

EKS Blueprints for Terraform is maintained by AWS Solution Architects. It is not part of an AWS service, and support is provided on a best-effort basis by the EKS Blueprints community.

To post feedback, submit feature ideas, or report bugs, please use the Issues section of this GitHub repo.

For architectural details, step-by-step instructions, and customization options, see our documentation site.

If you are interested in contributing to EKS Blueprints, see the Contribution guide.


## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| aws | >= 3.72 |
| helm | >= 2.4.1 |
| http | 2.4.1 |
| kubectl | >= 1.14 |
| kubernetes | >= 2.10 |
| local | >= 2.1 |
| null | >= 3.1 |

## Providers

| Name | Version |
|------|---------|
| aws | >= 3.72 |
| http | 2.4.1 |
| kubernetes | >= 2.10 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| aws_eks | terraform-aws-modules/eks/aws | v18.17.0 |
| aws_eks_fargate_profiles | ./modules/aws-eks-fargate-profiles | n/a |
| aws_eks_managed_node_groups | ./modules/aws-eks-managed-node-groups | n/a |
| aws_eks_self_managed_node_groups | ./modules/aws-eks-self-managed-node-groups | n/a |
| aws_eks_teams | ./modules/aws-eks-teams | n/a |
| aws_managed_prometheus | ./modules/aws-managed-prometheus | n/a |
| eks_tags | ./modules/aws-resource-tags | n/a |
| emr_on_eks | ./modules/emr-on-eks | n/a |
| kms | ./modules/aws-kms | n/a |

## Resources

| Name | Type |
|------|------|
| kubernetes_config_map.amazon_vpc_cni | resource |
| kubernetes_config_map.aws_auth | resource |
| aws_caller_identity.current | data source |
| aws_eks_cluster.cluster | data source |
| aws_iam_policy_document.eks_key | data source |
| aws_iam_session_context.current | data source |
| aws_partition.current | data source |
| aws_region.current | data source |
| http_http.eks_cluster_readiness | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| amazon_prometheus_workspace_alias | AWS Managed Prometheus workspace name | `string` | `null` | no |
| application_teams | Map of maps of application teams to create | `any` | `{}` | no |
| aws_auth_additional_labels | Additional Kubernetes labels applied to the aws-auth ConfigMap | `map(string)` | `{}` | no |
| cloudwatch_log_group_kms_key_id | If a KMS key ARN is set, this key is used to encrypt the corresponding log group. Ensure the KMS key has an appropriate key policy (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/encrypt-log-data-kms.html) | `string` | `null` | no |
| cloudwatch_log_group_retention_in_days | Number of days to retain log events | `number` | `90` | no |
| cluster_additional_security_group_ids | List of additional, externally created security group IDs to attach to the cluster control plane | `list(string)` | `[]` | no |
| cluster_enabled_log_types | List of desired control plane log types to enable | `list(string)` | `["api", "audit", "authenticator", "controllerManager", "scheduler"]` | no |
| cluster_encryption_config | Configuration block with encryption configuration for the cluster | `list(object({ provider_key_arn = string, resources = list(string) }))` | `[]` | no |
| cluster_endpoint_private_access | Indicates whether the EKS private API server endpoint is enabled | `bool` | `false` | no |
| cluster_endpoint_public_access | Indicates whether the EKS public API server endpoint is enabled | `bool` | `true` | no |
| cluster_endpoint_public_access_cidrs | List of CIDR blocks that can access the Amazon EKS public API server endpoint | `list(string)` | `["0.0.0.0/0"]` | no |
| cluster_identity_providers | Map of cluster identity provider configurations to enable for the cluster. Note: this is different from and separate from IRSA | `any` | `{}` | no |
| cluster_ip_family | The IP family used to assign Kubernetes pod and service addresses. Valid values are `ipv4` (default) and `ipv6`. The IP family can only be specified when the cluster is created; changing this value forces a new cluster to be created | `string` | `"ipv4"` | no |
| cluster_kms_key_additional_admin_arns | List of additional IAM ARNs that should have full access (`kms:*`) in the KMS key policy | `list(string)` | `[]` | no |
| cluster_kms_key_arn | A valid EKS cluster KMS key ARN to encrypt Kubernetes secrets | `string` | `null` | no |
| cluster_kms_key_deletion_window_in_days | The waiting period, in days (7-30), after which AWS KMS deletes the KMS key | `number` | `30` | no |
| cluster_name | EKS cluster name | `string` | `""` | no |
| cluster_security_group_additional_rules | List of additional security group rules to add to the cluster security group created. Set `source_node_security_group = true` inside rules to set the node security group as source | `any` | `{}` | no |
| cluster_service_ipv4_cidr | The CIDR block to assign Kubernetes service IP addresses from. If you don't specify a block, Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks | `string` | `null` | no |
| cluster_service_ipv6_cidr | The IPv6 service CIDR block to assign Kubernetes service IP addresses from | `string` | `null` | no |
| cluster_timeouts | Create, update, and delete timeout configurations for the cluster | `map(string)` | `{}` | no |
| cluster_version | Kubernetes `<major>.<minor>` version to use for the EKS cluster (e.g. 1.21) | `string` | `"1.21"` | no |
| create_cloudwatch_log_group | Determines whether a log group is created by this module for the cluster logs. If not, AWS automatically creates one if logging is enabled | `bool` | `false` | no |
| create_eks | Create EKS cluster | `bool` | `true` | no |
| create_iam_role | Determines whether an IAM role is created or an existing IAM role is used | `bool` | `true` | no |
| create_node_security_group | Determines whether to create a security group for the node groups or use the existing `node_security_group_id` | `bool` | `true` | no |
| custom_oidc_thumbprints | Additional list of server certificate thumbprints for the OpenID Connect (OIDC) identity provider's server certificate(s) | `list(string)` | `[]` | no |
| eks_readiness_timeout | The maximum time (in seconds) to wait for the EKS API server endpoint to become healthy | `number` | `600` | no |
| emr_on_eks_teams | EMR on EKS teams configuration | `any` | `{}` | no |
| enable_amazon_prometheus | Enable AWS Managed Prometheus service | `bool` | `false` | no |
| enable_emr_on_eks | Enable EMR on EKS | `bool` | `false` | no |
| enable_irsa | Determines whether to create an OpenID Connect provider for EKS to enable IRSA | `bool` | `true` | no |
| enable_windows_support | Enable Windows support | `bool` | `false` | no |
| environment | Environment area, e.g. prod or preprod | `string` | `"preprod"` | no |
| fargate_profiles | Fargate profile configuration | `any` | `{}` | no |
| iam_role_additional_policies | Additional policies to be added to the IAM role | `list(string)` | `[]` | no |
| iam_role_arn | Existing IAM role ARN for the cluster. Required if `create_iam_role` is set to `false` | `string` | `null` | no |
| iam_role_name | Name to use for the IAM role created | `string` | `null` | no |
| iam_role_path | Cluster IAM role path | `string` | `null` | no |
| iam_role_permissions_boundary | ARN of the policy used to set the permissions boundary for the IAM role | `string` | `null` | no |
| managed_node_groups | Managed node groups configuration | `any` | `{}` | no |
| map_accounts | Additional AWS account numbers to add to the aws-auth ConfigMap | `list(string)` | `[]` | no |
| map_roles | Additional IAM roles to add to the aws-auth ConfigMap | `list(object({ rolearn = string, username = string, groups = list(string) }))` | `[]` | no |
| map_users | Additional IAM users to add to the aws-auth ConfigMap | `list(object({ userarn = string, username = string, groups = list(string) }))` | `[]` | no |
| node_security_group_additional_rules | List of additional security group rules to add to the node security group created. Set `source_cluster_security_group = true` inside rules to set the cluster security group as source | `any` | `{}` | no |
| openid_connect_audiences | List of OpenID Connect audience client IDs to add to the IRSA provider | `list(string)` | `[]` | no |
| org | Tenant organization name, e.g. aws | `string` | `""` | no |
| platform_teams | Map of maps of platform teams to create | `any` | `{}` | no |
| private_subnet_ids | List of private subnet IDs for the cluster and worker nodes | `list(string)` | `[]` | no |
| public_subnet_ids | List of public subnet IDs for the worker nodes | `list(string)` | `[]` | no |
| self_managed_node_groups | Self-managed node groups configuration | `any` | `{}` | no |
| tags | Additional tags, e.g. `map("BusinessUnit", "XYZ")` | `map(string)` | `{}` | no |
| tenant | Account name or unique account ID, e.g. apps, management, or aws007 | `string` | `"aws"` | no |
| terraform_version | Terraform version | `string` | `"Terraform"` | no |
| vpc_id | VPC ID | `string` | n/a | yes |
| worker_additional_security_group_ids | List of additional security group IDs to attach to worker instances | `list(string)` | `[]` | no |
| zone | Zone, e.g. dev, qa, load, or ops | `string` | `"dev"` | no |

## Outputs

| Name | Description |
|------|-------------|
| amazon_prometheus_workspace_endpoint | Amazon Managed Prometheus workspace endpoint |
| amazon_prometheus_workspace_id | Amazon Managed Prometheus workspace ID |
| cluster_primary_security_group_id | Cluster security group created by Amazon EKS for the cluster. Managed node groups use this security group for control-plane-to-data-plane communication. Referred to as 'Cluster security group' in the EKS console |
| cluster_security_group_arn | Amazon Resource Name (ARN) of the cluster security group |
| cluster_security_group_id | EKS control plane security group ID |
| configure_kubectl | Command to update your kubeconfig; make sure you're logged in with the correct AWS profile before running it |
| eks_cluster_certificate_authority_data | Base64-encoded certificate data required to communicate with the cluster |
| eks_cluster_endpoint | Endpoint for your Kubernetes API server |
| eks_cluster_id | Amazon EKS cluster name |
| eks_cluster_status | Amazon EKS cluster status |
| eks_oidc_issuer_url | URL of the EKS cluster OIDC issuer |
| eks_oidc_provider_arn | ARN of the OIDC provider if `enable_irsa = true` |
| emr_on_eks_role_arn | IAM execution role ARN for EMR on EKS |
| emr_on_eks_role_id | IAM execution role ID for EMR on EKS |
| fargate_profiles | Outputs from EKS Fargate profiles |
| fargate_profiles_aws_auth_config_map | Fargate profiles AWS auth map |
| fargate_profiles_iam_role_arns | IAM role ARNs for Fargate profiles |
| managed_node_group_arn | Managed node group ARN |
| managed_node_group_aws_auth_config_map | Managed node groups AWS auth map |
| managed_node_group_iam_instance_profile_arns | IAM instance profile ARNs of managed node groups |
| managed_node_group_iam_instance_profile_id | IAM instance profile IDs of managed node groups |
| managed_node_group_iam_role_arns | IAM role ARNs of managed node groups |
| managed_node_group_iam_role_names | IAM role names of managed node groups |
| managed_node_groups | Outputs from EKS managed node groups |
| managed_node_groups_id | EKS managed node group IDs |
| managed_node_groups_status | EKS managed node group status |
| oidc_provider | The OpenID Connect identity provider (issuer URL without leading `https://`) |
| self_managed_node_group_autoscaling_groups | Autoscaling group names of self-managed node groups |
| self_managed_node_group_aws_auth_config_map | Self-managed node groups AWS auth map |
| self_managed_node_group_iam_instance_profile_id | IAM instance profile IDs of self-managed node groups |
| self_managed_node_group_iam_role_arns | IAM role ARNs of self-managed node groups |
| self_managed_node_groups | Outputs from EKS self-managed node groups |
| teams | Outputs from EKS teams |
| windows_node_group_aws_auth_config_map | Windows node groups AWS auth map |
| worker_node_security_group_arn | Amazon Resource Name (ARN) of the worker node shared security group |
| worker_node_security_group_id | ID of the worker node shared security group |

## Security

See CONTRIBUTING for more information.

## License

Apache-2.0 Licensed. See LICENSE.
