DBP-333: Expand the Terraform module for autoscaling functionality #17

Merged
Commits (100)
a8ed6b9
Initial commit with changes to introduce auto scaling node pools
marhode Oct 27, 2023
91fd096
Test changes
marhode Nov 1, 2023
fd8f71c
Made changes to the nodepools for scaling testing
marhode Nov 1, 2023
2fb90f7
Additions to test the terraform nodepool creation
marhode Nov 2, 2023
c002756
Fix of error due to variable definition and expansion of the for loop…
marhode Nov 3, 2023
4bcf10f
Fix for object creation constructor
marhode Nov 3, 2023
f12b253
Terraform function call fix
marhode Nov 3, 2023
5ddddaa
Fixed the ip pool creation the new nodepool creation
marhode Nov 3, 2023
a3afb82
Fix variables in nodepool object
marhode Nov 3, 2023
8c72f61
Fixed typo
marhode Nov 3, 2023
471d36f
Removed count for now
marhode Nov 3, 2023
25893ed
Fixes towards legacy object creation to be filled with correct variables
marhode Nov 6, 2023
4008c1d
Changed definition of custom_nodepools in locals
marhode Nov 6, 2023
ba91cb8
Typo in availabilityzone fixed
marhode Nov 6, 2023
4986ce1
Rename ippools resource
marhode Nov 6, 2023
65d0b31
Fixed naming of ippools
marhode Nov 6, 2023
8482d5a
Enable to use both legacy and scaling nodepools
marhode Nov 7, 2023
873022b
Fixes to variable scopes
marhode Nov 7, 2023
bdc44ce
Fix to the legacy object for merging
marhode Nov 7, 2023
d0f1965
Test for merge of legacy and scaling lists
marhode Nov 7, 2023
badd726
Changed merge to concat of two lists
marhode Nov 7, 2023
2e044d1
turn object tolist for concat
marhode Nov 7, 2023
a4387a5
Fixed syntax error
marhode Nov 7, 2023
7ef35ab
Test for conditional fix
marhode Nov 7, 2023
41e8294
Try setunion to combine both object lists together
marhode Nov 7, 2023
fae9c1d
Added missing variables to object
marhode Nov 7, 2023
613e6e8
added tolist
marhode Nov 7, 2023
2106b91
Fixed associated lans
marhode Nov 7, 2023
5512de3
Small readability adjustments
marhode Nov 8, 2023
8bba887
Testing purpose legacy min and max node count equals node count
marhode Nov 9, 2023
3182dba
Undo last test because it doesnt work
marhode Nov 9, 2023
6648900
Small fix
marhode Nov 9, 2023
5e7cf74
Simplified check for legacy and scaling deployment
marhode Nov 9, 2023
f09ba9b
Test with legacy only
marhode Nov 9, 2023
1d43bf6
Test with legacy and scaling = false
marhode Nov 9, 2023
ca83656
Test nodepool labels
marhode Nov 10, 2023
e566fdc
Corrected nodepool name
marhode Nov 13, 2023
dc5e74f
changed legacy nodepool name
maxi418 Nov 16, 2023
01ddd83
changed legacy nodepool name
maxi418 Nov 20, 2023
47cc49e
added moved block
maxi418 Nov 20, 2023
057d827
removed moved block
maxi418 Nov 20, 2023
908a05b
enabled outputs and variables for compatibility
maxi418 Nov 22, 2023
2d04b11
remove variables
maxi418 Nov 22, 2023
9cc5837
Added ip pool creation code to be downward compatible with sc-legacy
marhode Nov 23, 2023
a9aefb9
Added optional ip list
marhode Nov 23, 2023
996a713
Fixed typo
marhode Nov 23, 2023
58441b5
Added capabilities to include custom ip lists
marhode Nov 23, 2023
47b3582
Removed bracelets
marhode Nov 23, 2023
9cc6d9c
Changed the name of non-scaling nodepools
marhode Nov 23, 2023
1ab4b6f
removed index due to not found fail
marhode Nov 23, 2023
d3eb599
Fixed inconsistency of types in ip pools
marhode Nov 23, 2023
793a549
Missed one null
marhode Nov 23, 2023
6d5f456
Changed the default public_ips to list of empty list
marhode Nov 23, 2023
a4e5e02
fix typo
marhode Nov 23, 2023
0326e0a
Changed variable to availability_zone
marhode Nov 23, 2023
123b0d5
Test for ip lists inconsistence
marhode Nov 23, 2023
7a123af
Added second addition to ip pool list selection
marhode Nov 23, 2023
ff3ed9e
Changed list of list of ips to map of list of list of list
marhode Nov 23, 2023
5877e7a
changed public_ips definition
maxi418 Nov 23, 2023
0f4f573
fixed typing of lists
maxi418 Nov 23, 2023
ffb6fa9
Added slice to ip pool list
marhode Nov 24, 2023
ec432ba
Added +1 to the slice of ip lists
marhode Nov 24, 2023
a2a81a7
Adjusted maintenance times
marhode Nov 24, 2023
81a99a8
increment nodepool counter in name
maxi418 Nov 24, 2023
68e860d
removed cluster name from key
maxi418 Nov 24, 2023
35df423
Merge branch 'DBP-333-Expand-the-terraform-module-for-autoscaling-fun…
maxi418 Nov 24, 2023
1a63db8
changed for-each loop
maxi418 Nov 24, 2023
3f8de6d
Removed cluster name from keys to match between ip and nodepools
marhode Nov 27, 2023
ac0091c
fixed nodepool-count problem
maxi418 Nov 27, 2023
e0cfae2
Merge remote-tracking branch 'origin/main' into DBP-333-Expand-the-te…
maxi418 Nov 27, 2023
485c794
terraform-docs: automated action
github-actions[bot] Nov 27, 2023
1a87886
changed name for readability
maxi418 Nov 29, 2023
a75716c
add ids to output
maxi418 Nov 30, 2023
529e9c5
terraform-docs: automated action
github-actions[bot] Nov 30, 2023
567d4a7
added ids for scaling nodepools
maxi418 Nov 30, 2023
68bb462
Merge branch 'DBP-333-Expand-the-terraform-module-for-autoscaling-fun…
maxi418 Nov 30, 2023
cb399e1
fixed public ips for empty list
maxi418 Nov 30, 2023
5735a4a
added output for ippools
maxi418 Nov 30, 2023
abf3376
terraform-docs: automated action
github-actions[bot] Nov 30, 2023
87c71f6
updated ip output
maxi418 Nov 30, 2023
430fc11
Merge branch 'DBP-333-Expand-the-terraform-module-for-autoscaling-fun…
maxi418 Nov 30, 2023
90c590c
extended public ips condition for custom nodes
maxi418 Nov 30, 2023
80368f2
fixed ippools if no public ippools needed
maxi418 Nov 30, 2023
d72dac9
fix public ips
maxi418 Nov 30, 2023
40251bd
test slice function
maxi418 Nov 30, 2023
d366c71
test slice function
maxi418 Nov 30, 2023
b89702d
test slice function
maxi418 Nov 30, 2023
681f400
test slice function
maxi418 Nov 30, 2023
3573d23
added index for public ip conditions
maxi418 Nov 30, 2023
a791b2d
public ip generation
maxi418 Nov 30, 2023
90f2290
test nodepool output
maxi418 Nov 30, 2023
2641cec
terraform-docs: automated action
github-actions[bot] Nov 30, 2023
2b4459d
public ips
maxi418 Nov 30, 2023
8d52a6b
removed debugging output
maxi418 Nov 30, 2023
a9bdf71
terraform-docs: automated action
github-actions[bot] Nov 30, 2023
d59722a
added outputs for legacy compatibility
maxi418 Nov 30, 2023
702de8c
Merge branch 'DBP-333-Expand-the-terraform-module-for-autoscaling-fun…
maxi418 Nov 30, 2023
6627a74
terraform-docs: automated action
github-actions[bot] Nov 30, 2023
4a46930
cleanup
maxi418 Nov 30, 2023
d354edd
Merge branch 'DBP-333-Expand-the-terraform-module-for-autoscaling-fun…
maxi418 Nov 30, 2023
13 changes: 6 additions & 7 deletions modules/ionos-k8s-cluster/README.md
@@ -21,9 +21,11 @@ No modules.
| <a name="input_allow_node_pool_replacement"></a> [allow\_node\_pool\_replacement](#input\_allow\_node\_pool\_replacement) | When set to true, allows the update of immutable fields by first destroying and then re-creating the node pool. | `bool` | `false` | no |
| <a name="input_api_subnet_allow_list"></a> [api\_subnet\_allow\_list](#input\_api\_subnet\_allow\_list) | n/a | `list(string)` | `null` | no |
| <a name="input_associated_lans"></a> [associated\_lans](#input\_associated\_lans) | The lans as objects in a list [{lan[0] with id and routes\_list, lan[1] with id and routes\_list}, ...] | <pre>list(object({<br> id = number<br> routes_list = list(any)<br> }))</pre> | `[]` | no |
| <a name="input_availability_zone"></a> [availability\_zone](#input\_availability\_zone) | n/a | `string` | `"ZONE_1"` | no |
| <a name="input_availability_zone"></a> [availability\_zone](#input\_availability\_zone) | Not needed anymore, we work with a list of zones now | `string` | `"ZONE_1"` | no |
| <a name="input_cpu_family"></a> [cpu\_family](#input\_cpu\_family) | Valid cpu family | `string` | `"INTEL_SKYLAKE"` | no |
| <a name="input_create_public_ip_pools"></a> [create\_public\_ip\_pools](#input\_create\_public\_ip\_pools) | n/a | `bool` | `false` | no |
| <a name="input_custom_nodepools"></a> [custom\_nodepools](#input\_custom\_nodepools) | This object describes nodepool configurations for dynamic creation of nodepools with a specific purpose and resources. | <pre>list(object({<br> name = string<br> auto_scaling = optional(bool, false)<br> node_count = number<br> nodepool_per_zone_count = number<br> min_node_count= optional(number, null)<br> max_node_count= optional(number, null)<br> ram_size = number<br> core_count = number<br> purpose = string<br> availability_zones = list(string)<br> allow_node_pool_replacement = bool<br> associated_lans = list(object({<br> id = number<br> routes_list = list(any)<br> }))<br> maintenance_day = string<br> maintenance_hour = number<br> storage_type = string<br> storage_size = number<br> cpu_family = string<br> create_public_ip_pools = bool<br> public_ips = map(list(list(string)))<br> })<br> )</pre> | <pre>[<br> {<br> "allow_node_pool_replacement": null,<br> "associated_lans": null,<br> "auto_scaling": false,<br> "availability_zones": [<br> "ZONE_1",<br> "ZONE_2"<br> ],<br> "core_count": null,<br> "cpu_family": null,<br> "create_public_ip_pools": null,<br> "maintenance_day": null,<br> "maintenance_hour": null,<br> "max_node_count": null,<br> "min_node_count": null,<br> "name": "Legacy",<br> "node_count": null,<br> "nodepool_per_zone_count": null,<br> "public_ips": {<br> "ZONE_1": [<br> [<br> ""<br> ]<br> ],<br> "ZONE_2": [<br> [<br> ""<br> ]<br> ]<br> },<br> "purpose": "legacy",<br> "ram_size": null,<br> "storage_size": null,<br> "storage_type": null<br> }<br>]</pre> | no |
| <a name="input_enable_legacy_and_scaling"></a> [enable\_legacy\_and\_scaling](#input\_enable\_legacy\_and\_scaling) | Determins if both should be used, otherwise only one will be used where custom\_nodepools overwrite legacy ones | `bool` | `false` | no |
| <a name="input_k8s_version"></a> [k8s\_version](#input\_k8s\_version) | Kubernetes version | `string` | `"1.24.15"` | no |
| <a name="input_maintenance_day"></a> [maintenance\_day](#input\_maintenance\_day) | On which day to do the maintenance | `string` | `"Saturday"` | no |
| <a name="input_maintenance_hour"></a> [maintenance\_hour](#input\_maintenance\_hour) | On which hour to do the maintenance | `number` | `3` | no |
@@ -41,8 +43,6 @@ No modules.
| <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | n/a |
| <a name="output_nodepool_zone1_id"></a> [nodepool\_zone1\_id](#output\_nodepool\_zone1\_id) | n/a |
| <a name="output_nodepool_zone1_ips"></a> [nodepool\_zone1\_ips](#output\_nodepool\_zone1\_ips) | n/a |
| <a name="output_nodepool_zone2_id"></a> [nodepool\_zone2\_id](#output\_nodepool\_zone2\_id) | n/a |
| <a name="output_nodepool_zone2_ips"></a> [nodepool\_zone2\_ips](#output\_nodepool\_zone2\_ips) | n/a |
## Requirements

| Name | Version |
@@ -53,9 +53,8 @@ No modules.

| Name | Type |
|------|------|
| [ionoscloud_ipblock.ippools_zone1](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/ipblock) | resource |
| [ionoscloud_ipblock.ippools_zone2](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/ipblock) | resource |
| [ionoscloud_ipblock.ippools](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/ipblock) | resource |
| [ionoscloud_k8s_cluster.cluster](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/k8s_cluster) | resource |
| [ionoscloud_k8s_node_pool.nodepool_zone1](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/k8s_node_pool) | resource |
| [ionoscloud_k8s_node_pool.nodepool_zone2](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/k8s_node_pool) | resource |
| [ionoscloud_k8s_node_pool.nodepool_legacy](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/k8s_node_pool) | resource |
| [ionoscloud_k8s_node_pool.nodepool_scaling](https://registry.terraform.io/providers/ionos-cloud/ionoscloud/6.3.6/docs/resources/k8s_node_pool) | resource |
<!-- END_TF_DOCS -->
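
For reference, a hedged usage sketch of a single autoscaling nodepool, based solely on the inputs documented above; the module source path is an assumption, and required inputs not shown in this excerpt (cluster name, datacenter, etc.) are omitted:

module "k8s_cluster" {
  source = "./modules/ionos-k8s-cluster" # assumed path

  custom_nodepools = [
    {
      name                        = "scaling-workers"
      auto_scaling                = true
      node_count                  = 2
      nodepool_per_zone_count     = 1
      min_node_count              = 2
      max_node_count              = 6
      ram_size                    = 16384
      core_count                  = 4
      purpose                     = "scaling"
      availability_zones          = ["ZONE_1", "ZONE_2"]
      allow_node_pool_replacement = false
      associated_lans             = []
      maintenance_day             = "Saturday"
      maintenance_hour            = 3
      storage_type                = "SSD"
      storage_size                = 100
      cpu_family                  = "INTEL_SKYLAKE"
      create_public_ip_pools      = false
      public_ips                  = { ZONE_1 = [[]], ZONE_2 = [[]] }
    }
  ]
}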
74 changes: 60 additions & 14 deletions modules/ionos-k8s-cluster/locals.tf
@@ -3,20 +3,66 @@ locals {
# Valid choices depend on the datacenter location:
# de/txl, de/fra: INTEL_SKYLAKE
cpu_family = var.cpu_family
# Number of nodes per nodepool.
# Note that one nodepool is created in each availability zone.
# Example: With 2 zones, the actual total node count is twice as high as the number stated here.
node_count = var.node_count
# This cannot be changed after the nodepool is created, because all worker nodes must be identical at all times.
core_count = var.core_count
# This cannot be changed after the nodepool is created, because all worker nodes must be identical at all times.
ram_size = var.ram_size != null ? var.ram_size : 16384
# The number of nodepools per zone.
nodepool_per_zone_count = var.nodepool_per_zone_count
public_ip_pool_zone1 = var.create_public_ip_pools ? ionoscloud_ipblock.ippools_zone1[*].ips : var.public_ip_pool_zone1
public_ip_pool_zone2 = var.create_public_ip_pools ? ionoscloud_ipblock.ippools_zone2[*].ips : var.public_ip_pool_zone2
maintenance_day = var.maintenance_day
maintenance_hour = var.maintenance_hour

api_subnet_allow_list = var.api_subnet_allow_list

# Create a legacy object for possible merging into the nodepool list (only used when both legacy and custom nodepools are in use)
legacy_object = tolist([{
name = "Legacy"
auto_scaling = false
nodepool_per_zone_count = null
node_count = null
min_node_count = null
max_node_count = null
ram_size = null
core_count = null
purpose = "legacy"
availability_zones = ["ZONE_1", "ZONE_2"]
allow_node_pool_replacement = null
associated_lans = var.associated_lans
maintenance_day = null
maintenance_hour = null
storage_type = null
storage_size = null
cpu_family = null
create_public_ip_pools = null
public_ips = {ZONE_1=[[]], ZONE_2=[[]]}
}])

# Check whether both legacy and scaling nodepools should be used; if so, merge the legacy object into the nodepool list when needed (default = false).
# If false: nothing to do, because the deployment is either legacy-only or scaling-only.
# If true: check whether the first object is a legacy one; if not, the list contains only scaling objects, so the legacy object is appended.
legacy_check = var.enable_legacy_and_scaling == false ? var.custom_nodepools : (var.custom_nodepools[0].purpose != "legacy" ? tolist(concat(var.custom_nodepools, local.legacy_object)) : var.custom_nodepools)
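# Example: with enable_legacy_and_scaling = true and a custom_nodepools list that contains only
# scaling pools, the legacy object above is appended, so existing legacy nodepools stay managed
# alongside the new scaling ones.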

# availability_zone_split duplicates each object once per availability zone: for ["ZONE_1", "ZONE_2"] we get two objects, one per zone.
availability_zone_split = toset(flatten([for n in local.legacy_check : [for x in n.availability_zones : merge(n,{availability_zone = x})] ]))
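# Example: {name = "pool", availability_zones = ["ZONE_1", "ZONE_2"], ...} expands to
# {name = "pool", availability_zone = "ZONE_1", ...} and {name = "pool", availability_zone = "ZONE_2", ...};
# merge() copies all other attributes unchanged.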

# Loop through the nodepool list to detect empty (null) values and fill them with the legacy defaults.
# Only required for downward compatibility with legacy nodepools (if no downward compatibility is required, just loop over var.custom_nodepools instead).
custom_nodepools = [ for np in local.availability_zone_split : {
name = np.name
purpose = np.purpose
auto_scaling = np.auto_scaling
min_node_count = np.min_node_count
max_node_count = np.max_node_count
availability_zone = np.availability_zone
nodepool_per_zone_count = np.nodepool_per_zone_count != null ? np.nodepool_per_zone_count : var.nodepool_per_zone_count
node_count = np.node_count != null ? np.node_count : var.node_count
ram_size = np.ram_size != null ? np.ram_size : var.ram_size
core_count = np.core_count != null ? np.core_count : var.core_count
allow_node_pool_replacement = np.allow_node_pool_replacement != null ? np.allow_node_pool_replacement : var.allow_node_pool_replacement
associated_lans = np.associated_lans != null ? np.associated_lans : var.associated_lans
maintenance_day = np.maintenance_day != null ? np.maintenance_day : var.maintenance_day
maintenance_hour = np.maintenance_hour != null ? np.maintenance_hour : var.maintenance_hour
storage_type = np.storage_type != null ? np.storage_type : var.storage_type
storage_size = np.storage_size != null ? np.storage_size : var.storage_size
cpu_family = np.cpu_family != null ? np.cpu_family : var.cpu_family
create_public_ip_pools = np.create_public_ip_pools != null ? np.create_public_ip_pools : var.create_public_ip_pools
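# public_ips selection (three cases, evaluated below): pools that create their own public IP
# blocks start with an empty list; legacy pools fall back to the zone-specific
# var.public_ip_pool_zone1 / var.public_ip_pool_zone2 values; all other pools use the IP list
# supplied for their zone in the object itself.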
public_ips = np.create_public_ip_pools == true ? [[]] : np.purpose == "legacy" ? (np.availability_zone == "ZONE_1" ? var.public_ip_pool_zone1 : var.public_ip_pool_zone2) : np.public_ips[np.availability_zone]
}
]

# nodepool_per_zone_creator duplicates each per-zone object nodepool_per_zone_count times, tagging each copy with its nodepool_index.
nodepool_per_zone_creator = toset(flatten([for n in local.custom_nodepools : [for x in range(0, n.nodepool_per_zone_count) : merge(n,{nodepool_index = x})] ]))
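# Example: an object with nodepool_per_zone_count = 2 yields two copies with nodepool_index = 0
# and nodepool_index = 1, i.e. two nodepools in that availability zone.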
}
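
For illustration only, a minimal sketch of how local.nodepool_per_zone_creator could feed the ionoscloud_k8s_node_pool.nodepool_scaling resource listed in the README above; the for_each key scheme, var.datacenter_id, and the omitted arguments (LANs, maintenance window, public IPs) are assumptions, since the actual resource file is not part of this diff.

resource "ionoscloud_k8s_node_pool" "nodepool_scaling" {
  # Key copies by name, zone, and index so duplicates do not collide (assumed key scheme).
  for_each = {
    for np in local.nodepool_per_zone_creator :
    "${np.name}-${np.availability_zone}-${np.nodepool_index}" => np
    if np.auto_scaling
  }

  name              = "${each.value.name}-${each.value.nodepool_index + 1}"
  k8s_version       = var.k8s_version
  k8s_cluster_id    = ionoscloud_k8s_cluster.cluster.id
  datacenter_id     = var.datacenter_id # assumed module variable
  availability_zone = each.value.availability_zone
  cpu_family        = each.value.cpu_family
  cores_count       = each.value.core_count
  ram_size          = each.value.ram_size
  storage_type      = each.value.storage_type
  storage_size      = each.value.storage_size
  node_count        = each.value.node_count

  # Autoscaling bounds come straight from the per-pool object.
  auto_scaling {
    min_node_count = each.value.min_node_count
    max_node_count = each.value.max_node_count
  }
}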
