enable horizontal autoscaler support, via global ipam for vm ips and vips (also, gitops) #11313
Labels
kind/feature, needs-priority, needs-triage
What would you like to be added (User Story)?
Whether experimenting in a home lab or running a production environment, configuring horizontal scaling of nodes is standard practice, as important as specifying resources in a deployment.yaml or enabling horizontal pod autoscaling.
Detailed Description
Typically, I've noticed providers requiring a VIP and a set of IPs to be specified up front for use with VMs. This doesn't scale.
Imagine a cluster automatically adding worker nodes: suddenly an IP is needed; later the cluster scales down and the IP is freed up. Specifying a range of IPs at the cluster.yaml level isn't efficient; the available IPs should be shared across clusters.
Similarly, imagine spinning up a cluster in a pipeline to test something and then deleting it. Assigning the VIP should be automatic.
One solution for this is DHCP, of course, but IPAM works as well, provided there is a "global pool of IPs" for VMs rather than IPs specified at the cluster.yaml level.
Similar to VM IPs, VIPs could also come from a "global pool of VIPs" managed by IPAM. VIPs can't be assigned via DHCP, so that part of the solution would need to use IPAM.
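As a rough sketch of what a "global pool" could look like, the in-cluster IPAM provider (cluster-api-ipam-provider-in-cluster) already has a cluster-scoped pool kind; the pool names, address ranges, and exact API version below are my assumptions, not an agreed design:

```yaml
# Hypothetical shared pools for all workload clusters (names/ranges made up;
# API version may differ between releases of the in-cluster IPAM provider).
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: node-vm-ips            # addresses handed out to node VMs
spec:
  addresses:
    - 10.0.10.10-10.0.10.250
  prefix: 24
  gateway: 10.0.10.1
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: vips                   # addresses handed out as control plane VIPs
spec:
  addresses:
    - 10.0.20.1-10.0.20.50
  prefix: 24
  gateway: 10.0.20.254
```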
I imagine a document, as part of the provider design guidance, explaining to those implementing a provider that by supporting a global pool of IPs for VMs and VIPs, their users will then be able to use cluster autoscaling.
I could then put in a feature request with my preferred provider to implement the feature or implement it myself following the defined standard.
Anything else you would like to add?
I imagine a provider Kubernetes controller watching Cluster create and delete. On create it uses IPAM to obtain an available VIP and IPs for the node VMs and writes those into the Cluster resource, and everything then proceeds normally. On delete the IPs are reclaimed. A reconciliation loop reclaims IPs as needed, avoiding race conditions.
I could implement this as a separate controller or as part of my current provider, but this seems like it should be defined at a higher level so that eventually all providers might implement it.
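A minimal sketch of that flow, assuming the existing IPAddressClaim contract and ownerReference-based cleanup (the resource names, API versions, and cleanup mechanism are illustrative, not a settled contract): the controller creates a claim owned by the Cluster, waits for the IPAM provider to bind an IPAddress, writes it into the Cluster, and lets garbage collection of the claim return the address to the pool when the Cluster is deleted.

```yaml
# Hypothetical claim created by the controller when it sees a new Cluster.
# The ownerReference ties the claim's lifetime to the Cluster, so deleting
# the Cluster releases the VIP back to the global pool.
apiVersion: ipam.cluster.x-k8s.io/v1beta1   # may be v1alpha1 on older releases
kind: IPAddressClaim
metadata:
  name: my-cluster-vip
  namespace: default
  ownerReferences:
    - apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      name: my-cluster
      uid: "<set by the controller>"
spec:
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: GlobalInClusterIPPool
    name: vips
---
# Once an IPAddress is bound to the claim, the controller copies it into the
# Cluster (simplified; in practice the infrastructure provider usually owns
# controlPlaneEndpoint).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 10.0.20.5   # VIP allocated from the "vips" pool
    port: 6443
```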
It should be possible to adjust or add additional IP ranges at any time. I think IPAM already supports this.
The standard could define things like:
NODE_IP_RANGES
VIP_IP_RANGES
NODE_AND_VIP_IP_RANGES
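Purely as an illustration of how those variables might surface (the variable names come from the list above; the pool kind and the clusterctl-style ${...} substitution are assumptions on my part), a provider's template could feed them straight into a shared pool:

```yaml
# Hypothetical: clusterctl substitutes ${...} from environment variables, so
# operators set the ranges once; further ranges could be appended to
# spec.addresses later if the in-cluster IPAM provider allows in-place edits.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: node-vm-ips
spec:
  addresses:
    - ${NODE_IP_RANGES}        # e.g. "10.0.10.10-10.0.10.250" or a CIDR
  prefix: 24
  gateway: 10.0.10.1
```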
Label(s) to be applied
/kind feature
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.