
enable horizontal autoscaler support, via global ipam for vm ips and vips (also, gitops) #11313

Closed
lknite opened this issue Oct 20, 2024 · 3 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. needs-priority Indicates an issue lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

lknite commented Oct 20, 2024

What would you like to be added (User Story)?

Whether you're experimenting in a home lab or running a production environment, configuring horizontal scaling of nodes is standard practice, as important as specifying resources in a deployment.yaml or enabling horizontal pod autoscaling.

Detailed Description

Typically, I've noticed providers requiring a VIP and a set of IPs to use with VMs, specified per cluster. This doesn't scale.

Imagine clusters automatically adding additional worker nodes: suddenly an IP is needed, and later, when the cluster scales down, the IP is freed up. Specifying a range of IPs at the cluster.yaml level isn't efficient; the available IPs should be shared.

Similarly, imagine spinning up a cluster in a pipeline to test something and then deleting the cluster; setting the VIP should be automatic.

One solution for this is DHCP, of course, but IPAM works as well, provided there is a "global pool of IPs" for VMs rather than IPs specified at the cluster.yaml level.

Similar to VM IPs, VIPs could also come from a "global pool of VIPs" managed via IPAM. VIPs can't use DHCP, so for them this solution would need to use IPAM.
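As a concrete illustration, a shared pool of node IPs might look something like this, assuming the InClusterIPPool resource from the cluster-api-ipam-provider-in-cluster project (the name and address values here are made up):

```yaml
# Hypothetical shared pool of node IPs, borrowed from by any cluster in
# the namespace rather than hardcoding ranges per cluster.yaml.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: shared-node-pool
  namespace: clusters
spec:
  addresses:
    - 10.0.10.100-10.0.10.200
  prefix: 24
  gateway: 10.0.10.1
```

This proposal would essentially extend that existing mechanism so pools are shared across clusters instead of referenced per cluster.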

I imagine a document, as part of the guidance on designing a provider, explaining to implementers that by supporting a pool of IPs for VMs and VIPs, their users gain the ability to use cluster autoscaling.

I could then file a feature request with my preferred provider to implement the feature, or implement it myself following the defined standard.

Anything else you would like to add?

I imagine a provider Kubernetes controller watching for cluster create and delete events. On create, it uses IPAM to get an available VIP and IPs for the node VMs and puts those in the cluster resource; everything then proceeds normally. On delete, the IPs are reclaimed. A reconciliation loop reclaims IPs as needed, avoiding race conditions.
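The allocate/release/reconcile flow described above can be sketched in plain Python. All names here are illustrative, not real Cluster API types; an actual implementation would be a controller-runtime reconciler operating on IPAM claim resources:

```python
import ipaddress

class GlobalPool:
    """Hypothetical shared pool: clusters borrow IPs on create and return
    them on delete; a reconcile pass reclaims IPs from vanished clusters."""

    def __init__(self, cidrs):
        self.available = []
        for cidr in cidrs:
            self.available.extend(str(ip) for ip in ipaddress.ip_network(cidr).hosts())
        self.claims = {}  # cluster name -> list of allocated IPs

    def allocate(self, cluster, count):
        if len(self.available) < count:
            # in a real controller this would surface as a cluster event
            raise RuntimeError("no ips available")
        ips = [self.available.pop(0) for _ in range(count)]
        self.claims.setdefault(cluster, []).extend(ips)
        return ips

    def release(self, cluster):
        # Return a cluster's IPs to the pool (cluster delete).
        self.available.extend(self.claims.pop(cluster, []))

    def reconcile(self, live_clusters):
        # Reclaim IPs held by clusters that no longer exist, e.g. when a
        # delete event was missed, so addresses are never leaked.
        for cluster in list(self.claims):
            if cluster not in live_clusters:
                self.release(cluster)
```

The reconcile pass is what makes the scheme safe against missed events: even if a delete is lost, the next loop returns the addresses to the pool.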

I could implement this as a separate controller or as part of my current provider, but it seems like something that should be defined at a higher level so that eventually all providers might implement it.

It should be possible to adjust or add additional IP ranges at any time; I think IPAM already supports this.

NODE_IP_RANGES
VIP_IP_RANGES

NODE_AND_VIP_IP_RANGES

The standard could define things like:

  • use one or more InClusterIPPool resources per namespace; the namespace in which a cluster is deployed determines which IPs are used
  • if an InClusterIPPool does not exist, a GlobalInClusterIPPool will be used
  • if InClusterIPPools exist but no IPs are available, cluster creation will fail with the event 'no ips available'
  • IPAM IP pools must be tagged as such:
    • cluster-api-vm-ip-range
    • cluster-api-vip-range
  • an IP pool range may be tagged with both a VM IP label and a VIP label
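To make the tagging convention concrete, a pool serving both roles might look like this. The labels are the hypothetical ones proposed above, not an existing convention, and the pool schema again assumes the in-cluster IPAM provider:

```yaml
# Hypothetical: a pool labeled for both VM IPs and VIPs, so a provider
# controller can discover it by label selector at cluster create time.
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: lab-pool
  labels:
    cluster-api-vm-ip-range: ""
    cluster-api-vip-range: ""    # a pool may carry both labels
spec:
  addresses:
    - 192.168.1.50-192.168.1.99
  prefix: 24
  gateway: 192.168.1.1
```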

Label(s) to be applied

/kind feature
One or more /area labels. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. needs-priority Indicates an issue lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Oct 20, 2024
@k8s-ci-robot (Contributor)

This issue is currently awaiting triage.

If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@lknite lknite changed the title enable horizontal autoscaler support, via global ipam for vm ips and vips enable horizontal autoscaler support, via global ipam for vm ips and vips (also, gitops) Oct 20, 2024

lknite commented Oct 20, 2024

I also filed this as a feature request with the Proxmox provider, though it feels like something that would apply to all providers:
ionos-cloud/cluster-api-provider-proxmox#304


lknite commented Oct 22, 2024

I've been thinking about this submission quite a bit, and I think it's actually two different ideas. I'm going to close this issue and, after a bit, resubmit it as two separate ideas. One will be the #304 idea above, but at a higher level, as something all providers can use.

@lknite lknite closed this as completed Oct 22, 2024