
gce: Switch from using targetpools to backend services #16233

Open · wants to merge 1 commit into base: master

Conversation

upodroid
Member

@upodroid upodroid commented Jan 8, 2024

Google recommends creating NLBs using Backend Services instead of TargetPools to take advantage of newer features.

https://cloud.google.com/load-balancing/docs/network/networklb-target-pools

Also, this change "almost" supports global LBs (the TCP target proxy resource is still missing).
Google splits some services into regional vs global variants with identical object types, so I fixed the methods to detect whether a region is being supplied.

Why:

  1. I want to check whether global TCP Proxy LBs avoid triggering DDoS protection when running scale tests.
  2. If DDoS protection does kick in, we can write a Cloud Armor policy that allows 1k rps (10k requests per 10s) from any IP.
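
For context, and not code from this PR: a minimal sketch of the resource chain a backend-service-based external network LB needs with the compute v1 Go client. It creates a regional TCP health check, a regional backend service with an instance group backend, and a forwarding rule that references the backend service instead of a target pool. The project, region, names, and instance group URL are illustrative, and real code would wait for each Operation to finish before the next step.

package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Illustrative values; kops would derive these from the cluster model.
	project, region := "my-project", "us-central1"
	igURL := "https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instanceGroups/nodes"

	// 1. Regional TCP health check for the API server port.
	hc := &compute.HealthCheck{
		Name:           "api-hc",
		Type:           "TCP",
		TcpHealthCheck: &compute.TCPHealthCheck{Port: 443},
	}
	if _, err := svc.RegionHealthChecks.Insert(project, region, hc).Context(ctx).Do(); err != nil {
		log.Fatal(err)
	}

	// 2. A regional backend service replaces the target pool.
	bs := &compute.BackendService{
		Name:                "api-bs",
		Protocol:            "TCP",
		LoadBalancingScheme: "EXTERNAL",
		HealthChecks:        []string{"projects/" + project + "/regions/" + region + "/healthChecks/api-hc"},
		Backends:            []*compute.Backend{{Group: igURL}},
	}
	if _, err := svc.RegionBackendServices.Insert(project, region, bs).Context(ctx).Do(); err != nil {
		log.Fatal(err)
	}

	// 3. The forwarding rule points at the backend service, not a target pool.
	fr := &compute.ForwardingRule{
		Name:                "api-fr",
		IPProtocol:          "TCP",
		Ports:               []string{"443"},
		LoadBalancingScheme: "EXTERNAL",
		BackendService:      "projects/" + project + "/regions/" + region + "/backendServices/api-bs",
	}
	if _, err := svc.ForwardingRules.Insert(project, region, fr).Context(ctx).Do(); err != nil {
		log.Fatal(err)
	}
}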

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jan 8, 2024
@k8s-ci-robot k8s-ci-robot added area/api area/provider/gcp Issues or PRs related to gcp provider size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 8, 2024
@hakman hakman changed the title Switch from using targetpools to backend services gce: Switch from using targetpools to backend services Jan 8, 2024
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 8, 2024
@@ -224,9 +224,6 @@ func validateClusterSpec(spec *kops.ClusterSpec, c *kops.Cluster, fieldPath *fie
lbSpec := spec.API.LoadBalancer
lbPath := fieldPath.Child("api", "loadBalancer")
if spec.GetCloudProvider() != kops.CloudProviderAWS {
if lbSpec.Class != "" {
allErrs = append(allErrs, field.Forbidden(lbPath.Child("class"), "class is only supported on AWS"))
Member

We should ensure the field is only used on AWS and GCP clusters, and with the valid subset of values for each cloud provider.
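
A hedged sketch of how that could look, following the validateClusterSpec style in the diff above. The GCE branch and the reuse of kops.LoadBalancerClassNetwork as the allowed GCE value are illustrative assumptions, not code from this PR.

// Sketch only: validate api.loadBalancer.class per cloud provider.
func validateLoadBalancerClass(spec *kops.ClusterSpec, lbSpec *kops.LoadBalancerAccessSpec, lbPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	switch spec.GetCloudProvider() {
	case kops.CloudProviderAWS:
		// Classic and Network are the existing AWS classes; no extra restriction here.
	case kops.CloudProviderGCE:
		// Hypothetical: on GCE, only accept the class that maps to a backend-service NLB.
		if lbSpec.Class != "" && lbSpec.Class != kops.LoadBalancerClassNetwork {
			allErrs = append(allErrs, field.NotSupported(lbPath.Child("class"), lbSpec.Class,
				[]string{string(kops.LoadBalancerClassNetwork)}))
		}
	default:
		if lbSpec.Class != "" {
			allErrs = append(allErrs, field.Forbidden(lbPath.Child("class"), "class is only supported on AWS and GCE"))
		}
	}
	return allErrs
}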

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 13, 2024
@hakman
Member

hakman commented Jan 20, 2024

/cc @justinsb
/assign @justinsb

@justinsb
Member

Thanks for this @upodroid ... it looks like we do use backend services with internal load balancers. I am proposing that we have internal load balancers for both api & kops-controller in all circumstances, so we should be using backend services (my motivation was the firewall rule bug).

I think the issue you hit was on the node/pod -> apiserver traffic maybe getting rate limited, so it might be good to validate that if/when we make that switch, that the rate limiting goes away.

That said, I don't oppose the idea of using backend services on the "user-facing" traffic also - the IPv6 support seems compelling in particular! I think we should sequence this after the better internal LB support though, do you agree?

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from justinsb. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@upodroid
Member Author

That said, I don't oppose the idea of using backend services on the "user-facing" traffic also - the IPv6 support seems compelling in particular! I think we should sequence this after the better internal LB support though, do you agree?

Yes

I need to split this PR into smaller pieces:

  1. The gce client changes to support regional/global resources with a unified function (see the sketch below)
  2. The LB changes
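
A minimal sketch of that unified function, under assumptions and not this PR's actual code: with google.golang.org/api/compute/v1 (same imports as the sketch in the PR description), an empty region selects the global API and a non-empty region selects the regional one. The helper name and signature are illustrative.

// insertBackendService dispatches to the global or regional compute API based on region.
func insertBackendService(ctx context.Context, svc *compute.Service, project, region string, bs *compute.BackendService) (*compute.Operation, error) {
	if region == "" {
		// Global backend services back TCP/SSL proxy and global external LBs.
		return svc.BackendServices.Insert(project, bs).Context(ctx).Do()
	}
	// Regional backend services back backend-service-based network LBs and internal LBs.
	return svc.RegionBackendServices.Insert(project, region, bs).Context(ctx).Do()
}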

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Feb 20, 2024
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 25, 2024
@hakman
Member

hakman commented May 25, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 25, 2024
@upodroid
Member Author

Hmm, I'll get the merge conflicts fixed and ship my open PRs in early June

@k8s-ci-robot
Contributor

@upodroid: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kops-verify-hashes bc4f045 link true /test pull-kops-verify-hashes
pull-kops-build 5c91426 link true /test pull-kops-build
pull-kops-verify-govet 5c91426 link true /test pull-kops-verify-govet
pull-kops-test 5c91426 link true /test pull-kops-test
pull-kops-e2e-k8s-gce-cilium 5c91426 link true /test pull-kops-e2e-k8s-gce-cilium
pull-kops-e2e-k8s-gce-ipalias 5c91426 link true /test pull-kops-e2e-k8s-gce-ipalias
pull-kops-verify-golangci-lint 5c91426 link true /test pull-kops-verify-golangci-lint
pull-kops-e2e-k8s-aws-calico 5c91426 link true /test pull-kops-e2e-k8s-aws-calico
pull-kops-verify-terraform 5c91426 link true /test pull-kops-verify-terraform
pull-kops-e2e-k8s-aws-calico-k8s-infra 5c91426 link true /test pull-kops-e2e-k8s-aws-calico-k8s-infra

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 3, 2024
Labels
  • area/api
  • area/provider/gcp - Issues or PRs related to gcp provider
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/office-hours
  • lifecycle/rotten - Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-rebase - Indicates a PR cannot be merged because it has merge conflicts with HEAD.
  • size/XL - Denotes a PR that changes 500-999 lines, ignoring generated files.

6 participants