
# Ingress FAQ

This page contains a general FAQ for Ingress; there is also a per-backend FAQ in this directory with site-specific information.

## Table of Contents

- [How is Ingress different from a Service?](#how-is-ingress-different-from-a-service)
- [I created an Ingress and nothing happens, what now?](#i-created-an-ingress-and-nothing-happens-what-now)
- [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller)
- [Are Ingress controllers namespaced?](#are-ingress-controllers-namespaced)
- [How do I disable an Ingress controller?](#how-do-i-disable-an-ingress-controller)
- [How do I run multiple Ingress controllers in the same cluster?](#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
- [How do I contribute a backend to the generic Ingress controller?](#how-do-i-contribute-a-backend-to-the-generic-ingress-controller)
- [Is there a catalog of existing Ingress controllers?](#is-there-a-catalog-of-existing-ingress-controllers)
- [How are the Ingress controllers tested?](#how-are-the-ingress-controllers-tested)
- [An Ingress controller E2E is failing, what should I do?](#an-ingress-controller-e2e-is-failing-what-should-i-do)
- [Is there a roadmap for Ingress features?](#is-there-a-roadmap-for-ingress-features)

## How is Ingress different from a Service?

The Kubernetes Service is an abstraction over endpoints (pod-ip:port pairings). The Ingress is an abstraction over Services. This doesn't mean all Ingress controllers must route through a Service; rather, routing, security, and auth configuration is represented in the Ingress resource per Service, and not per pod. As long as this configuration is respected, a given Ingress controller is free to route to the DNS name of a Service, the VIP, a NodePort, or directly to the Service's endpoints.
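
For illustration, here is a minimal sketch of an Ingress that references a backend purely by Service name and port; the names (`echoheaders`, `echomap`) and the host/path are placeholders, not part of any shipped example:

```yaml
# A hypothetical Service and an Ingress that routes to it by name.
# The controller may resolve this to the Service VIP, a NodePort,
# or the Service's endpoints directly, as described above.
apiVersion: v1
kind: Service
metadata:
  name: echoheaders
spec:
  selector:
    app: echoheaders
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders
          servicePort: 80
```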

## I created an Ingress and nothing happens, what now?

Run describe on the Ingress. If you see create/add events, you have an Ingress controller running in the cluster; otherwise, you either need to deploy or restart your Ingress controller. If the events associated with an Ingress are insufficient to debug, consult the controller-specific FAQ.
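
As a quick sketch, assuming an Ingress named `test` in the `default` namespace (substitute your own names):

```sh
# Describe the Ingress and check the Events section for create/add entries.
kubectl describe ingress test --namespace default

# Recent events in the namespace can also reveal controller activity.
kubectl get events --namespace default
```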

## How do I deploy an Ingress controller?

The following platforms currently deploy an Ingress controller addon: GCE, GKE, minikube. If you're running on any other platform, you can deploy an Ingress controller by following this example.
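
As a rough sketch, deploying a controller usually amounts to creating its manifest and confirming that its pod comes up; the filename below is a placeholder for whichever controller example you follow:

```sh
# Placeholder filename; use the manifest from the controller example you chose.
kubectl create -f ingress-controller.yaml

# Verify the controller pod is running somewhere in the cluster.
kubectl get pods --all-namespaces | grep -i ingress
```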

## Are Ingress controllers namespaced?

Ingress is namespaced: two Ingress objects can have the same name in two different namespaces, and each may only point to Services in its own namespace. An admin can deploy an Ingress controller such that it only satisfies Ingress from a given namespace, but by default, controllers will watch the entire Kubernetes cluster for unsatisfied Ingress.
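
To illustrate, here is a hypothetical pair of Ingresses (the names `web`, `team-a`, `team-b`, and `app` are placeholders) that share a name across namespaces, each resolving its backend within its own namespace:

```yaml
# Both Ingresses may be named "web" because they live in different namespaces.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  namespace: team-a
spec:
  backend:
    serviceName: app        # resolves to the "app" Service in team-a
    servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  namespace: team-b
spec:
  backend:
    serviceName: app        # a different "app" Service, in team-b
    servicePort: 80
```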

## How do I disable an Ingress controller?

Either shut down the controller satisfying the Ingress, or use the `kubernetes.io/ingress.class` annotation, as follows:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
```

Setting the annotation to any value other than "gce" or the empty string will force the GCE controller to ignore your Ingress. The same applies to the nginx controller.

To completely stop the Ingress controller on GCE/GKE, please see this FAQ.

## How do I run multiple Ingress controllers in the same cluster?

Multiple Ingress controllers can co-exist and key off the ingress-class annotation, as shown in this FAQ, as well as in this example.
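
As a sketch, two Ingresses in the same cluster might be claimed by different controllers purely through the annotation value; the resource and Service names below are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: public-site
  annotations:
    kubernetes.io/ingress.class: "gce"    # claimed by the GCE controller
spec:
  backend:
    serviceName: frontend
    servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-tools
  annotations:
    kubernetes.io/ingress.class: "nginx"  # claimed by the nginx controller
spec:
  backend:
    serviceName: tools
    servicePort: 80
```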

## How do I contribute a backend to the generic Ingress controller?

First check the catalog to make sure you really need to write one.

1. Write a generic backend
2. Keep it in your own repo and make sure it passes the conformance suite
3. Submit an example (or examples) in the appropriate subdirectories here
4. Add it to the catalog

## Is there a catalog of existing Ingress controllers?

Yes, a non-comprehensive catalog exists.

## How are the Ingress controllers tested?

Testing for the Ingress controllers is divided between:

The configuration for the Jenkins e2e tests is located here. The Ingress e2e tests are located here; each controller added to that suite must consistently pass the conformance suite.

## An Ingress controller E2E is failing, what should I do?

First, identify the reason for failure.

- Look at the build log; if there's nothing obvious, search for quota issues.
  - Find events logged by the controller in the build log.
  - Ctrl+F "quota" in the build log.
- If the failure is in the GCE controller:
  - Navigate to the test artifacts for that run and look at glbc.log.
  - Look up the PROJECT= line in the build log, and check that project for quota issues (`gcloud compute project-info describe --project <project-name>`, or navigate to the cloud console > Compute > Quotas).
- If the failure is for a non-cloud controller (e.g. nginx):
  - Make sure the firewall rules required by the controller are opened on the right ports (80/443), since the Jenkins builders run outside the Kubernetes cluster.

Note that you currently need help from a test-infra maintainer to access the GCE test project. If you think the failures are related to project quota, clean up leaked resources and bump up quota before debugging the leak.
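
A quick sketch of the quota check, assuming the project name taken from the PROJECT= line above (the resource listings shown are just common suspects, not an exhaustive list):

```sh
# Inspect overall quota usage for the test project.
gcloud compute project-info describe --project <project-name>

# Leaked load-balancing resources are a common source of quota exhaustion.
gcloud compute forwarding-rules list --project <project-name>
gcloud compute target-http-proxies list --project <project-name>
```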

If the preceding identification process fails, it's likely that the Ingress API is broken upstream. Try to set up a dev environment from HEAD and create an Ingress. You should be deploying the latest release image to the local cluster.

If neither of these two strategies produces anything useful, you can either start reverting images or dig into the underlying infrastructure the e2e tests run on for more nefarious issues (like permission and scope changes for some set of nodes on which an Ingress controller is running).

## Is there a roadmap for Ingress features?

The community is working on it. There are currently too many efforts in flight to serialize into a flat roadmap. You might be interested in the following issues:

As well as the issues in this repo.