A local Kubernetes setup using Kind with Podman and Cilium, Flagger and Flux, as a template for your (and my) canary deployment experiments.
- Create a new repository from this template in your own profile.
- Clone it locally and step into its directory.
- Run `make` to prepare your workstation and the playground.

Please have a look at the top section of the `Makefile` and change its configuration to adapt it to your needs.
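For example (the repository URL and directory name are placeholders for your copy of the template):

```sh
# Clone your copy of the template and prepare everything in one go
git clone https://github.com/<your-user>/<your-repo>.git
cd <your-repo>
make
```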
The included `Makefile` supports the following tasks:

- `make prepare`: Downloads the `kubectl`, `kind`, `flux` and `cilium` CLIs to `~/.fks` for later use. It doesn't manipulate any of your regular local setup.
- `make pre-check`: Validates the required setup: podman, kubectl, kind and flux.
- `make new`: Creates a new k8s kind cluster with Cilium instead of kube-proxy and points the local kube context to it.
- `make bootstrap`: Bootstraps Flux to the local cluster, targeting the (this) GitHub repository you're using and the currently checked out branch. The `GITHUB_TOKEN` env variable needs to be configured with a repo-scoped GitHub personal access token (see the example after this list).
- `make check`: Runs automatically before bootstrap, validating the kube context and printing the targeted GitHub repository.
- `make wait`: Blocks until reconciliation has finished and the cluster is ready to use.
- `make reconcile`: Triggers a `flux reconcile` of the root kustomization `flux-system`.
- `make kube-ctx`: Configures the local kubectl context (done by `make new`, just in case it changed).
- `make clean`: Removes the local kind cluster.
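The bootstrap step could then look like this (the token value is a placeholder for your own PAT):

```sh
# Flux needs a repo-scoped personal access token to commit its manifests
export GITHUB_TOKEN=ghp_xxxxxxxx   # placeholder, use your own token
make bootstrap
```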
- The Kind config binds to host ports 8080, 8088 and 8443, so please free those ports up or change the config.
- Basic GitOps setup with:
  - kubernetes-dashboard, accessible via https:// on an IP allocated by the load balancer; check `kubectl get service` for the IP address.
  - Prometheus and Grafana via `http://IP:8080/[prometheus|grafana]` on another IP allocated by the load balancer.
  - Cilium UI, accessible at `http://localhost:8088`.
  - An apps folder targeted by a respective kustomization, with an example application that gets deployed in a canary way.
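One way (of several) to find those allocated IPs is filtering the service list:

```sh
# Show all LoadBalancer services and the EXTERNAL-IPs MetalLB assigned
kubectl get service --all-namespaces | grep LoadBalancer
```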
While `kubectl`, `kind`, `cilium` and `flux` are managed with this repository (for version compatibility of everything in here), your local setup has to fulfill the following:

- `podman` as container runtime
- `make` for orchestrating the setup
- `curl` for downloading the managed CLIs
- some common tools used by the Makefile: `jq`, `cut`, `awk`, `sed`, `which`, `tar` (with `gzip` support)
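A quick hand-rolled check that these tools are on your PATH (this loop is a sketch of mine, not part of `make pre-check`):

```sh
# Report any required host tool that is missing
for tool in podman make curl jq cut awk sed which tar; do
  which "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```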
For exposing load balancer services on virtual IPs, MetalLB is used. The `make new` command inspects the podman/kind network and configures the appropriate subnet in its IPAddressPool.
If you run into issues, please check that the auto-detected subnet matches your configuration and adjust it if it doesn't.
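To eyeball the detected subnet yourself, you can inspect the network directly (assuming the podman network is named `kind`, the default kind uses):

```sh
# Print the subnet(s) of the kind network; the MetalLB IPAddressPool
# must lie within one of them
podman network inspect kind | jq '.[0].subnets'
```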
Play with Flagger and watch it play.
Bring it to life (after forking and cloning this repo) using:

```sh
make prepare
make new
make bootstrap
```
- Get the host-mapped load balancer IPs: `kubectl get service` (EXTERNAL-IP, e.g. 10.89.0.240 for port 80; 10.89.0.241 for port 443)
- Plain resources deployed Examiner: http://10.89.0.240/examine-plain
- Helm deployed Examiner: http://10.89.0.240/examine-helm
- Cilium Hubble UI (exposed directly to the host): http://localhost:8088
- Kubernetes Dashboard: https://10.89.0.241/ ("skip login")
- Prometheus: http://10.89.0.240/prometheus
- Grafana: http://10.89.0.240/grafana
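If you need one of those IPs in a script, a jsonpath query works as well (service name and namespace below are placeholders):

```sh
# Extract the first allocated IP of a LoadBalancer service
kubectl get service my-service -n my-namespace \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```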
- To switch between your own HttpRoute config and Flagger-managed routing, comment route.yaml and canary.yaml in/out here.
- Produce some traffic: `hey -z 60m http://10.89.0.240/examine-plain`
- Watch it in Grafana
- and in the Hubble UI
- Make some changes (increase the error rate) by updating its config, `make reconcile` to sync it, and `kubectl rollout restart deployment -n plain examiner` to roll it out (see the sketch after this list)
- Watch it fail immediately
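Put together, that roll-out step could look like this (`kubectl rollout status` is just a convenient way to follow it, not required by the setup):

```sh
# Sync the changed config, restart the workload, and follow the rollout
make reconcile
kubectl rollout restart deployment -n plain examiner
kubectl rollout status deployment -n plain examiner
```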
Ok, first let's repair it by reverting the config change.
- Activate the canary by commenting routes.yaml out and canary.yaml in
- Check what happened in the Kubernetes Dashboard or via `kubectl get all -n plain`
- Check the HttpRoute (`kubectl get httproute -n plain -o yaml`) and the Flagger logs
- Produce traffic again: `hey -z 60m http://10.89.0.240/examine-plain`
- Check the Hubble UI and Grafana
- Make it fail again with server errors
- No need for a manual restart this time - check the Flagger logs (see the sketch below)
- Same for latency
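A way to follow what Flagger is doing (assuming Flagger runs in a namespace named `flagger-system`; adjust to your setup):

```sh
# Watch the canary analysis advance (or roll back), and tail Flagger's logs
kubectl get canary -n plain -w
kubectl logs -n flagger-system deployment/flagger -f
```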
Please don't hesitate to file any issues or propose enhancements to this repo.