Add deps back in:
`deps` in conjunction with `run: once` keeps dependencies from running as many
times as they did before. Update the README and add a one-time output of next
steps.

Signed-off-by: Jacob Weinstock <[email protected]>
jacobweinstock committed Jun 10, 2024
1 parent 73e1b8b commit 8ebe15b
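
For context, go-task runs a task listed under `deps` every time a dependent task needs it unless that task is marked `run: once`, in which case it executes at most once per invocation. A minimal sketch of the pattern this commit applies (task names borrowed from the diff below, bodies simplified):

```yaml
version: '3'

tasks:
  update-state:
    run: once                # executed at most once per `task` invocation
    cmds:
      - echo "updating state"

  hardware-cr:
    run: once
    deps: [update-state]     # shared dependency
    cmds:
      - echo "generating Hardware custom resources"

  bmc-secret:
    run: once
    deps: [update-state]     # without run: once, update-state would execute again here
    cmds:
      - echo "generating the BMC secret"
```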
Showing 5 changed files with 128 additions and 25 deletions.
35 changes: 34 additions & 1 deletion playground/README.md
@@ -1,8 +1,41 @@
# CAPT Playground

The CAPT playground is a tool that creates a local CAPT deployment. This includes a Kubernetes cluster (KinD), the Tinkerbell stack, all CAPI and CAPT components, virtual machines that will be used to create a workload cluster, and a virtual BMC to manage the VMs.

Start by reviewing and installing the [prerequisites](#prerequisites), then review and customize the [configuration](config.yaml) as needed.

## Prerequisites

### Binaries

* libvirtd (libvirt) >= 8.0.0
* Docker >= 24.0.7
* Helm >= v3.13.1
* KinD >= v0.20.0
* clusterctl >= v1.6.0
* kubectl >= v1.28.2
* virt-install >= 4.0.0
* task >= 3.37.2

### Hardware

* at least 60GB of free and fast disk space (etcd is very sensitive to disk I/O)
* at least 8GB of free RAM
* at least 4 CPU cores
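
A quick way to sanity-check the host before creating the playground, expressed as a hypothetical helper task (not part of the playground; the libvirt storage path is an assumption and may differ per distribution):

```yaml
version: '3'

tasks:
  check-host:
    summary: Print disk, memory, and CPU information for a quick eyeball check.
    cmds:
      - df -h /var/lib/libvirt/images   # want >= 60GB free on fast storage (assumed libvirt pool path)
      - free -h                         # want >= 8GB of free RAM
      - nproc                           # want >= 4 CPU cores
```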

## Usage

The CAPT playground can be run as a standalone binary or via Docker.

Create the CAPT playground:

```bash
task create-playground
```

Delete the CAPT playground:

```bash
task delete-playground
```

### Standalone

41 changes: 41 additions & 0 deletions playground/Taskfile.yaml
@@ -26,6 +26,7 @@ tasks:
- task: ensure-output-dir
- task: generate-state
- task: create:playground-ordered
- task: next-steps

delete-playground:
silent: true
@@ -60,8 +61,10 @@ tasks:
Create the output directory.
cmds:
- mkdir -p {{.OUTPUT_DIR}}
- mkdir -p {{.OUTPUT_DIR}}/xdg
status:
- echo ;[ -d {{.OUTPUT_DIR}} ]
- echo ;[ -d {{.OUTPUT_DIR}}/xdg ]

generate-state:
summary: |
@@ -72,3 +75,41 @@
- .state
cmds:
- ./scripts/generate_state.sh config.yaml .state

next-steps:
silent: true
summary: |
Next steps after creating the CAPT playground.
vars:
NAMESPACE:
sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}}
NODE_BASE:
sh: yq eval '.vm.baseName' {{.STATE_FILE_FQ_PATH}}
CLUSTER_NAME:
sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}}
KIND_KUBECONFIG:
sh: yq eval '.kind.kubeconfig' {{.STATE_FILE_FQ_PATH}}
cmds:
- |
echo
echo The workload cluster is now being created.
echo Once the cluster nodes are up and running, you will need to deploy a CNI for the cluster to be fully functional.
echo
echo 1. Watch and wait for the first control plane node workflow to reach the STATE_SUCCESS state:
echo "KUBECONFIG={{.KIND_KUBECONFIG}} kubectl get workflows -n {{.NAMESPACE}} -w"
echo
echo
echo 2. Watch and wait for the Kubernetes API server to be ready and responding:
echo "until KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get node; do echo 'Waiting for Kube API server to respond...'; sleep 3; done"
echo
echo 3. Deploy a CNI
echo Cilium
echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig cilium install"
echo or kube-router
echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml"
echo
echo 4. Watch and wait for all nodes to join the cluster and be ready:
echo "KUBECONFIG={{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig kubectl get nodes -w"
- touch {{.OUTPUT_DIR}}/.next-steps-displayed
status:
- echo ;[ -f {{.OUTPUT_DIR}}/.next-steps-displayed ]
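
The one-time output above works by touching a marker file and letting Task's `status:` check mark the task up to date on later runs; the same pattern in isolation (hypothetical task and file names):

```yaml
version: '3'

tasks:
  show-once:
    silent: true
    cmds:
      - echo "printed only on the first run"
      - touch .shown      # marker recording that the message was displayed
    status:
      - '[ -f .shown ]'   # when this exits 0, Task skips cmds on subsequent runs
```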
45 changes: 22 additions & 23 deletions playground/tasks/Taskfile-capi.yaml
@@ -6,22 +6,14 @@ tasks:
summary: |
CAPI tasks run in order of dependency.
cmds:
- task: validate-deps
- task: create-cluster-yaml
- task: init
- task: generate-cluster-yaml
- task: create-kustomize-file
- task: apply-kustomization

validate-deps:
summary: |
Validate required dependencies for virtual bmc tasks.
silent: true
cmds:
- for: ['clusterctl', "kubectl"]
cmd: command -v {{.ITEM}} >/dev/null || echo "'{{.ITEM}}' was not found in the \$PATH, please ensure it is installed."

create-cluster-yaml:
run: once
summary: |
Create the cluster yaml.
env:
@@ -36,19 +28,21 @@
- grep -q "$CAPT_VERSION" {{.OUTPUT_DIR}}/clusterctl.yaml

init:
run: once
deps: [create-cluster-yaml]
summary: |
Initialize the cluster.
env:
TINKERBELL_IP:
sh: yq eval '.tinkerbell.vip' {{.STATE_FILE_FQ_PATH}}
CLUSTERCTL_DISABLE_VERSIONCHECK: true
XDG_CONFIG_HOME: /tmp/xdg
XDG_CONFIG_DIRS: /tmp/xdg
XDG_STATE_HOME: /tmp/xdg
XDG_CACHE_HOME: /tmp/xdg
XDG_RUNTIME_DIR: /tmp/xdg
XDG_DATA_HOME: /tmp/xdg
XDG_DATA_DIRS: /tmp/xdg
XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg"
XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg"
XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg"
vars:
OUTPUT_DIR:
sh: echo $(yq eval '.outputDir' config.yaml)
@@ -62,6 +56,8 @@
- expected=1; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get pods -n capt-system |grep -ce "capt-controller"); [[ "$got" == "$expected" ]]

generate-cluster-yaml:
run: once
deps: [init]
summary: |
Generate the cluster yaml.
env:
@@ -70,13 +66,13 @@
POD_CIDR:
sh: yq eval '.cluster.podCIDR' {{.STATE_FILE_FQ_PATH}}
CLUSTERCTL_DISABLE_VERSIONCHECK: true
XDG_CONFIG_HOME: /tmp/xdg
XDG_CONFIG_DIRS: /tmp/xdg
XDG_STATE_HOME: /tmp/xdg
XDG_CACHE_HOME: /tmp/xdg
XDG_RUNTIME_DIR: /tmp/xdg
XDG_DATA_HOME: /tmp/xdg
XDG_DATA_DIRS: /tmp/xdg
XDG_CONFIG_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_CONFIG_DIRS: "{{.OUTPUT_DIR}}/xdg"
XDG_STATE_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_CACHE_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_RUNTIME_DIR: "{{.OUTPUT_DIR}}/xdg"
XDG_DATA_HOME: "{{.OUTPUT_DIR}}/xdg"
XDG_DATA_DIRS: "{{.OUTPUT_DIR}}/xdg"
vars:
CLUSTER_NAME:
sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}}
@@ -98,6 +94,7 @@
- grep -q "{{.KUBE_VERSION}}" {{.OUTPUT_DIR}}/prekustomization.yaml

create-kustomize-file:
run: once
summary: |
Kustomize file for the CAPI generated config file (prekustomization.yaml).
env:
@@ -128,6 +125,8 @@
- envsubst "$(printf '${%s} ' $(env | cut -d'=' -f1))" < templates/kustomization.tmpl > {{.OUTPUT_DIR}}/kustomization.yaml

apply-kustomization:
run: once
deps: [generate-cluster-yaml, create-kustomize-file]
summary: |
Kustomize the cluster yaml.
vars:
Expand Down
27 changes: 26 additions & 1 deletion playground/tasks/Taskfile-create.yaml
Expand Up @@ -28,6 +28,7 @@ tasks:
- task: get-workload-cluster-kubeconfig

kind-cluster:
run: once
summary: |
Install a KinD cluster.
vars:
@@ -43,12 +44,16 @@

update-state:
silent: true
run: once
deps: [kind-cluster]
summary: |
Update the state file with the KinD cluster information. Should be run only after the KinD cluster is created.
cmds:
- ./scripts/update_state.sh "{{.STATE_FILE_FQ_PATH}}"

hardware-cr:
run: once
deps: [update-state]
summary: |
Create Hardware objects.
sources:
@@ -59,6 +64,8 @@
- ./scripts/generate_hardware.sh {{.STATE_FILE_FQ_PATH}}

bmc-machine-cr:
run: once
deps: [vbmc:update-state]
summary: |
Create BMC Machine objects.
sources:
@@ -69,6 +76,8 @@
- ./scripts/generate_bmc.sh {{.STATE_FILE_FQ_PATH}}

bmc-secret:
run: once
deps: [update-state]
summary: |
Create the BMC secret.
sources:
@@ -79,6 +88,8 @@
- ./scripts/generate_secret.sh {{.STATE_FILE_FQ_PATH}}

deploy-tinkerbell-helm-chart:
run: once
deps: [kind-cluster, update-state]
summary: |
Deploy the Tinkerbell Helm chart.
vars:
@@ -99,6 +110,8 @@
- KUBECONFIG="{{.KUBECONFIG}}" helm list -n {{.NAMESPACE}} | grep -q {{.CHART_NAME}}

vms:
run: once
deps: [update-state, vbmc:update-state]
summary: |
Create Libvirt VMs.
vars:
@@ -112,6 +125,8 @@
- expected={{.TOTAL_HARDWARE}}; got=$(virsh --connect qemu:///system list --all --name |grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]]

apply-bmc-secret:
run: once
deps: [kind-cluster, bmc-secret]
summary: |
Apply the BMC secret.
vars:
@@ -125,6 +140,8 @@
- KUBECONFIG="{{.KUBECONFIG}}" kubectl get secret bmc-creds -n {{.NAMESPACE}}

apply-bmc-machines:
run: once
deps: [kind-cluster, bmc-machine-cr]
summary: |
Apply the BMC machines.
vars:
@@ -145,6 +162,8 @@
- expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get machines.bmc -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]]

apply-hardware:
run: once
deps: [kind-cluster, hardware-cr]
summary: |
Apply the hardware.
vars:
@@ -165,6 +184,8 @@
- expected={{.TOTAL_HARDWARE}}; got=$(KUBECONFIG="{{.KUBECONFIG}}" kubectl get hardware -n {{.NAMESPACE}} | grep -ce "{{.VM_BASE_NAME}}*"); [[ "$got" == "$expected" ]]

create-workload-cluster:
run: once
deps: [kind-cluster, capi:ordered]
summary: |
Create the workload cluster by applying the generated manifest file.
vars:
@@ -175,11 +196,14 @@
NAMESPACE:
sh: yq eval '.namespace' {{.STATE_FILE_FQ_PATH}}
cmds:
- KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml
- until KUBECONFIG="{{.KUBECONFIG}}" kubectl apply -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.yaml >/dev/null 2>&1; do echo "Trying kubectl apply again..."; sleep 1; done
- echo "Workload manifest applied to cluster."
status:
- KUBECONFIG="{{.KUBECONFIG}}" kubectl get -n {{.NAMESPACE}} cluster {{.CLUSTER_NAME}}

get-workload-cluster-kubeconfig:
run: once
deps: [create-workload-cluster]
summary: |
Get the workload cluster's kubeconfig.
vars:
@@ -191,5 +215,6 @@
sh: yq eval '.clusterName' {{.STATE_FILE_FQ_PATH}}
cmds:
- until KUBECONFIG="{{.KUBECONFIG}}" clusterctl get kubeconfig -n {{.NAMESPACE}} {{.CLUSTER_NAME}} > {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig; do echo "Waiting for workload cluster kubeconfig to be available..."; sleep 1; done
- echo "Workload cluster kubeconfig saved to {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig."
status:
- echo ; [ -f {{.OUTPUT_DIR}}/{{.CLUSTER_NAME}}.kubeconfig ]
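
Several of the new deps reference tasks defined in other Taskfiles (for example `vbmc:update-state` and `capi:ordered`), which relies on Task's `includes` namespacing; a minimal sketch under assumed file paths:

```yaml
version: '3'

includes:
  vbmc: ./tasks/Taskfile-vbmc.yaml   # its tasks become addressable as vbmc:<name>

tasks:
  vms:
    run: once
    deps: [vbmc:update-state]        # runs the included file's update-state (once) before this task
    cmds:
      - echo "creating libvirt VMs"
```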
5 changes: 5 additions & 0 deletions playground/tasks/Taskfile-vbmc.yaml
@@ -3,6 +3,7 @@ version: '3'
tasks:

start-server:
run: once
summary: |
Start the virtualbmc server. Requires the "kind" docker network to exist.
vars:
@@ -16,6 +17,8 @@ tasks:
- docker ps | grep -q {{.VBMC_CONTAINER_NAME}}

start-vbmcs:
run: once
deps: [start-server]
summary: |
Register and start the virtualbmc servers. Requires that the virtual machines exist.
vars:
@@ -27,6 +30,8 @@ tasks:
- expected=$(yq e '.totalNodes' {{.STATE_FILE_FQ_PATH}}); got=$(docker exec {{.VBMC_NAME}} vbmc list | grep -c "running" || :); [[ "$got" == "$expected" ]]

update-state:
run: once
deps: [start-server]
summary: |
Update the state file with the virtual bmc server information.
vars:
