Enhance helper Pod interface and configuration.
* Make the helper Pod receive its input solely via environment variables instead of arguments (this changes the interface in a non-backward-compatible way, but is simpler to use and leaves more room for backward-compatible changes in the future).
* Add the manager options `--pvc-annotation[-required]` to pass annotations through from the PVC to the PV and to the helper Pod.
* Merge the helper Pod's `data` VolumeMount with the one provided in the template so that `mountPropagation` can be specified within the template.
* Rename `helperPod.yaml` to `helper-pod.yaml` (more convenient, and since this change breaks the interface anyway, it is a good opportunity to break this as well).
* Expose the `--helper-pod-timeout` option.
* Provide a basic usage example of the new features (`examples/cache`).
* Support forceful termination of the manager binary by pressing Ctrl+C twice (otherwise this is annoying during development).

Closes rancher#164
Closes rancher#165

Signed-off-by: Max Goltzsche <[email protected]>
mgoltzsche committed Dec 30, 2020
1 parent d253f2b commit 9818353
Showing 28 changed files with 675 additions and 478 deletions.
1 change: 1 addition & 0 deletions .dockerignore
@@ -1,3 +1,4 @@
./.dapper
./.cache
./dist
./examples/cache/testmount
2 changes: 2 additions & 0 deletions .gitignore
@@ -5,3 +5,5 @@
*.swp
.idea
.vscode/
local-path-provisioner
/examples/cache/testmount
62 changes: 21 additions & 41 deletions README.md
@@ -105,7 +105,7 @@ Now you've verified that the provisioner works as expected.

### Customize the ConfigMap

The configuration of the provisioner is a json file `config.json` and two bash scripts `setup` and `teardown`, stored in the a config map, e.g.:
The configuration of the provisioner is a JSON file `config.json`, a Pod template `helper-pod.yaml` and two bash scripts `setup` and `teardown`, stored in a ConfigMap, e.g.:
```
kind: ConfigMap
apiVersion: v1
@@ -132,41 +132,11 @@ data:
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
rm -rf "$VOL_DIR"
helper-pod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
@@ -195,16 +165,26 @@ The configuration must obey following rules:
3. No duplicate paths allowed for one node.
4. No duplicate node allowed.

#### Scripts `setup` and `teardown` and `helperPod.yaml`
#### Scripts `setup` and `teardown` and the `helper-pod.yaml` template

The script `setup` will be executed before the volume is created, to prepare the directory on the node for the volume.
* The `setup` script is run before the volume is created, to prepare the volume directory on the node.
* The `teardown` script is run after the volume is deleted, to clean up the volume directory on the node.
* The `helper-pod.yaml` template is used to create a helper Pod that runs the `setup` or `teardown` script.

The script `teardown` will be executed after the volume is deleted, to cleanup the directory on the node for the volume.
The scripts receive their input as environment variables:

The yaml file `helperPod.yaml` will be created by local-path-storage to execute `setup` or `teardown` script with three paramemters `-p <path> -s <size> -m <mode>` :
* path: the absolute path provisioned on the node
- size: pvc.Spec.resources.requests.storage in bytes
* mode: pvc.Spec.VolumeMode
| Environment variable | Description |
| -------------------- | ----------- |
| `VOL_DIR` | Volume directory that should be created or removed. |
| `VOL_NAME` | Name of the PersistentVolume. |
| `VOL_TYPE` | Type of the PersistentVolume (`Block` or `Filesystem`). |
| `VOL_SIZE_BYTES` | Requested volume size in bytes. |
| `PVC_NAME` | Name of the PersistentVolumeClaim. |
| `PVC_NAMESPACE` | Namespace of the PersistentVolumeClaim. |
| `PVC_ANNOTATION` | Value of the PersistentVolumeClaim annotation specified by the manager's `--pvc-annotation` option. |
| `PVC_ANNOTATION_{SUFFIX}` | Value of the PersistentVolumeClaim annotation whose name starts with the prefix specified by the manager's `--pvc-annotation` option. `SUFFIX` is the normalized part of the annotation name after the `/`. E.g. if `local-path-provisioner` is run with `--pvc-annotation=storage.example.org`, the PVC annotation `storage.example.org/cache-name` is passed through to the Pod as the env var `PVC_ANNOTATION_CACHE_NAME`. If the helper Pod requires such an annotation, `local-path-provisioner` can be run with e.g. `--pvc-annotation-required=storage.example.org/cache-name`. |

Additional environment variables, as well as defaults for the optional `PVC_ANNOTATION*` variables, can be specified within the helper Pod template.
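
For illustration, a minimal `setup`/`teardown` pair using this interface might look as follows. This is only a sketch, not the shipped configuration; it assumes the provisioner runs with `--pvc-annotation=storage.example.org` so that `PVC_ANNOTATION_CACHE_NAME` is populated from a `storage.example.org/cache-name` PVC annotation:

```yaml
setup: |-
  #!/bin/sh
  set -eu
  # VOL_DIR, VOL_SIZE_BYTES and PVC_ANNOTATION_CACHE_NAME are provided by the
  # provisioner as environment variables (see the table above).
  echo "provisioning $VOL_SIZE_BYTES bytes for cache '${PVC_ANNOTATION_CACHE_NAME:-default}' at $VOL_DIR"
  mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
  #!/bin/sh
  set -eu
  rm -rf "$VOL_DIR"
```

A default for an optional `PVC_ANNOTATION*` variable could be declared as a regular `env` entry in the `helper-pod.yaml` template, e.g. (hypothetical):

```yaml
helper-pod.yaml: |-
  apiVersion: v1
  kind: Pod
  metadata:
    name: helper-pod
  spec:
    containers:
    - name: helper-pod
      image: busybox
      env:
      - name: PVC_ANNOTATION_CACHE_NAME
        value: default-cache  # presumably used when the PVC does not carry the annotation
```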

#### Reloading

36 changes: 3 additions & 33 deletions debug/config.yaml
@@ -39,41 +39,11 @@ data:
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
rm -rf "$VOL_DIR"
helper-pod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
2 changes: 1 addition & 1 deletion deploy/chart/templates/configmap.yaml
@@ -13,6 +13,6 @@ data:
{{ .Values.configmap.setup | nindent 4 }}
teardown: |-
{{ .Values.configmap.teardown | nindent 4 }}
helperPod.yaml: |-
helper-pod.yaml: |-
{{ .Values.configmap.helperPod | nindent 4 }}
35 changes: 2 additions & 33 deletions deploy/chart/values.yaml
@@ -93,41 +93,10 @@ configmap:
# specify the custom script for setup and teardown
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
# specify the custom helper pod yaml
rm -rf "$VOL_DIR"
helperPod: |-
apiVersion: v1
kind: Pod
38 changes: 3 additions & 35 deletions deploy/example-config.yaml
@@ -23,41 +23,11 @@ data:
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
rm -rf "$VOL_DIR"
helper-pod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
@@ -66,5 +36,3 @@ data:
containers:
- name: helper-pod
image: busybox
36 changes: 3 additions & 33 deletions deploy/local-path-storage.yaml
@@ -104,41 +104,11 @@ data:
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
helperPod.yaml: |-
rm -rf "$VOL_DIR"
helper-pod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
56 changes: 56 additions & 0 deletions examples/cache/README.md
@@ -0,0 +1,56 @@
# Example cache provisioner

This example shows how to use short-lived PersistentVolumes for caching.
A [buildah](https://github.com/containers/buildah)-based helper Pod provisions a container file system from an image as a PersistentVolume and commits it when the volume is deprovisioned.
Users can select the desired cache (i.e. the image) using a PersistentVolumeClaim annotation that is passed through to the helper Pod as an environment variable, as shown in the sketch below.
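
For instance, a claim selecting a cache could carry the annotation like this (a hypothetical manifest; the names and storage class are illustrative and not necessarily those used by `test-pod.yaml`):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-cache
  annotations:
    # passed to the helper Pod as PVC_ANNOTATION_CACHE_NAME when the
    # provisioner runs with --pvc-annotation=storage.example.org
    storage.example.org/cache-name: my-cache
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-cache  # assumed storage class name
  resources:
    requests:
      storage: 1Gi
```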

While it is not part of this example, caches could also be synchronized across nodes using an image registry.
The [cache-provisioner](https://github.com/mgoltzsche/cache-provisioner) project aims to achieve this as well as other cache management features.

## Test

### Test the helper Pod separately

The helper Pod can be tested separately using Docker locally:
```sh
./helper-test.sh
```

### Test the integration

_Please note that a non-overlayfs storage directory (`/data/example-cache-storage`) must be configured._
_The provided configuration is known to work with minikube (`minikube start`) and kind (`kind create cluster; kind export kubeconfig`)._

Install the example kustomization:
```sh
kustomize build . | kubectl apply -f -
```

If you want to test changes to the `local-path-provisioner` binary locally:
```sh
kubectl delete -n example-cache-storage deploy local-path-provisioner
(
cd ../..
go build .
./local-path-provisioner --debug start \
--namespace=example-cache-storage \
--configmap-name=local-path-config \
--service-account-name=local-path-provisioner-service-account \
--provisioner-name=storage.example.org/cache \
--pvc-annotation=storage.example.org \
--pvc-annotation-required=storage.example.org/cache-name
)
```

In another terminal, create an example Pod and PVC; the Pod pulls and runs a container image using [podman](https://github.com/containers/podman):
```sh
kubectl apply -f test-pod.yaml
kubectl logs -f cached-build
```

If the Pod and PVC are removed and recreated, you can observe that, during the second Pod run on the same node, the image for the nested container does not need to be pulled again since it is cached:
```sh
kubectl delete -f test-pod.yaml
kubectl apply -f test-pod.yaml
kubectl logs -f cached-build
```
16 changes: 16 additions & 0 deletions examples/cache/config/config.json
@@ -0,0 +1,16 @@
{
"nodePathMap": [
{
"node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths": ["/data/example-cache-storage"]
},
{
"node": "minikube",
"paths": ["/data/example-cache-storage"]
},
{
"node": "kind-control-plane",
"paths": ["/var/opt/example-cache-storage"]
}
]
}
15 changes: 15 additions & 0 deletions examples/cache/config/helper-pod.yaml
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
containers:
- name: helper
image: quay.io/buildah/stable:v1.17.0
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
hostPID: true
volumeMounts:
- name: data
mountPropagation: Bidirectional