NexentaStor product page: https://nexenta.com/products/nexentastor.
This is a development branch; for the most recent stable version, see "Supported versions":
 | NexentaStor 5.1 | NexentaStor 5.2 |
---|---|---|
Kubernetes 1.13 | 1.1.0 | 1.1.0 |
Kubernetes >=1.14 | 1.2.0 | 1.2.0 |
Kubernetes >=1.14 | master | master |
- Persistence (beyond pod lifetime)
- Dynamic provisioning
- Supported access mode: read/write by multiple pods (`ReadWriteMany`)
- Volume snapshot support
- NFS/SMB mount protocols.
- Kubernetes cluster must allow privileged pods; this flag must be set for the API server and the kubelet (instructions):
  ```bash
  --allow-privileged=true
  ```
- Required API server and kubelet feature gates (instructions):
  ```bash
  --feature-gates=VolumeSnapshotDataSource=true,VolumePVCDataSource=true
  ```
- Mount propagation must be enabled, i.e. the Docker daemon for the cluster must allow shared mounts (instructions)
- Depending on the preferred mount filesystem type, the following utilities must be installed on each Kubernetes node:
  ```bash
  # for NFS
  apt install -y rpcbind nfs-common

  # for SMB
  apt install -y rpcbind cifs-utils
  ```
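The node-utility requirement above can be verified with a small shell helper before installing the driver. This is a hypothetical sketch (not part of the driver); the `nfs`/`cifs` argument mirrors the `defaultMountFsType` values used in the configuration below:

```shell
#!/bin/sh
# Hypothetical pre-flight check: verify that the mount utilities for the
# chosen filesystem type are present on a Kubernetes node.
check_mount_utils() {
  fstype="$1"                       # nfs or cifs
  case "$fstype" in
    nfs)  required="rpcbind mount.nfs" ;;
    cifs) required="rpcbind mount.cifs" ;;
    *)    echo "unknown fsType: $fstype" >&2; return 2 ;;
  esac
  missing=""
  for bin in $required; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
  done
  if [ -n "$missing" ]; then
    echo "missing utilities:$missing"
    return 1
  fi
  echo "ok"
}

# usage on a node:
# check_mount_utils nfs && echo "node ready for NFS mounts"
```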
- Create a NexentaStor dataset for the driver, for example: `csiDriverPool/csiDriverDataset`. By default, the driver will create filesystems in this dataset and mount them to use as Kubernetes volumes.
- Clone the driver repository:
  ```bash
  git clone https://github.com/Nexenta/nexentastor-csi-driver.git
  cd nexentastor-csi-driver
  git checkout master
  ```
- Edit the `deploy/kubernetes/nexentastor-csi-driver-config.yaml` file. Driver configuration example:
  ```yaml
  restIp: https://10.3.3.4:8443,https://10.3.3.5:8443  # [required] NexentaStor REST API endpoint(s)
  username: admin                                      # [required] NexentaStor REST API username
  password: p@ssword                                   # [required] NexentaStor REST API password
  defaultDataset: csiDriverPool/csiDriverDataset       # default 'pool/dataset' to use
  defaultDataIp: 20.20.20.21                           # default NexentaStor data IP or HA VIP
  defaultMountFsType: nfs                              # default mount fs type [nfs|cifs]
  defaultMountOptions: noatime                         # default mount options (mount -o ...)

  # for CIFS mounts:
  #defaultMountFsType: cifs                            # default mount fs type [nfs|cifs]
  #defaultMountOptions: username=admin,password=Nexenta@1  # username/password must be defined for CIFS
  ```
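A typo in this file typically surfaces only when a volume is provisioned or mounted, so a quick sanity check before creating the secret can save a debugging round-trip. The helper below is a hypothetical sketch (not part of the driver) that greps for the three required keys:

```shell
#!/bin/sh
# Hypothetical config check: the driver requires restIp, username and
# password; warn early if any of them is missing from the YAML file.
validate_config() {
  file="$1"
  missing=""
  for key in restIp username password; do
    grep -q "^${key}[[:space:]]*:" "$file" || missing="$missing $key"
  done
  if [ -n "$missing" ]; then
    echo "missing required keys:$missing"
    return 1
  fi
  echo "config ok"
}

# usage:
# validate_config deploy/kubernetes/nexentastor-csi-driver-config.yaml
```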
All driver configuration options:

Name | Description | Required | Example |
---|---|---|---|
`restIp` | NexentaStor REST API endpoint(s); use `,` to separate cluster nodes | yes | `https://10.3.3.4:8443` |
`username` | NexentaStor REST API username | yes | `admin` |
`password` | NexentaStor REST API password | yes | `p@ssword` |
`defaultDataset` | parent dataset for driver's filesystems [pool/dataset] | no | `csiDriverPool/csiDriverDataset` |
`defaultDataIp` | NexentaStor data IP or HA VIP for mounting shares | yes for PV | `20.20.20.21` |
`defaultMountFsType` | mount filesystem type [nfs, cifs] | no | `cifs` |
`defaultMountOptions` | NFS/CIFS mount options: `mount -o ...` (default: "") | no | NFS: `noatime,nosuid`; CIFS: `username=admin,password=123` |
`debug` | print more logs (default: false) | no | `true` |
Note: if parameter `defaultDataset`/`defaultDataIp` is not specified in the driver configuration, then parameter `dataset`/`dataIp` must be specified in the StorageClass configuration.

Note: all default parameters (`default*`) may be overwritten in a specific StorageClass configuration.

Note: if `defaultMountFsType` is set to `cifs`, then parameter `defaultMountOptions` must include CIFS username and password (`username=admin,password=123`).

- Create a Kubernetes secret from the file:
  ```bash
  kubectl create secret generic nexentastor-csi-driver-config --from-file=deploy/kubernetes/nexentastor-csi-driver-config.yaml
  ```
- Register the driver with Kubernetes:
  ```bash
  kubectl apply -f deploy/kubernetes/nexentastor-csi-driver.yaml
  ```

NexentaStor CSI driver's pods should be running after installation:

```
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nexentastor-csi-controller-0   3/3     Running   0          42s
nexentastor-csi-node-cwp4v     2/2     Running   0          42s
```
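The readiness check above can be scripted. The helper below is a hypothetical sketch that only parses `kubectl get pods` output: a pod counts as ready when its STATUS is `Running` and its READY column shows all containers up (e.g. `3/3`):

```shell
#!/bin/sh
# Hypothetical readiness check: succeed only if every nexentastor-csi pod
# is Running with all of its containers ready.
all_driver_pods_ready() {
  awk '/^nexentastor-csi/ {
         found = 1
         split($2, r, "/")
         if ($3 != "Running" || r[1] != r[2]) bad = 1
       }
       END { exit (found && !bad) ? 0 : 1 }'
}

# usage on a live cluster:
# kubectl get pods | all_driver_pods_ready && echo "driver is ready"
```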
For dynamic volume provisioning, the administrator needs to set up a StorageClass pointing to the driver.
In this case Kubernetes generates the volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8`).
Default driver configuration may be overwritten in the `parameters` section:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-cs-nginx-dynamic
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                         # list of options for `mount -o ...` command
  #- noatime
parameters:
  #dataset: customPool/customDataset  # to overwrite "defaultDataset" config property [pool/dataset]
  #dataIp: 20.20.20.253               # to overwrite "defaultDataIp" config property
  #mountFsType: nfs                   # to overwrite "defaultMountFsType" config property
  #mountOptions: noatime              # to overwrite "defaultMountOptions" config property
```
Name | Description | Example |
---|---|---|
`dataset` | parent dataset for driver's filesystems [pool/dataset] | `customPool/customDataset` |
`dataIp` | NexentaStor data IP or HA VIP for mounting shares | `20.20.20.253` |
`mountFsType` | mount filesystem type [nfs, cifs] | `cifs` |
`mountOptions` | NFS/CIFS mount options: `mount -o ...` | NFS: `noatime`; CIFS: `username=admin,password=123` |
Run an Nginx pod with a dynamically provisioned volume:

```bash
kubectl apply -f examples/kubernetes/nginx-dynamic-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-dynamic-volume.yaml
```
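A minimal PVC for the dynamic StorageClass above can also be written by hand. The sketch below is modeled on the names used in this README; the exact contents of `examples/kubernetes/nginx-dynamic-volume.yaml` may differ:

```shell
# Write a minimal PVC manifest for the dynamic StorageClass defined above.
cat <<'EOF' > pvc-dynamic-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-nginx-dynamic
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF

# apply it on a live cluster:
# kubectl apply -f pvc-dynamic-example.yaml
```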
The driver can use an already existing NexentaStor filesystem; in this case, StorageClass, PersistentVolume and PersistentVolumeClaim should be configured.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-cs-nginx-persistent
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                         # list of options for `mount -o ...` command
  #- noatime
parameters:
  #dataset: customPool/customDataset  # to overwrite "defaultDataset" config property [pool/dataset]
  #dataIp: 20.20.20.253               # to overwrite "defaultDataIp" config property
  #mountFsType: nfs                   # to overwrite "defaultMountFsType" config property
  #mountOptions: noatime              # to overwrite "defaultMountOptions" config property
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexentastor-csi-driver-pv-nginx-persistent
  labels:
    name: nexentastor-csi-driver-pv-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-persistent
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: nexentastor-csi-driver.nexenta.com
    volumeHandle: csiDriverPool/csiDriverDataset/nginx-persistent
  #mountOptions:  # list of options for `mount` command
  #  - noatime
```
CSI parameters:

Name | Description | Example |
---|---|---|
`driver` | installed driver name "nexentastor-csi-driver.nexenta.com" | `nexentastor-csi-driver.nexenta.com` |
`volumeHandle` | path to existing NexentaStor filesystem [pool/dataset/filesystem] | `PoolA/datasetA/nginx` |
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-persistent
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      # to create a 1-1 relationship between pod and persistent volume, use unique labels
      name: nexentastor-csi-driver-pv-nginx-persistent
```
Run an nginx server using the PersistentVolume.

Note: the pre-configured filesystem must exist on the NexentaStor appliance: `csiDriverPool/csiDriverDataset/nginx-persistent`.

```bash
kubectl apply -f examples/kubernetes/nginx-persistent-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-persistent-volume.yaml
```
We can create a clone of an existing CSI volume.
To do so, create a PersistentVolumeClaim with a `dataSource` spec pointing to the existing PVC that should be cloned.
In this case Kubernetes generates the volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8`).
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-pvc-nginx-dynamic-clone
spec:
  storageClassName: nexentastor-csi-driver-cs-nginx-dynamic
  dataSource:
    kind: PersistentVolumeClaim
    apiGroup: ""
    name: nexentastor-csi-driver-pvc-nginx-dynamic  # pvc name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
Run an Nginx pod with the cloned volume:

```bash
kubectl apply -f examples/kubernetes/nginx-clone-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-clone-volume.yaml
```
Note: volume snapshotting is an alpha feature.
```bash
# create snapshot class
kubectl apply -f examples/kubernetes/snapshot-class.yaml

# take a snapshot
kubectl apply -f examples/kubernetes/take-snapshot.yaml

# deploy nginx pod with volume restored from a snapshot
kubectl apply -f examples/kubernetes/nginx-snapshot-volume.yaml
```

```bash
# snapshot classes
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io

# snapshot list
kubectl get volumesnapshots.snapshot.storage.k8s.io

# snapshot content list
kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
```
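The bundled `take-snapshot.yaml` can be approximated by hand. The manifest below is a hypothetical sketch against the alpha snapshot API (`snapshot.storage.k8s.io/v1alpha1`); the snapshot and class names are illustrative, and only the source PVC name comes from this README:

```shell
# Write a VolumeSnapshot manifest pointing at an existing PVC.
cat <<'EOF' > take-snapshot-example.yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: nexentastor-csi-snapshot-example       # illustrative snapshot name
spec:
  snapshotClassName: nexentastor-csi-snapshot-class  # illustrative class name
  source:
    kind: PersistentVolumeClaim
    name: nexentastor-csi-driver-pvc-nginx-dynamic
EOF

# apply it on a live cluster:
# kubectl apply -f take-snapshot-example.yaml
```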
Using the same files as for installation:

```bash
# delete driver
kubectl delete -f deploy/kubernetes/nexentastor-csi-driver.yaml

# delete secret
kubectl delete secret nexentastor-csi-driver-config
```
- Show installed drivers:
  ```bash
  kubectl get csidrivers
  kubectl describe csidrivers
  ```
- Error:
  ```
  MountVolume.MountDevice failed for volume "pvc-ns-<...>" :
  driver name nexentastor-csi-driver.nexenta.com not found in the list of registered CSI drivers
  ```
  Make sure the kubelet is configured with `--root-dir=/var/lib/kubelet`, otherwise update the paths in the driver yaml file (all requirements).
- "VolumeSnapshotDataSource" feature gate is disabled:
  ```bash
  vim /var/lib/kubelet/config.yaml
  # featureGates:
  #   VolumeSnapshotDataSource: true

  vim /etc/kubernetes/manifests/kube-apiserver.yaml
  # - --feature-gates=VolumeSnapshotDataSource=true
  ```
- Driver logs:
  ```bash
  kubectl logs -f nexentastor-csi-controller-0 driver
  kubectl logs -f $(kubectl get pods | awk '/nexentastor-csi-node/ {print $1;exit}') driver
  ```
- Show the termination message in case the driver failed to run:
  ```bash
  kubectl get pod nexentastor-csi-controller-0 -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
  ```
- Configure Docker to trust insecure registries:
  ```bash
  # add `{"insecure-registries":["10.3.199.92:5000"]}` to:
  vim /etc/docker/daemon.json
  service docker restart
  ```
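For reference, the `daemon.json` fragment mentioned above looks like this when written out; the registry address is the example one from this README, and the file is written to the current directory rather than `/etc/docker` so it can be reviewed before merging:

```shell
# Write an example daemon.json fragment for a local insecure registry.
cat <<'EOF' > daemon.json.example
{
  "insecure-registries": ["10.3.199.92:5000"]
}
EOF

# merge into /etc/docker/daemon.json, then:
# service docker restart
```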
Commits should follow the Conventional Commits Spec.
Commit messages with `feat:` and `fix:` prefixes are included in the CHANGELOG automatically.
```bash
# print variables and help
make

# build go app on local machine
make build

# build container (+ using build container)
make container-build

# update deps
~/go/bin/dep ensure
```
Without installation to a k8s cluster, only the version command works:

```bash
./bin/nexentastor-csi-driver --version
```
```bash
# push the latest built container to the local registry (see `Makefile`)
make container-push-local

# push the latest built container to hub.docker.com
make container-push-remote
```
`test-all-*` targets run:
- unit tests
- CSI sanity tests from https://github.com/kubernetes-csi/csi-test
- end-to-end driver tests with real K8s and NS appliances

See `Makefile` for more examples.
```bash
# Test options to be set before running tests:
# - NOCOLORS=true              # to run w/o colors
# - TEST_K8S_IP=10.3.199.250   # e2e k8s tests

# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image

# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image

# run tests in container:
# - RSA keys from host's ~/.ssh directory will be used by container.
#   Make sure all remote hosts used in tests have host's RSA key added as trusted
#   (ssh-copy-id -i ~/.ssh/id_rsa.pub user@host)
#
# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image-container

# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image-container
```
End-to-end K8s test parameters:

```bash
# Tests install the driver to k8s and run an nginx pod with a mounted volume
# "export NOCOLORS=true" to run w/o colors
go test tests/e2e/driver_test.go -v -count 1 \
    --k8sConnectionString="[email protected]" \
    --k8sDeploymentFile="../../deploy/kubernetes/nexentastor-csi-driver.yaml" \
    --k8sSecretFile="./_configs/driver-config-single-default.yaml"
```
All development happens in the `master` branch. When it's time to publish a new version, a new git tag should be created.

- Build and test the new version using the local registry:
  ```bash
  # build development version:
  make container-build

  # publish to local registry
  make container-push-local

  # test plugin using local registry
  TEST_K8S_IP=10.3.199.250 make test-all-local-image-container
  ```
- To release a new version, run:
  ```bash
  VERSION=X.X.X make release
  ```
  This script does the following:
  - generates a new `CHANGELOG.md`
  - builds the driver container 'nexentastor-csi-driver'
  - publishes driver version 'nexenta/nexentastor-csi-driver:X.X.X' to hub.docker.com (login to hub.docker.com will be requested)
  - creates a new Git tag 'vX.X.X' and pushes it to the repository
- Update GitHub releases.