Snapshots not working because of incorrect volumeHandle #84

Open · emmetog opened this issue May 31, 2024 · 8 comments

emmetog commented May 31, 2024

I've been digging into why my snapshots are not working, and I think I've come across a bug in how the snapshotter looks up the volume in DSM.

To reproduce

First we create a PVC with its storage class set to use the Synology CSI driver; here's an example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  labels:
    app: synology-test
spec:
  storageClassName: synology-csi-delete
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

The synology-csi-controller (the provisioner) then successfully creates a PV and a LUN on my NAS. Here's the full PV definition that it creates in Kubernetes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-2b7d51ac-0e95-4d36-9dc5-675c057d8324
  uid: cfe95394-0143-4047-9aaf-29467a3cb81b
  resourceVersion: '57310466'
  creationTimestamp: '2024-05-31T15:24:01Z'
  annotations:
    pv.kubernetes.io/provisioned-by: csi.san.synology.com
    volume.kubernetes.io/provisioner-deletion-secret-name: ''
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ''
  finalizers:
    - kubernetes.io/pv-protection
  managedFields:
    - manager: csi-provisioner
      operation: Update
      apiVersion: v1
      time: '2024-05-31T15:24:01Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:pv.kubernetes.io/provisioned-by: {}
            f:volume.kubernetes.io/provisioner-deletion-secret-name: {}
            f:volume.kubernetes.io/provisioner-deletion-secret-namespace: {}
        f:spec:
          f:accessModes: {}
          f:capacity:
            .: {}
            f:storage: {}
          f:claimRef:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:namespace: {}
            f:resourceVersion: {}
            f:uid: {}
          f:csi:
            .: {}
            f:driver: {}
            f:volumeAttributes:
              .: {}
              f:dsm: {}
              f:formatOptions: {}
              f:protocol: {}
              f:source: {}
              f:storage.kubernetes.io/csiProvisionerIdentity: {}
            f:volumeHandle: {}
          f:persistentVolumeReclaimPolicy: {}
          f:storageClassName: {}
          f:volumeMode: {}
    - manager: k3s
      operation: Update
      apiVersion: v1
      time: '2024-05-31T15:24:01Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:phase: {}
      subresource: status
  selfLink: /api/v1/persistentvolumes/pvc-2b7d51ac-0e95-4d36-9dc5-675c057d8324
status:
  phase: Bound
  lastPhaseTransitionTime: '2024-05-31T15:24:01Z'
spec:
  capacity:
    storage: 1Gi
  csi:
    driver: csi.san.synology.com
    volumeHandle: 5743c6be-31e6-4528-a55f-2254b5716227
    volumeAttributes:
      dsm: <redacted>
      formatOptions: ''
      protocol: iscsi
      source: ''
      storage.kubernetes.io/csiProvisionerIdentity: 1717158367872-7270-csi.san.synology.com
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: test-claim
    uid: 2b7d51ac-0e95-4d36-9dc5-675c057d8324
    apiVersion: v1
    resourceVersion: '57310439'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: synology-csi-delete
  volumeMode: Filesystem

The PVC itself works fine: the pods have persistent storage and things look OK in DSM. However, note the volumeHandle: 5743c6be-31e6-4528-a55f-2254b5716227 line, because that's the problem for snapshots.

Next we trigger a snapshot. I'm using Velero for this, but I'd imagine other ways of asking the driver for a snapshot behave the same:

velero backup create 20240531-0927
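
For reference, triggering a snapshot without Velero exercises the same code path. A minimal sketch, assuming the snapshot CRDs are installed and using the synology-vsc class that appears in the VolumeSnapshotContent further down (the snapshot name is made up):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-claim-snapshot   # hypothetical name
spec:
  volumeSnapshotClassName: synology-vsc
  source:
    persistentVolumeClaimName: test-claim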

The snapshot doesn't work; we see this in the logs of the synology-csi-snapshotter-0 pod:

E0531 15:02:03.554735       1 snapshot_controller.go:124] checkandUpdateContentStatus [snapcontent-0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6]: error occurred failed to take snapshot of the volume 5743c6be-31e6-4528-a55f-2254b5716227: "rpc error: code = NotFound desc = Can't find volume[5743c6be-31e6-4528-a55f-2254b5716227]."
E0531 15:02:03.554753       1 snapshot_controller_base.go:283] could not sync content "snapcontent-0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6": failed to take snapshot of the volume 5743c6be-31e6-4528-a55f-2254b5716227: "rpc error: code = NotFound desc = Can't find volume[5743c6be-31e6-4528-a55f-2254b5716227]."
I0531 15:02:03.554772       1 snapshot_controller_base.go:185] Failed to sync content "snapcontent-0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6", will retry again: failed to take snapshot of the volume 5743c6be-31e6-4528-a55f-2254b5716227: "rpc error: code = NotFound desc = Can't find volume[5743c6be-31e6-4528-a55f-2254b5716227]."

Here are the contents of the VolumeSnapshotContent (snapshot.storage.k8s.io) object that is created:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  annotations:
    snapshot.storage.kubernetes.io/volumesnapshot-being-created: 'yes'
  creationTimestamp: '2024-05-31T15:33:48Z'
  finalizers:
    - snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection
  generation: 1
  managedFields:
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:error:
            .: {}
            f:message: {}
            f:time: {}
          f:readyToUse: {}
      manager: csi-snapshotter
      operation: Update
      subresource: status
      time: '2024-05-31T15:33:48Z'
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .: {}
            v:"snapshot.storage.kubernetes.io/volumesnapshotcontent-bound-protection": {}
        f:spec:
          .: {}
          f:deletionPolicy: {}
          f:driver: {}
          f:source:
            .: {}
            f:volumeHandle: {}
          f:sourceVolumeMode: {}
          f:volumeSnapshotClassName: {}
          f:volumeSnapshotRef:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:namespace: {}
            f:resourceVersion: {}
            f:uid: {}
      manager: snapshot-controller
      operation: Update
      time: '2024-05-31T15:33:48Z'
    - apiVersion: snapshot.storage.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:snapshot.storage.kubernetes.io/volumesnapshot-being-created: {}
      manager: csi-snapshotter
      operation: Update
      time: '2024-05-31T15:34:14Z'
  name: snapcontent-0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6
  resourceVersion: '57314468'
  uid: 0eb706be-8ac2-40bb-876c-06c85cdae257
  selfLink: >-
    /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/snapcontent-0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6
status:
  error:
    message: >-
      Failed to check and update snapshot content: failed to take snapshot of
      the volume 5743c6be-31e6-4528-a55f-2254b5716227: "rpc error: code =
      NotFound desc = Can't find volume[5743c6be-31e6-4528-a55f-2254b5716227]."
    time: '2024-05-31T15:33:48Z'
  readyToUse: false
spec:
  deletionPolicy: Delete
  driver: csi.san.synology.com
  source:
    volumeHandle: 5743c6be-31e6-4528-a55f-2254b5716227
  sourceVolumeMode: Filesystem
  volumeSnapshotClassName: synology-vsc
  volumeSnapshotRef:
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    name: velero-data-test-claim-copy-1716317686-kb2zm
    namespace: production
    resourceVersion: '57314197'
    uid: 0dd6ef5d-f940-41d1-8ed0-5face1bf7ef6

In my DSM the LUN for this PV is called k8s-csi-pvc-2b7d51ac-0e95-4d36-9dc5-675c057d8324, which matches the PV's name (with a k8s-csi- prefix). The volumeHandle 5743c6be-31e6-4528-a55f-2254b5716227 appears nowhere in DSM.

My interpretation

What I think is happening is that when the PV is created, the driver doesn't set the volumeHandle, so a random unique ID gets assigned. Then, when the snapshot is triggered, the driver tries to use the volumeHandle field to find the volume in DSM, where of course no such volume exists (the quick check below illustrates the mismatch).

If I'm right, then the solution is either:

  • On PV creation, set the volumeHandle explicitly to the name of the LUN in DSM
  • Or, when the snapshot is triggered, regenerate the LUN name from the PV name (using whatever format was used when the LUN was created, i.e. prefixing k8s-csi- to the PV name) and ignore the volumeHandle entirely
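
A quick way to see the mismatch (a sketch; the PV name and the k8s-csi- naming convention come from the objects above):

# What the snapshotter looks for:
kubectl get pv pvc-2b7d51ac-0e95-4d36-9dc5-675c057d8324 \
  -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'
# -> 5743c6be-31e6-4528-a55f-2254b5716227
# What actually exists in DSM (SAN Manager) is a LUN named
# k8s-csi-pvc-2b7d51ac-0e95-4d36-9dc5-675c057d8324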

Versions / Environment

For versions, I'm on Kubernetes v1.29.2+k3s1 and I'm using Flux to deploy the Synology driver with the chart here; this is my manifest:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: synology-csi
spec:
  releaseName: synology-csi
  chart:
    spec:
      chart: synology-csi
      sourceRef:
        kind: HelmRepository
        name: synology-csi-chart
        namespace: default
      version: "0.9.8"  # Only change this if the chart itself changes
  interval: 24h
  values:
    clientInfoSecret:
      # How to connect to your Synology Diskstation?
#      clients:
#        - host: <redacted>    # the IP address of the Diskstation
#          https: true                   # whether the port expects HTTPS or not
#          password: password            # the password of the dedicated CSI user
#          port: 5000                    # the port for connecting to the Diskstation Manager application
#          username: username            # the name of the dedicated CSI user
      # Whether to create the secret if the chart gets installed or not; ignored on updates.
      create: false
      # Defaults to {{ include "synology-csi.fullname" $ }}-client-info if empty or not present:
      name: "synology-csi-nas-account-credentials"
    # Specifies affinity, nodeSelector and tolerations for the controller StatefulSet
    controller:
      affinity: { }
      nodeSelector: { }
      tolerations: []
    fullnameOverride: ""
    images:
      attacher:
        image: registry.k8s.io/sig-storage/csi-attacher
        pullPolicy: IfNotPresent
        tag: v4.3.0
      nodeDriverRegistrar:
        image: registry.k8s.io/sig-storage/csi-node-driver-registrar
        pullPolicy: IfNotPresent
        tag: v2.8.0
      plugin:
        image: synology/synology-csi
        pullPolicy: IfNotPresent
        # Defaults to {{ $.Chart.AppVersion }} if empty or not present:
        tag: ""
      provisioner:
        image: registry.k8s.io/sig-storage/csi-provisioner
        pullPolicy: IfNotPresent
        tag: v3.5.0
      resizer:
        image: registry.k8s.io/sig-storage/csi-resizer
        pullPolicy: IfNotPresent
        tag: v1.8.0
      snapshotter:
        image: registry.k8s.io/sig-storage/csi-snapshotter
        pullPolicy: IfNotPresent
        tag: v6.2.2
    installCSIDriver: false
    nameOverride: ""
    # Specifies affinity, nodeSelector and tolerations for the node DaemonSet
    node:
      affinity: { }
      # If your kubelet path is not standard, specify it here:
      # example for a microk8s distribution: /var/snap/microk8s/common/var/lib/kubelet
      kubeletPath: /var/lib/kubelet
      nodeSelector: { }
      tolerations: []
    # Specifies affinity, nodeSelector and tolerations for the snapshotter StatefulSet
    snapshotter:
      affinity: { }
      nodeSelector: { }
      tolerations: []
    storageClasses:
      delete:
        # One of true or false (default):
        disabled: false
        # One of true or false (default):
        isDefault: false
        # One of "Retain" or "Delete" (default):
        reclaimPolicy: Delete
        # One of "WaitForFirstConsumer" or "Immediate" (default):
        volumeBindingMode: Immediate

        # If not present, some location will be chosen to create volumes with the filesystem type ext4.
        # Note that these parameters cannot get updated once deployed - any subsequent changes get ignored!
        parameters:
          type: thin
      retain:
        reclaimPolicy: Retain
        isDefault: true
    volumeSnapshotClasses:
      delete:
        # One of true or false (default):
        disabled: true
        # One of "Retain" or "Delete" (default):
        deletionPolicy: Delete
        # One of true or false (default):
        isDefault: false

        # Note that these parameters cannot get updated once deployed - any subsequent changes get ignored!
        parameters:
          description: "Kubernetes CSI"
      ##  is_locked: "false"
      retain:
        # One of true or false (default):
        disabled: true
        # One of "Retain" or "Delete" (default):
        deletionPolicy: Retain
        # One of true or false (default):
        isDefault: false

        # Note that these parameters cannot get updated once deployed - any subsequent changes get ignored!
        parameters:
          description: "Kubernetes CSI"
      ##  is_locked: "false"

And I'm installing the CSI snapshot controller separately using this chart, like this:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: snapshot-controller
spec:
  releaseName: snapshot-controller
  chart:
    spec:
      chart: snapshot-controller
      version: "2.2.2"
      sourceRef:
        kind: HelmRepository
        name: piraeus
        namespace: default
  interval: 5m
  install:
    remediation:
      retries: 3
  values:

    # Values taken from here:
    # https://artifacthub.io/packages/helm/piraeus-charts/snapshot-controller/2.2.2?modal=values

    controller:
      enabled: true

      replicaCount: 1

      revisionHistoryLimit: 10

      args:
        leaderElection: true
        leaderElectionNamespace: "$(NAMESPACE)"
        httpEndpoint: ":8080"

      image:
        repository: registry.k8s.io/sig-storage/snapshot-controller
        pullPolicy: IfNotPresent
        # Overrides the image tag whose default is the chart appVersion.
        tag: ""

      imagePullSecrets: [ ]

      podAnnotations: { }

      podLabels: { }

      podSecurityContext: { }
      # fsGroup: 2000

      securityContext:
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000

      resources: { }

      nodeSelector: { }

      tolerations: [ ]

      affinity: { }

      ## Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
      ##
      pdb: { }

      topologySpreadConstraints: [ ]

      rbac:
        # Specifies whether RBAC resources should be created
        create: true

      serviceAccount:
        # Specifies whether a ServiceAccount should be created
        create: true
        name: ""

      serviceMonitor:
        # Specifies whether a ServiceMonitor should be created
        create: false

      volumeSnapshotClasses: [ ]
      #    - name: linstor-csi-delete
      #      annotations:
      #        snapshot.storage.kubernetes.io/is-default-class: "true"
      #      labels:
      #        velero.io/csi-volumesnapshot-class: "true"
      #      driver: linstor.csi.linbit.com
      #      deletionPolicy: Delete

      priorityClassName: ""
      # Specifies whether a Priority Class should be attached to deployment pods

      # Change `hostNetwork` to `true` when you want the pod to share its host's network namespace.
      hostNetwork: false

      # DNS settings for the controller pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
      dnsConfig: { }
      # DNS Policy for controller pod. For Pods running with hostNetwork, set to `ClusterFirstWithHostNet`
      # For further reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy.
      dnsPolicy: ClusterFirst

    webhook:
      enabled: true

      replicaCount: 1

      revisionHistoryLimit: 10

      args:
        tlsPrivateKeyFile: /etc/snapshot-validation/tls.key
        tlsCertFile: /etc/snapshot-validation/tls.crt
        port: 8443
        # enableVolumeGroupSnapshotWebhook: true

      image:
        repository: registry.k8s.io/sig-storage/snapshot-validation-webhook
        pullPolicy: IfNotPresent
        # Overrides the image tag whose default is the chart appVersion.
        tag: ""

      webhook:
        timeoutSeconds: 2
        failurePolicy: Fail

      tls:
        certificateSecret: ""
        autogenerate: true
        renew: false
        certManagerIssuerRef: { }

      imagePullSecrets: [ ]
      podAnnotations: { }
      podLabels: { }

      networkPolicy:
        enabled: false
        ingress: { }
        # - from:
        #   - ipBlock:
        #       cidr: 0.0.0.0/0

      ## Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
      ##
      pdb: { }

      priorityClassName:

      ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
      ##
      topologySpreadConstraints: [ ]
        # - maxSkew: 1
        #   topologyKey: topology.kubernetes.io/zone
        #   whenUnsatisfiable: ScheduleAnyway
      #   labelSelector:
      #     matchLabels:
      #       app.kubernetes.io/instance: snapshot-validation-webhook

      podSecurityContext: { }
      # fsGroup: 2000

      securityContext:
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 1000

      resources: { }

      nodeSelector: { }

      tolerations: [ ]

      affinity: { }

      serviceAccount:
        create: true
        name: ""

      rbac:
        create: true

      # Change `hostNetwork` to `true` when you want the pod to share its host's network namespace.
      hostNetwork: false

      # DNS settings for the webhook pod. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config
      dnsConfig: { }
      # DNS Policy for webhook pod. For Pods running with hostNetwork, set to `ClusterFirstWithHostNet`
      # For further reference: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy.
      dnsPolicy: ClusterFirst

      tests:
        nodeSelector: { }

        tolerations: [ ]

        affinity: { }
iamasmith commented Jul 29, 2024

Mine looks to be failing earlier.

I'll do some digging, but in my case I applied the YAML deployments and the CRDs/snapshot controller from commit ref c72f087f751abb285a90c4f5bb02df9014d2bc19 of https://github.com/kubernetes-csi/external-snapshotter.git.
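
Roughly, that install looks like the following (the kustomize paths reflect the usual layout of the external-snapshotter repo and may differ at other refs):

git clone https://github.com/kubernetes-csi/external-snapshotter.git
cd external-snapshotter
git checkout c72f087f751abb285a90c4f5bb02df9014d2bc19
# CRDs first, then the snapshot controller (normally deployed into kube-system)
kubectl apply -k client/config/crd
kubectl apply -k deploy/kubernetes/snapshot-controller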

I might try rolling this back to a release tag so I'm not on any pre-release state, but this is what I have.

There's also some interesting reading in the v7.0.0 changes around the webhook being deprecated.

There's also quite a bit of automation in this test, since I'm testing with the CloudNativePG operator, which manages all the snapshot and restore capability for this cluster instance. It's all very cool, but it may be out of step with some of the snapshot controller logic.

    message: 'Failed to check and update snapshot content: failed to add VolumeSnapshotBeingCreated
      annotation on the content snapcontent-f74bcada-5c0c-469f-a43d-6914c236c786:
      "snapshot controller failed to update snapcontent-f74bcada-5c0c-469f-a43d-6914c236c786
      on API server: VolumeSnapshotContent.snapshot.storage.k8s.io \"snapcontent-f74bcada-5c0c-469f-a43d-6914c236c786\"
      is invalid: spec: Invalid value: \"object\": sourceVolumeMode is required once
      set"'

The VolumeSnapshot and VolumeSnapshotContent were created, but of course the snapshot itself wasn't; I had to remove the finalizers to get rid of them. I have updated CNPG by a minor increment to be on the latest, but I'll have a proper poke around at some point.

I didn't see your error, but I suspect I might once I've gotten to the bottom of this first hurdle.

iamasmith commented Jul 30, 2024

@emmetog OK, I have this working. It's actually quite a simple issue, but it's locked into the configs that this repo gives you.

The maturity of https://github.com/kubernetes-csi/external-snapshotter/tree/master drives some requirements for the synology-csi snapshotter config, which as shipped in this repo is quite old: it uses the container registry.k8s.io/sig-storage/csi-snapshotter:v4.2.1 as the actual interface that talks gRPC to the Synology driver (this image) in the same pod for snapshot functions.

The gRPC interface is stable, so to match the requirements of a later version of the snapshot controller (which goes into kube-system and sets and acts on certain fields of the CRDs), the csi-snapshotter also has to behave according to the updated spec.

This doesn't affect the CSI driver itself, but you will need to update the image and RBAC for the csi-snapshotter container in the synology-csi snapshotter pod to match the version of the controller, so it can make all the resource changes it performs in response to snapshot triggers and meet the spec set by the snapshot CRDs and snapshot controller.

For me the diff to deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml looks like the one below, assuming the current head of this project and using the v7.0.1 image for the snapshot controller as well.

In terms of compatibility this is a good approach: the snapshotter image version must match the controller, and the contract sets out the RBAC requirements for the service account. The CSI driver lives as a sidecar to the snapshotter and has a stable gRPC interface, so that's also good.

What this project's docs lack is information showing that the deployment is partly based on an old version of the external-snapshotter project and needs to be kept up to date.

I'm just showing a diff here to illustrate what I updated, but it's best to lift and shift the current state from https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/csi-snapshotter for the specific version you are updating to.

diff --git a/deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml b/deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml
index e5d1feb..ae968c1 100644
--- a/deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml
+++ b/deploy/kubernetes/v1.20/snapshotter/snapshotter.yaml
@@ -13,16 +13,44 @@ rules:
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["list", "watch", "create", "update", "patch"]
+  # Secret permission is optional.
+  # Enable it if your driver needs secret.
+  # For example, `csi.storage.k8s.io/snapshotter-secret-name` is set in VolumeSnapshotClass.
+  # See https://kubernetes-csi.github.io/docs/secrets-and-credentials.html for more details.
+  #  - apiGroups: [""]
+  #    resources: ["secrets"]
+  #    verbs: ["get", "list"]
   - apiGroups: ["snapshot.storage.k8s.io"]
     resources: ["volumesnapshotclasses"]
     verbs: ["get", "list", "watch"]
+  - apiGroups: ["snapshot.storage.k8s.io"]
+    resources: ["volumesnapshots"]
+    verbs: ["get", "list", "watch", "update", "patch", "create"]
   - apiGroups: ["snapshot.storage.k8s.io"]
     resources: ["volumesnapshotcontents"]
-    verbs: ["create", "get", "list", "watch", "update", "delete"]
+    verbs: ["get", "list", "watch", "update", "patch", "create"]
   - apiGroups: ["snapshot.storage.k8s.io"]
     resources: ["volumesnapshotcontents/status"]
-    verbs: ["update"]
-
+    verbs: ["update", "patch"]
+  - apiGroups: ["groupsnapshot.storage.k8s.io"]
+    resources: ["volumegroupsnapshotclasses"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: ["groupsnapshot.storage.k8s.io"]
+    resources: ["volumegroupsnapshotcontents"]
+    verbs: ["get", "list", "watch", "update", "patch"]
+  - apiGroups: ["groupsnapshot.storage.k8s.io"]
+    resources: ["volumegroupsnapshotcontents/status"]
+    verbs: ["update", "patch"]
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  namespace: synology-csi
+  name: synology-snapshotter-leaderelection
+rules:
+- apiGroups: ["coordination.k8s.io"]
+  resources: ["leases"]
+  verbs: ["get", "watch", "list", "delete", "update", "create"]

iamasmith commented Jul 30, 2024

Noting this possibly isn't the whole of the issue: attempting to create a new CNPG instance off a snapshot gave me errors about the mount point already existing. Since the real controller for recreating the volume is possibly the node-driver-registrar (the description doesn't completely match, but since this IS the DaemonSet and the only pod guaranteed per node, it's my first guess), that image possibly also needs to be updated in the DaemonSet, along with any corresponding RBAC changes.

Again, this may work when I try it, but the first port of call is probably to get the sig-storage images described in these configs to a current version.

I've only done the snapshotter so far and need to think about dinner right now.

andrews@AndrewsiMac v1.20 % grep -r 'image: registry.k8s.io' *
controller.yml:          image: registry.k8s.io/sig-storage/csi-provisioner:v3.0.0
controller.yml:          image: registry.k8s.io/sig-storage/csi-attacher:v3.3.0
controller.yml:          image: registry.k8s.io/sig-storage/csi-resizer:v1.3.0
node.yml:          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0
snapshotter/snapshotter.yaml:          image: registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1
andrews@AndrewsiMac v1.20 % 

@iamasmith

This does fully work with updated components; however, there is a different issue that I'm facing.

Either NodeUnstageVolume is never called, or it is not unmounting and disconnecting on the node: when I use CloudNativePG snapshot restore it does indeed create a volume from the snapshot, but then it stalls and there is a logged message saying that the mount point already exists when trying to mount the PVC again.

I think this is partly init container/operator behaviour that is triggering this case, but I have also found that if I delete a PV that has been in use and delete the LUN/target from the NAS, the node keeps attempting to reconnect to the target and the NAS logs messages that there was an attempt to log in to an 'un-exist' target.

I need to do a bit more digging before raising a separate issue or an MR, but I think the snapshot restore probably works as well.

@iamasmith

OK, I figured out what was happening here, and everything works as it should, including snapshot restore.

What was happening is that I was trying to schedule a snapshot restore of the PV and attach it to the same host that the original PV was connected to. This does NOT work, because the volume created from the snapshot has the same filesystem UUID as the original, which results in the mount command bombing out with 'File exists'.

If instead you restore it to another node, or shut the first system down so the PV is unmounted, then snapshot restore works fine as well.

One final point, though: when this happens it leaves the iSCSI initiator logged in to the PV's target, so if you delete the PV from the NAS at this point you end up with those 'un-exist' target errors whilst the initiator tries to reconnect.

iamasmith commented Jul 31, 2024

A summary here, plus things that might go wrong, but I think this fixes the original issue and similar ones (all of this relates to iSCSI LUNs, btw):

Snapshotter version

This repo contains a snapshotter.yaml that references an old version of the csi-snapshotter sidecar; that image needs to be kept in sync with the current version of the snapshot CRDs and snapshot controller. Version 4.0.0 is listed in the README.txt, but this is quite out of date with the snapshot controller being v7.0.1 at this point in time, so an update to the image in snapshotter.yaml AND the associated RBAC for the service account, based on the current upstream RBAC, is probably what is needed.

I'm using K8s 1.29 and will soon migrate to 1.30, so I think it's good to keep these versions up to date.

The CSI driver is still in spec to work with the various components if they are updated, but it is flagged as a 'trivial' provider when a modern version of the csi-attacher is used, due to the lack of the ControllerPublish/Unpublish capability that the current CSI spec says is a minimum, even though it offers little utility for this use case.

Features relevant to the CSI are generally called directly by the kubelet, but the snapshot controller may well pass volume name information out of step with a significantly different sidecar version, and it is definite that the snapshotter being out of step with the controller/CRDs produces errors at the time of creating the snapshot.

It's quite probable that getting this out of sync by using Helm with the latest versions, or missing RBAC because of changed requirements not yet delivered by Helm, could lead you astray, so possibly test first with manual updates to the chart.

I successfully deployed the following image changes.

controller.yml:

-          image: registry.k8s.io/sig-storage/csi-provisioner:v3.0.0
+          image: registry.k8s.io/sig-storage/csi-provisioner:v5.0.1

-          image: registry.k8s.io/sig-storage/csi-attacher:v3.3.0
+          image: registry.k8s.io/sig-storage/csi-attacher:v4.6.1

-          image: registry.k8s.io/sig-storage/csi-resizer:v1.3.0
+          image: registry.k8s.io/sig-storage/csi-resizer:v1.11.2

node.yml (also needing the RBAC changes noted above):

-          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.3.0
+          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1

If using microk8s

If using microk8s, then in node.yml every occurrence of /var/lib/kubelet also needs to be updated so that it references the real location inside the snap install. In the standard install /var/lib/kubelet is a symlink into /var/snap/microk8s/common/var/lib/kubelet, but this is often on a separate partition and will not be traversed properly by the node-driver-registrar.

Replace that part of the path in the volumes section, under the kubelet-dir, plugin-dir and registration-dir hostPath path entries, roughly as sketched below.
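
A minimal sketch of the resulting volumes entries, assuming the usual layout of the upstream node.yml (the exact sub-paths and hostPath types are illustrative):

      volumes:
        - name: kubelet-dir
          hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet
            type: Directory
        - name: plugin-dir
          hostPath:
            # driver socket directory; the sub-path shown is illustrative
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins/csi.san.synology.com
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/snap/microk8s/common/var/lib/kubelet/plugins_registry
            type: Directory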

Snapshot restore testing

Do NOT attempt to restore a snapshot of a volume to the same node that already has the source PV still mounted.

The volume created from the snapshot will have the same filesystem UUID as the volume it was cloned from; it will not mount, just gives a 'File exists' error, and probably leaves your node in an odd state with respect to the iSCSI initiator (see above for the gory details).

If you stop the original workload and allow the driver to unmount the volume, then you should be able to mount the volume created off the snapshot on the same node; alternatively, if you are testing side by side, mount it on another node.

It may be possible for you to re-touch the volume UUID, but this is a filesystem-specific task and you must take care to attach the iSCSI volume without having it mounted/read-write when doing so. For btrfs this would be a btrfstune -u operation (see the sketch below). Performing this task whilst mounted will almost certainly corrupt the volume, as on dismount the OS flushes changes to a volume that no longer matches the identifier.
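
A hedged sketch of that re-touch (the target IQN, portal and device path are placeholders; the ext4 line is an assumption about the equivalent for other filesystems):

# Attach the cloned LUN over iSCSI but do NOT mount it
iscsiadm -m node -T <cloned-target-iqn> -p <nas-ip> --login
# Give the unmounted clone a new random filesystem UUID
btrfstune -u /dev/sdX            # btrfs (may also need -f to confirm)
# tune2fs -U random /dev/sdX     # ext4 equivalent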

Safest approach is avoid using a volume created off another on the same node.

Final note

It goes without saying, test with a test environment and make sure you have other backups!

iamasmith commented Aug 7, 2024

Noting that updating the VolumeSnapshot controller might be quite important with K8s 1.30. I'm hanging back in the home lab because I'm running CNPG, for which 1.30 is unsupported, and there still seem to be a few things to fix irrespective of iSCSI PVs and snapshots. It's a bit of a shame, since my GKE test clusters at work are on 1.30 already (no CNPG), but I still prefer to test in the home lab first.

tzago commented Sep 22, 2024

Hi @emmetog, have you figured out how the volumeHandle is mapped to the DSM iSCSI target IQN?
I also opened issue #90, as my case is not exactly about snapshots but about a LUN migration between two different Synology boxes.
