This repository has been archived by the owner on Jul 6, 2022. It is now read-only.

AttachVolume.Attach failed for volume invalid character 'j' looking for beginning of value #50

Closed
righteousjester opened this issue Jul 9, 2018 · 26 comments

@righteousjester

I'm receiving the following error when a pod tries to attach to a PV:

  Type     Reason                 Age   From                             Message
  ----     ------                 ----  ----                             -------
  Normal   Scheduled              28s   default-scheduler                Successfully assigned docker-registry-4-tk76w to nbschv091.telkom.co.za
  Normal   SuccessfulMountVolume  28s   kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-token-d9rqk"
  Normal   SuccessfulMountVolume  28s   kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-certificates"
  Warning  FailedAttachVolume     27s   attachdetach-controller          AttachVolume.Attach failed for volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" : invalid character 'j' looking for beginning of value

The PV is created successfully:

oc get pv
NAME                                       CAPACITY       ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM              STORAGECLASS   REASON    AGE
pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad   107374182400   RWO            Delete           Bound     default/registry   ovirt                    17m

And I can confirm I can see the Disk in RHV.

The pod has been set to always start on the same node: nbschv091.telkom.co.za.
I am using the docker-registry pod for this test.
nbschv091.telkom.co.za is one of our nodes selected to host OpenShift infrastructure components.
Here are the logs from that server when the pod starts up:

Jul 09 20:44:46 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:46.557961   17735 kubelet.go:1860] SyncLoop (ADD, "api"): "docker-registry-4-deploy_default(26e98c49-83a8-11e8-b1fa-001a4a1601ad)"
Jul 09 20:44:46 nbschv091.telkom.co.za systemd[1]: Created slice libcontainer container kubepods-besteffort-pod26e98c49_83a8_11e8_b1fa_001a4a1601ad.slice.
Jul 09 20:44:46 nbschv091.telkom.co.za systemd[1]: Starting libcontainer container kubepods-besteffort-pod26e98c49_83a8_11e8_b1fa_001a4a1601ad.slice.
Jul 09 20:44:46 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:46.602245   17735 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "deployer-token-5kzg5" (UniqueName: "kubernetes.io/secret/26e98c49-83a8-11e8-b1fa-001a4a1601ad-deployer-token-5kzg5") pod "docker-registry-4-deploy" (UID: "26e98c49-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:46 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:46.702823   17735 reconciler.go:262] operationExecutor.MountVolume started for volume "deployer-token-5kzg5" (UniqueName: "kubernetes.io/secret/26e98c49-83a8-11e8-b1fa-001a4a1601ad-deployer-token-5kzg5") pod "docker-registry-4-deploy" (UID: "26e98c49-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:46 nbschv091.telkom.co.za systemd[1]: Started Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/26e98c49-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/deployer-token-5kzg5.
Jul 09 20:44:46 nbschv091.telkom.co.za systemd[1]: Starting Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/26e98c49-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/deployer-token-5kzg5.
Jul 09 20:44:46 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:46.719687   17735 operation_generator.go:552] MountVolume.SetUp succeeded for volume "deployer-token-5kzg5" (UniqueName: "kubernetes.io/secret/26e98c49-83a8-11e8-b1fa-001a4a1601ad-deployer-token-5kzg5") pod "docker-registry-4-deploy" (UID: "26e98c49-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:46 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:46.879854   17735 kuberuntime_manager.go:385] No sandbox for pod "docker-registry-4-deploy_default(26e98c49-83a8-11e8-b1fa-001a4a1601ad)" can be found. Need to start a new one
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: nf_conntrack: falling back to vmalloc.
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: nf_conntrack: falling back to vmalloc.
Jul 09 20:44:47 nbschv091.telkom.co.za systemd[1]: Started libcontainer container ff6ba55a615a1d3fe55f5faa8aabd51af68f83c3dc94397055b77469efde8526.
Jul 09 20:44:47 nbschv091.telkom.co.za systemd[1]: Starting libcontainer container ff6ba55a615a1d3fe55f5faa8aabd51af68f83c3dc94397055b77469efde8526.
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 09 20:44:47 nbschv091.telkom.co.za NetworkManager[7872]: <info>  [1531161887.1889] device (veth2c2ae1b3): carrier: link connected
Jul 09 20:44:47 nbschv091.telkom.co.za NetworkManager[7872]: <info>  [1531161887.1899] manager: (veth2c2ae1b3): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jul 09 20:44:47 nbschv091.telkom.co.za NetworkManager[7872]: <info>  [1531161887.1908] device (veth2c2ae1b3): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Jul 09 20:44:47 nbschv091.telkom.co.za NetworkManager[7872]: <info>  [1531161887.1947] device (veth2c2ae1b3): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'managed')
Jul 09 20:44:47 nbschv091.telkom.co.za NetworkManager[7872]: <info>  [1531161887.2060] device (veth2c2ae1b3): enslaved to non-master-type device ovs-system; ignoring
Jul 09 20:44:47 nbschv091.telkom.co.za kernel: device veth2c2ae1b3 entered promiscuous mode
Jul 09 20:44:47 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:47.652772   17735 kubelet.go:1905] SyncLoop (PLEG): "docker-registry-4-deploy_default(26e98c49-83a8-11e8-b1fa-001a4a1601ad)", event: &pleg.PodLifecycleEvent{ID:"26e98c49-83a8-11e8-b1fa-001a4a1601ad", Type:"ContainerStarted", Data:"ff6ba55a615a1d3fe55f5faa8aabd51af68f83c3dc94397055b77469efde8526"}
Jul 09 20:44:49 nbschv091.telkom.co.za systemd[1]: Started libcontainer container c2a500710f6107421a474b40703a7d4982403d2127cfca35a1dfbf46fc77197f.
Jul 09 20:44:49 nbschv091.telkom.co.za systemd[1]: Starting libcontainer container c2a500710f6107421a474b40703a7d4982403d2127cfca35a1dfbf46fc77197f.
Jul 09 20:44:49 nbschv091.telkom.co.za kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Jul 09 20:44:49 nbschv091.telkom.co.za dockerd-current[100142]: time="2018-07-09T20:44:49.130727711+02:00" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container c2a500710f6107421a474b40703a7d4982403d2127cfca35a1dfbf46fc77197f"
Jul 09 20:44:49 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:49.708570   17735 kubelet.go:1905] SyncLoop (PLEG): "docker-registry-4-deploy_default(26e98c49-83a8-11e8-b1fa-001a4a1601ad)", event: &pleg.PodLifecycleEvent{ID:"26e98c49-83a8-11e8-b1fa-001a4a1601ad", Type:"ContainerStarted", Data:"c2a500710f6107421a474b40703a7d4982403d2127cfca35a1dfbf46fc77197f"}
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.412902   17735 kubelet.go:1860] SyncLoop (ADD, "api"): "docker-registry-4-tk76w_default(2935b697-83a8-11e8-b1fa-001a4a1601ad)"
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Created slice libcontainer container kubepods-burstable-pod2935b697_83a8_11e8_b1fa_001a4a1601ad.slice.
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Starting libcontainer container kubepods-burstable-pod2935b697_83a8_11e8_b1fa_001a4a1601ad.slice.
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: W0709 20:44:50.497731   17735 plugin-defaults.go:32] flexVolume driver ovirt/ovirt-flexvolume-driver: using default GetVolumeName for volume pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.516991   17735 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "registry-token-d9rqk" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-token-d9rqk") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.517055   17735 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "registry-certificates" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-certificates") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.517160   17735 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: E0709 20:44:50.517253   17735 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\"" failed. No retries permitted until 2018-07-09 20:46:52.517218426 +0200 SAST m=+457758.701673564 (durationBeforeRetry 2m2s). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\") pod \"docker-registry-4-tk76w\" (UID: \"2935b697-83a8-11e8-b1fa-001a4a1601ad\") "
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.617707   17735 reconciler.go:262] operationExecutor.MountVolume started for volume "registry-token-d9rqk" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-token-d9rqk") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.617775   17735 reconciler.go:262] operationExecutor.MountVolume started for volume "registry-certificates" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-certificates") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Started Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/2935b697-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/registry-token-d9rqk.
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Starting Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/2935b697-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/registry-token-d9rqk.
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Started Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/2935b697-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/registry-certificates.
Jul 09 20:44:50 nbschv091.telkom.co.za systemd[1]: Starting Kubernetes transient mount for /var/lib/origin/openshift.local.volumes/pods/2935b697-83a8-11e8-b1fa-001a4a1601ad/volumes/kubernetes.io~secret/registry-certificates.
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.640547   17735 operation_generator.go:552] MountVolume.SetUp succeeded for volume "registry-token-d9rqk" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-token-d9rqk") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")
Jul 09 20:44:50 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0709 20:44:50.641187   17735 operation_generator.go:552] MountVolume.SetUp succeeded for volume "registry-certificates" (UniqueName: "kubernetes.io/secret/2935b697-83a8-11e8-b1fa-001a4a1601ad-registry-certificates") pod "docker-registry-4-tk76w" (UID: "2935b697-83a8-11e8-b1fa-001a4a1601ad")

Layout of the oVirt pods:

oc get po -o wide
NAME                                       READY     STATUS    RESTARTS   AGE       IP              NODE
ovirt-flexvolume-driver-2xd45              1/1       Running   0          31m       10.143.56.122   nbschv095.telkom.co.za
ovirt-flexvolume-driver-blsbd              1/1       Running   0          31m       10.143.50.70    nbschv094.telkom.co.za
ovirt-flexvolume-driver-kxq9p              1/1       Running   0          31m       10.143.38.51    nbschv096.telkom.co.za
ovirt-volume-provisioner-6997cc7cd-cfw9w   1/1       Running   0          30m       10.143.38.52    nbschv096.telkom.co.za

nbschv094 to nbschv096 are application nodes.

@righteousjester
Author

Logs from ovirt-flexvolume-driver

oc logs ovirt-flexvolume-driver-2xd45

+ dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/
+ ls -la /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
total 0
drwxr-xr-x. 3 root root 43 Jun 26 16:34 .
drwxr-xr-x. 3 root root 18 Jul  6 10:30 ..
drwxr-xr-x. 2 root root 73 Jul  4 10:17 ovirt~ovirt-flexvolume-driver
+ rpm -ivh /root/ovirt-flexvolume-driver-v0.3.1-.el7.x86_64.rpm --force
Preparing...                          ########################################
Updating / installing...
ovirt-flexvolume-driver-v0.3.1-.el7   ########################################
+ cp -v /opt/ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf /usr/libexec/kubernetes/kubelet-plugins/volume/exec//ovirt~ovirt-flexvolume-driver/
'/opt/ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf' -> '/usr/libexec/kubernetes/kubelet-plugins/volume/exec//ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf'
+ true
+ sleep 1d

Logs from volume provisioner

oc logs ovirt-volume-provisioner-6997cc7cd-cfw9w

I0709 18:18:27.071623       1 ovirt-provisioner.go:46] Provisioner external/ovirt specified
I0709 18:18:27.072286       1 ovirt-provisioner.go:75] Building kube configs for running in cluster...
I0709 18:18:27.193581       1 controller.go:407] Starting provisioner controller 798b8851-83a4-11e8-beea-0a580a8f2634!
I0709 18:22:35.447996       1 controller.go:1073] scheduleOperation[lock-provision-default/test-01[0d830c30-83a5-11e8-b1fa-001a4a1601ad]]
I0709 18:22:35.469779       1 controller.go:1073] scheduleOperation[lock-provision-default/test-01[0d830c30-83a5-11e8-b1fa-001a4a1601ad]]
I0709 18:22:35.471012       1 leaderelection.go:156] attempting to acquire leader lease...
I0709 18:22:35.483856       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/test-01
I0709 18:22:35.483970       1 controller.go:1073] scheduleOperation[provision-default/test-01[0d830c30-83a5-11e8-b1fa-001a4a1601ad]]
I0709 18:22:35.487773       1 provision.go:73] About to provision a disk name: pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad domain: TELKOM-DIG-STD01 size: 1000000000 format: cow 
{"name":"pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad","provisioned_size":"1000000000","format":"cow","storage_domains":{"storage_domain":[{"name":"TELKOM-DIG-STD01"}]}}
I0709 18:22:35.939076       1 controller.go:806] volume "pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad" for claim "default/test-01" created
I0709 18:22:35.958027       1 controller.go:823] volume "pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad" for claim "default/test-01" saved
I0709 18:22:35.958089       1 controller.go:859] volume "pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad" provisioned for claim "default/test-01"
I0709 18:22:37.499882       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/test-01, task succeeded
I0709 18:27:41.319249       1 controller.go:1073] scheduleOperation[delete-pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad[0dcfe1d6-83a5-11e8-b1fa-001a4a1601ad]]
I0709 18:27:41.329804       1 provision.go:129] About to delete disk pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad id 5ea0c6a4-29af-4305-9134-00c8d4ebeada
I0709 18:27:42.103384       1 controller.go:1049] volume "pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad" deleted
I0709 18:27:42.112623       1 controller.go:1060] volume "pvc-0d830c30-83a5-11e8-b1fa-001a4a1601ad" deleted from database
I0709 18:29:25.475056       1 controller.go:1073] scheduleOperation[lock-provision-default/registry[01e8c6ec-83a6-11e8-b1fa-001a4a1601ad]]
I0709 18:29:25.492623       1 controller.go:1073] scheduleOperation[lock-provision-default/registry[01e8c6ec-83a6-11e8-b1fa-001a4a1601ad]]
I0709 18:29:25.500326       1 leaderelection.go:156] attempting to acquire leader lease...
I0709 18:29:25.516698       1 leaderelection.go:178] successfully acquired lease to provision for pvc default/registry
I0709 18:29:25.516804       1 controller.go:1073] scheduleOperation[provision-default/registry[01e8c6ec-83a6-11e8-b1fa-001a4a1601ad]]
I0709 18:29:25.519912       1 provision.go:73] About to provision a disk name: pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad domain: TELKOM-DIG-STD01 size: 107374182400 format: cow 
{"name":"pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad","provisioned_size":"107374182400","format":"cow","storage_domains":{"storage_domain":[{"name":"TELKOM-DIG-STD01"}]}}
I0709 18:29:25.889474       1 controller.go:806] volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" for claim "default/registry" created
I0709 18:29:25.895565       1 controller.go:823] volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" for claim "default/registry" saved
I0709 18:29:25.895637       1 controller.go:859] volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" provisioned for claim "default/registry"
I0709 18:29:27.533398       1 leaderelection.go:198] stopped trying to renew lease to provision for pvc default/registry, task succeeded
W0709 18:34:20.278633       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:379: watch of *v1.StorageClass ended with: The resourceVersion for the provided watch is too old.
W0709 18:37:27.247846       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:411: watch of *v1.PersistentVolumeClaim ended with: The resourceVersion for the provided watch is too old.
W0709 18:38:35.295999       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:412: watch of *v1.PersistentVolume ended with: The resourceVersion for the provided watch is too old.
W0709 18:40:26.358849       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:379: watch of *v1.StorageClass ended with: The resourceVersion for the provided watch is too old.
W0709 18:47:54.315609       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:412: watch of *v1.PersistentVolume ended with: The resourceVersion for the provided watch is too old.
W0709 18:52:57.451157       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:379: watch of *v1.StorageClass ended with: The resourceVersion for the provided watch is too old.
W0709 18:53:01.314330       1 reflector.go:323] github.com/ovirt/ovirt-openshift-extensions/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:411: watch of *v1.PersistentVolumeClaim ended with: The resourceVersion for the provided watch is too old.

@rgolangh rgolangh added the bug label Jul 10, 2018
@rgolangh rgolangh self-assigned this Jul 10, 2018
@rgolangh
Contributor

rgolangh commented Jul 10, 2018

I need some more logs from the node, but I think I know what's going on: the flexvolume driver on the node does not send the VM name that the disk should be attached to. Can you check the driver conf file on node 091:
/usr/libexec/kubernetes/kubelet-plugins/volume/exec//ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf

It should have a conf item ovirtVmName=nbschv091.telkom.co.za

If it's not there, then something in the deployment was missing. How did you deploy it, using the APB?

I guess the best way is to auto-detect it from a node label and pass it as an env var to the deploying container, which copies it into the conf.
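In the meantime, a minimal sketch of adding the entry by hand on the node (assuming the hostname reported by hostname -f matches the VM name in oVirt, and using the plugin path quoted above):

# Append the missing ovirtVmName entry to the driver conf on node 091
echo "ovirtVmName=$(hostname -f)" | sudo tee -a /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf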

@righteousjester
Author

Here are the contents of that file:

cat /usr/libexec/kubernetes/kubelet-plugins/volume/exec//ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf
url=https://nbschv109.telkom.co.za/ovirt-engine/api
username=ovirt-openshift-extensions@internal
password=XXXXXXXXX
insecure=True
cafile=

I deployed it the manual way, with the following command:

docker run \
 --rm \
 --net=host \
 -v $HOME/.kube:/opt/apb/.kube:z \
 -u $UID docker.io/rgolangh/ovirt-flexvolume-driver-apb \
 provision \
 -e admin_password=$OCP_PASS -e admin_user=$OCP_USER \
 -e cluster=openshift -e namespace=default \
 -e engine_password=$ENGINE_PASS -e engine_url=$ENGINE_URL \
 -e engine_username=$ENGINE_USER

The environment is tightly locked down, so this was the easiest way to get started.

I can try redeploying using the APB, if you believe that will help.

I can provide more logs, but it seems like you were correct already.
Let me know if you still want me to attach the logs.

@rgolangh
Contributor

rgolangh commented Jul 10, 2018

I'll make the deployment better and the discovery of ovirtVmName seamless.
edit: This issue has been addressed by #52

Is it working now that you set it?

@righteousjester
Author

It doesn't seem to be working.

Events:
  Type     Reason                 Age   From                             Message
  ----     ------                 ----  ----                             -------
  Normal   Scheduled              1m    default-scheduler                Successfully assigned docker-registry-6-26xgw to nbschv091.telkom.co.za
  Warning  FailedAttachVolume     1m    attachdetach-controller          AttachVolume.Attach failed for volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" : invalid character 'j' looking for beginning of value
  Normal   SuccessfulMountVolume  1m    kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-certificates"
  Normal   SuccessfulMountVolume  1m    kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-token-d9rqk"

This is the error that stands out in the logs on that server

Jul 10 10:44:57 nbschv091.telkom.co.za atomic-openshift-node[17735]: I0710 10:44:57.497081   17735 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" (UniqueName: "flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad") pod "docker-registry-6-26xgw" (UID: "3d506df9-841d-11e8-b1fa-001a4a1601ad")
Jul 10 10:44:57 nbschv091.telkom.co.za atomic-openshift-node[17735]: E0710 10:44:57.500812   17735 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\"" failed. No retries permitted until 2018-07-10 10:46:59.500770495 +0200 SAST m=+508165.685225626 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad\") pod \"docker-registry-6-26xgw\" (UID: \"3d506df9-841d-11e8-b1fa-001a4a1601ad\") "

The conf file looks like this now:

sudo cat /usr/libexec/kubernetes/kubelet-plugins/volume/exec//ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver.conf
url=https://nbschv109.telkom.co.za/ovirt-engine/api
username=ovirt-openshift-extensions@internal
password=XXXXXX
insecure=True
cafile=
ovirtVmName=nbschv091.telkom.co.za

Would I need to reload any of the oVirt pods?
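One way I could sanity-check that the driver actually reads the updated conf (a sketch; init is the standard flexvolume verb, and the path is the one shown above):

sudo /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ovirt~ovirt-flexvolume-driver/ovirt-flexvolume-driver init
# a healthy driver should answer with JSON along the lines of {"status":"Success", ...}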

@righteousjester
Author

An additional error appears in the oc describe po output:

Events:
  Type     Reason                 Age              From                             Message
  ----     ------                 ----             ----                             -------
  Normal   Scheduled              3m               default-scheduler                Successfully assigned docker-registry-6-26xgw to nbschv091.telkom.co.za
  Normal   SuccessfulMountVolume  3m               kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-certificates"
  Normal   SuccessfulMountVolume  3m               kubelet, nbschv091.telkom.co.za  MountVolume.SetUp succeeded for volume "registry-token-d9rqk"
  Warning  FailedAttachVolume     1m (x2 over 3m)  attachdetach-controller          AttachVolume.Attach failed for volume "pvc-01e8c6ec-83a6-11e8-b1fa-001a4a1601ad" : invalid character 'j' looking for beginning of value
  Warning  FailedMount            1m               kubelet, nbschv091.telkom.co.za  Unable to mount volumes for pod "docker-registry-6-26xgw_default(3d506df9-841d-11e8-b1fa-001a4a1601ad)": timeout expired waiting for volumes to attach/mount for pod "default"/"docker-registry-6-26xgw". list of unattached/unmounted volumes=[registry-storage]

@rgolangh
Contributor

No, you don't need to refresh the pods; it should have picked that up.

Does nbschv091.telkom.co.za match the name of the VM in oVirt?

@righteousjester
Author

Yes.
Happy to run some ovirt-engine commands to double check for you.

@rgolangh
Contributor

Obviously that annoying 'j' character is coming from the error returned while invoking the oVirt API, so first I want to see more of the journal log on the node to get more of that error.
The engine log, /var/log/ovirt-engine/engine.log, would probably also reveal the problem.
You can also run this from your node:

curl -k -u ovirt-openshift-external@admin:PASS https://nbschv109.telkom.co.za/ovirt-engine/api/vms?search=name=nbschv091.telkom.co.za

curl -k -u ovirt-openshift-external@admin:PASS https://nbschv109.telkom.co.za/ovirt-engine/api/disks
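A rough way to narrow the engine log down to the relevant entries (a sketch; the grep pattern is only a guess at the usual attach-failure wording):

grep -iE 'attach|disk|fail' /var/log/ovirt-engine/engine.log | tail -n 100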

@righteousjester
Author

curl -k -u ovirt-openshift-extensions@internal:XXXXX https://nbschv109.telkom.co.za/ovirt-engine/api/vms?search=name=nbschv091.telkom.co.za

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<vms>
    <vm href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b" id="8bf5a849-6df2-4d3b-9647-3ba33d4e905b">
        <actions>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/detach" rel="detach"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/export" rel="export"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/ticket" rel="ticket"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/migrate" rel="migrate"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/cancelmigration" rel="cancelmigration"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/logon" rel="logon"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/clone" rel="clone"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/freezefilesystems" rel="freezefilesystems"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/maintenance" rel="maintenance"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/previewsnapshot" rel="previewsnapshot"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/reordermacaddresses" rel="reordermacaddresses"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/thawfilesystems" rel="thawfilesystems"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/undosnapshot" rel="undosnapshot"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/reboot" rel="reboot"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/commitsnapshot" rel="commitsnapshot"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/shutdown" rel="shutdown"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/start" rel="start"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/stop" rel="stop"/>
            <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/suspend" rel="suspend"/>
        </actions>
        <name>nbschv091.telkom.co.za</name>
        <comment>Openshift Infra Node 01</comment>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/katelloerrata" rel="katelloerrata"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/permissions" rel="permissions"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/tags" rel="tags"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/hostdevices" rel="hostdevices"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/numanodes" rel="numanodes"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/reporteddevices" rel="reporteddevices"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/sessions" rel="sessions"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/snapshots" rel="snapshots"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/applications" rel="applications"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/affinitylabels" rel="affinitylabels"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/diskattachments" rel="diskattachments"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/graphicsconsoles" rel="graphicsconsoles"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/cdroms" rel="cdroms"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/nics" rel="nics"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/watchdogs" rel="watchdogs"/>
        <link href="/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/statistics" rel="statistics"/>
        <bios>
            <boot_menu>
                <enabled>false</enabled>
            </boot_menu>
        </bios>
        <cpu>
            <architecture>x86_64</architecture>
            <topology>
                <cores>4</cores>
                <sockets>2</sockets>
                <threads>1</threads>
            </topology>
        </cpu>
        <cpu_shares>0</cpu_shares>
        <creation_time>2018-05-25T11:42:54.609+02:00</creation_time>
        <delete_protected>false</delete_protected>
        <display>
            <address>10.145.209.74</address>
            <allow_override>true</allow_override>
            <copy_paste_enabled>true</copy_paste_enabled>
            <disconnect_action>LOCK_SCREEN</disconnect_action>
            <file_transfer_enabled>true</file_transfer_enabled>
            <monitors>1</monitors>
            <port>5901</port>
            <secure_port>5902</secure_port>
            <single_qxl_pci>false</single_qxl_pci>
            <smartcard_enabled>false</smartcard_enabled>
            <type>spice</type>
        </display>
        <high_availability>
            <enabled>true</enabled>
            <priority>1</priority>
        </high_availability>
        <initialization>
            <nic_configurations/>
            <regenerate_ssh_keys>false</regenerate_ssh_keys>
        </initialization>
        <io>
            <threads>0</threads>
        </io>
        <large_icon href="/ovirt-engine/api/icons/b8cc895b-03e8-7241-98e9-9093550e6f99" id="b8cc895b-03e8-7241-98e9-9093550e6f99"/>
        <memory>17179869184</memory>
        <memory_policy>
            <guaranteed>17179869184</guaranteed>
            <max>68719476736</max>
        </memory_policy>
        <migration>
            <auto_converge>inherit</auto_converge>
            <compressed>inherit</compressed>
        </migration>
        <migration_downtime>-1</migration_downtime>
        <origin>ovirt</origin>
        <os>
            <boot>
                <devices>
                    <device>hd</device>
                </devices>
            </boot>
            <type>rhel_7x64</type>
        </os>
        <placement_policy>
            <affinity>migratable</affinity>
        </placement_policy>
        <small_icon href="/ovirt-engine/api/icons/c4d2cd16-073d-066e-0151-5c9406a58100" id="c4d2cd16-073d-066e-0151-5c9406a58100"/>
        <sso>
            <methods>
                <method id="guest_agent"/>
            </methods>
        </sso>
        <start_paused>false</start_paused>
        <stateless>false</stateless>
        <storage_error_resume_behaviour>auto_resume</storage_error_resume_behaviour>
        <time_zone>
            <name>Africa/Johannesburg</name>
        </time_zone>
        <type>server</type>
        <usb>
            <enabled>false</enabled>
        </usb>
        <cluster href="/ovirt-engine/api/clusters/5afd3646-019e-0224-0368-0000000001c3" id="5afd3646-019e-0224-0368-0000000001c3"/>
        <cpu_profile href="/ovirt-engine/api/cpuprofiles/5afd364d-023d-0143-0361-000000000137" id="5afd364d-023d-0143-0361-000000000137"/>
        <quota id="5afd3665-00d8-014c-0119-000000000267"/>
        <fqdn>nbschv091.telkom.co.za</fqdn>
        <guest_operating_system>
            <architecture>x86_64</architecture>
            <codename>Maipo</codename>
            <distribution>Red Hat Enterprise Linux Server</distribution>
            <family>Linux</family>
            <kernel>
                <version>
                    <build>0</build>
                    <full_version>3.10.0-862.3.3.el7.x86_64</full_version>
                    <major>3</major>
                    <minor>10</minor>
                    <revision>862</revision>
                </version>
            </kernel>
            <version>
                <full_version>7.5</full_version>
                <major>7</major>
                <minor>5</minor>
            </version>
        </guest_operating_system>
        <guest_time_zone>
            <name>Africa/Johannesburg</name>
            <utc_offset>+02:00</utc_offset>
        </guest_time_zone>
        <next_run_configuration_exists>false</next_run_configuration_exists>
        <numa_tune_mode>interleave</numa_tune_mode>
        <run_once>false</run_once>
        <start_time>2018-07-03T14:44:11.088+02:00</start_time>
        <status>up</status>
        <stop_time>2018-07-03T14:42:56.542+02:00</stop_time>
        <host href="/ovirt-engine/api/hosts/23c86b32-de25-4bfa-a011-68f684c85d9e" id="23c86b32-de25-4bfa-a011-68f684c85d9e"/>
        <original_template href="/ovirt-engine/api/templates/ab32ec26-fe14-40fc-8198-ebe44a5850bc" id="ab32ec26-fe14-40fc-8198-ebe44a5850bc"/>
        <template href="/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/>
    </vm>
</vms>
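The diskattachments link in the output above can also be queried directly to check whether the provisioned disk ever got attached to this VM (a sketch reusing the VM id and credentials from the command above):

curl -k -u ovirt-openshift-extensions@internal:XXXXX https://nbschv109.telkom.co.za/ovirt-engine/api/vms/8bf5a849-6df2-4d3b-9647-3ba33d4e905b/diskattachments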

@righteousjester
Author

curl -k -u ovirt-openshift-extensions@internal:XXXX https://nbschv109.telkom.co.za/ovirt-engine/api/disks

The output is rather large, so here is the file.

disks.xml.gz

@righteousjester
Author

Here is today's journal for the node nbschv091.telkom.co.za:

journal.today.log.gz

@righteousjester
Author

Engine logs:

engine.log.gz

@rgolangh
Contributor

rgolangh commented Aug 1, 2018

@righteousjester Are you still getting this? I can't reproduce it with OpenShift 3.9.0.
Also, I don't see the error in the journal log. Perhaps it is not the journal of the node?

@righteousjester
Author

I am going to redeploy everything again to see if the error has cleared.

BTW, I tried the automatic deploy and it fails at this step:

Successfully built aeafa394b231
make[1]: Leaving directory `/home/openshift/.tmp/github-ovirt-openshift-extensions/ovirt-openshift-extensions-master/deployment/ovirt-flexvolume-driver-apb'
make -C deployment/ovirt-flexvolume-driver-apb/ apb_push
make[1]: Entering directory `/home/openshift/.tmp/github-ovirt-openshift-extensions/ovirt-openshift-extensions-master/deployment/ovirt-flexvolume-driver-apb'
docker run --rm --privileged -v /home/openshift/.tmp/github-ovirt-openshift-extensions/ovirt-openshift-extensions-master/deployment/ovirt-flexvolume-driver-apb:/mnt:z -v /root/.kube:/.kube:z -v /var/run/docker.sock:/var/run/docker.sock -u 1000 docker.io/ansibleplaybookbundle/apb-tools:latest push
Exception occurred! [Errno 13] Permission denied: '/.kube/config'
make[1]: *** [apb_push] Error 1
make[1]: Leaving directory `/home/openshift/.tmp/github-ovirt-openshift-extensions/ovirt-openshift-extensions-master/deployment/ovirt-flexvolume-driver-apb'
make: *** [apb_push] Error 2

I am running as root, but it looks like the path /.kube/config is missing a ~.

I will redeploy following the manual steps for now.

@rgolangh
Contributor

rgolangh commented Aug 1, 2018

I think you bumped into #30

Is this a problem with the README not being updated?

@righteousjester
Author

The README file looks fine.

 -v $HOME/.kube:/opt/apb/.kube:z \

I don't think it is pulling through $HOME when running:

make apb_build apb_push

But we can leave that for another discussion.
I'm still busy redeploying everything.

@righteousjester
Author

Still getting the error.
To confirm, the PV and PVC are created successfully, and I can see that the oVirt disk is created successfully.
The issue is that the disk is not attaching.

Events from the pod:

  Normal   Scheduled              2m                default-scheduler                Successfully assigned mysql-2-hjz5f to nbschv095.telkom.co.za
  Normal   SuccessfulMountVolume  2m                kubelet, nbschv095.telkom.co.za  MountVolume.SetUp succeeded for volume "data"
  Normal   SuccessfulMountVolume  2m                kubelet, nbschv095.telkom.co.za  MountVolume.SetUp succeeded for volume "default-token-7rr69"
  Warning  FailedMount            31s               kubelet, nbschv095.telkom.co.za  Unable to mount volumes for pod "mysql-2-hjz5f_lsd-dev(38afd3e8-95b6-11e8-aa51-001a4a1601ad)": timeout expired waiting for volumes to attach/mount for pod "lsd-dev"/"mysql-2-hjz5f". list of unattached/unmounted volumes=[test01]
  Warning  FailedAttachVolume     23s (x9 over 2m)  attachdetach-controller          AttachVolume.Attach failed for volume "pvc-f3a661b1-95ad-11e8-aa51-001a4a1601ad" : invalid character 'j' looking for beginning of value

Here is the deployment of the oVirt extensions:

oc get po -o wide -n ovirt-openshift-extensions 
NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
ovirt-flexvolume-driver-r5m8k               1/1       Running   0          1h        10.143.38.155   nbschv096.telkom.co.za
ovirt-flexvolume-driver-tb6vx               1/1       Running   0          1h        10.143.51.199   nbschv094.telkom.co.za
ovirt-flexvolume-driver-x5t28               1/1       Running   0          1h        10.143.57.241   nbschv095.telkom.co.za
ovirt-volume-provisioner-66f8cc6fc5-8dm9k   1/1       Running   0          1h        10.143.51.200   nbschv094.telkom.co.za

If I look at the logs on node 095, I see the errors:

Aug 01 20:15:00 nbschv095.telkom.co.za atomic-openshift-node[3690]: E0801 20:15:00.215669    3690 nestedpendingoperations.go:267] Operation for "\"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-f3a661b1-95ad-11e8-aa51-001a4a1601ad\"" failed. No retries permitted until 2018-08-01 20:17:02.215622134 +0200 SAST m=+383020.741232731 (durationBeforeRetry 2m2s). Error: "Volume not attached according to node status for volume \"pvc-f3a661b1-95ad-11e8-aa51-001a4a1601ad\" (UniqueName: \"flexvolume-ovirt/ovirt-flexvolume-driver/pvc-f3a661b1-95ad-11e8-aa51-001a4a1601ad\") pod \"mysql-2-hjz5f\" (UID: \"38afd3e8-95b6-11e8-aa51-001a4a1601ad\") "

I will attach the logs from that node for only the past day, using journalctl --since "1 days ago", so you are not swamped with logs.
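For reference, this is roughly the capture I am running (a sketch; the unit name is the one that appears in the log lines above, and the output filename is just illustrative):

sudo journalctl -u atomic-openshift-node --since "1 days ago" | gzip > journal.node.log.gz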

@righteousjester
Author

@rgolangh
Contributor

rgolangh commented Aug 7, 2018

I don't see the flexvolume driver in action in the log at all. Could it be that the kubelet is running in a container?
Anyhow, I fixed several issues, improved the deployment (you no longer need to add ovirtVmName to the config), and added a few more logging statements that will help us see more. If you want, we can chat in the #ovirt room on irc.oftc.net; my nickname is rgolan there.

@jasonbrooks

I'm having similar issues -- the PV and PVC are created and I see the disk in oVirt, but it doesn't attach to my node VM. Also, I've had no luck getting the APB to work at all. I'll try to catch you on IRC -- I'd really like to get this working. I'm currently using oVirt 4.2.5 and Origin 3.10.

@rgolangh
Contributor

rgolangh commented Aug 9, 2018

How are you guys deploying the nodes? What's the output of docker ps | grep node?
Could it be that both of you have the kubelet running as a pod (a.k.a. containerized kubelet)? In that case the plugin would have to be copied some other way, because the kubelet running as a container will only see the host's plugin directory. Also, I believe there is a bug where the containerized kubelet doesn't load the flexvolume plugins; this should be fixed in 3.11, I think.
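A quick way to tell which setup you have (a sketch; the unit names are the ones that show up elsewhere in this thread, atomic-openshift-node on OCP and origin-node on Origin):

docker ps --format '{{.Names}}\t{{.Image}}' | grep -i node
systemctl is-active atomic-openshift-node   # or origin-node on Origin installs
# rpm-installed kubelet: the systemd unit is active and docker ps shows no node container
# containerized kubelet: the node process shows up in docker ps instead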

@jasonbrooks

My nodes are running on regular (not Atomic) CentOS 7 VMs. origin-node is installed as an RPM on each of the nodes. I didn't specify a containerized install, so I don't think the kubelets are containerized.

@jasonbrooks

@righteousjester This is working for me now. I do see an invalid character error, but then the volume mount succeeds.

@rgolangh
Contributor

Every debugging attempt should include the logs of the kube-controller-manager pod running under the kube-system namespace. That log shows the initialization of the driver and the call-outs. An error like the one in the description should be clearly visible there.
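A sketch of pulling those logs (the pod name placeholder is illustrative; use whichever controller-manager pod your cluster actually runs):

oc -n kube-system get pods | grep -i controller-manager
oc -n kube-system logs <controller-manager-pod> | grep -iE 'flexvolume|attach'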

@rgolangh
Contributor

rgolangh commented Sep 5, 2018

I'm going to close this; I think this is solved from v0.3.2 onward, probably even earlier.

@rgolangh rgolangh closed this as completed Sep 5, 2018