This repository has been archived by the owner on Nov 9, 2022. It is now read-only.

Which fsTypes are supported #90

Open
kaedwen opened this issue Mar 28, 2019 · 11 comments
Labels
bug Something isn't working

Comments

@kaedwen

kaedwen commented Mar 28, 2019

I have a zpool mounted on /var/lib/storageos.

Using ext4 as the fsType crashes when the PVC is used and the final `mount -t ext4 ..` is executed.
I cannot find any other supported fsTypes. I have tried setting zfs as the fsType, without luck.

Is this possible in general, or do I have to use ext4?

@avestuk
Contributor

avestuk commented Mar 28, 2019

Hi @hei-pa, we do not support zfs as an fsType. We support the following fsTypes: ext2, ext3, ext4, btrfs, and xfs.

https://docs.storageos.com/docs/concepts/volumes
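
For reference, a StorageClass that pins one of the supported fsTypes looks roughly like this (a sketch based on the docs; the name and pool values are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  fsType: ext4   # one of ext2, ext3, ext4, btrfs, xfs
```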

@kaedwen
Author

kaedwen commented Mar 28, 2019

Ok, thanks for the info.

ZFS can be mounted in legacy mode like other filesystems, so `mount -t zfs ...` does work. What limitations prevent this from being supported?

@avestuk
Contributor

avestuk commented Mar 28, 2019

@hei-pa could you perhaps share any logs you have about the crash with us? For instance, what did the PVC events log show? If it's easier, you can send the crash reports to [email protected]

With regard to which filesystems we support: we are agnostic about the file system mounted on /var/lib/storageos. The fsType of the StorageClass has been designed to be pluggable, such that if enough people show interest in using ZFS, we would support the creation of ZFS volumes with StorageOS.

Out of curiosity, I did try to create a volume with a storage class that had fsType set to zfs in my own cluster and I got the following error:

```
Warning    ProvisioningFailed  8s (x2 over 20s)  persistentvolume-controller  Failed to provision volume with StorageClass "zfs": API error (Server failed to process your request. Was the data correct?): couldnt process fs type: fs type not valid
```

That makes me wonder whether Kubernetes supports ZFS PVs at all. That's something I haven't yet been able to find an answer to so I will get back to you on that.

@kaedwen
Author

kaedwen commented Mar 28, 2019

Ok, I don't have that deep a knowledge of Kubernetes and PVCs. I have set up a single master node to evaluate things a little.

My setup, with zpools as the root filesystem or data pools mounted to /var/lib/storageos, doesn't sound that uncommon.

Is it possible to delete the current StorageClass and create a new one without reinstalling the whole StorageOS part? The default setup of storageos-operator creates a StorageClass with ext4.


Now, with a new cluster and a StorageClass with fsType zfs, I get the following. I don't know if something else is wrong here:

```
Name:          keeweb
Namespace:     keeweb
StorageClass:  fast
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class: fast
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Events:
  Type       Reason              Age                 From                         Message
  ----       ------              ----                ----                         -------
  Warning    ProvisioningFailed  26s (x6 over 116s)  persistentvolume-controller  Failed to provision volume with StorageClass "fast": invalid node format: lookup : no such host
Mounted By:  <none>
```

@avestuk
Contributor

avestuk commented Mar 28, 2019

@hei-pa So using zpools or data pools as the file system mounted under /var/lib/storageos should not be an issue.

You can have multiple StorageClasses and specify which one you wish to use when you create PVCs. You can create a new StorageClass and change the fsType parameter to the one you want. `kubectl get sc fast -o yaml > /tmp/sc.yaml` would save the StorageClass out to /tmp/sc.yaml, so you could edit the file and then `kubectl apply -f /tmp/sc.yaml` to create a StorageClass that uses zfs.
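
The clone-and-edit workflow can be sketched without a cluster; here a heredoc stands in for the output of `kubectl get sc fast -o yaml`, and the field values are illustrative:

```shell
# A saved copy of an existing StorageClass (stand-in for
# `kubectl get sc fast -o yaml > /tmp/sc.yaml`)
cat > /tmp/sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/storageos
parameters:
  pool: default
  fsType: ext4
EOF
# Give the copy a new name and fsType so it does not collide with the
# original; the result could then be applied with `kubectl apply -f /tmp/sc.yaml`
sed -i 's/name: fast/name: fast-xfs/; s/fsType: ext4/fsType: xfs/' /tmp/sc.yaml
grep -E 'name:|fsType:' /tmp/sc.yaml
```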

If you have reinstalled, I'd suggest you create a StorageClass with an fsType of ext4 and test whether you can provision volumes that way before you try to create a StorageClass that uses zfs.

@kaedwen
Author

kaedwen commented Mar 28, 2019

Yes, I deleted the sc fast and it was recreated by StorageOS.

With this one (which has ext4 as fsType) it looks good to me:

```
Name:          keeweb
Namespace:     keeweb
StorageClass:  fast
Status:        Bound
Volume:        pvc-430fd182-5169-11e9-9d91-001999e205fd
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: fast
               volume.beta.kubernetes.io/storage-provisioner: storageos
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age   From                                                                    Message
  ----       ------                 ----  ----                                                                    -------
  Normal     ExternalProvisioning   60s   persistentvolume-controller                                             waiting for a volume to be created, either by external provisioner "storageos" or manually created by system administrator
  Normal     Provisioning           60s   storageos_storageos-statefulset-0_1c021d7b-5169-11e9-81bd-0a580a00002c  External provisioner is provisioning volume for claim "keeweb/keeweb"
  Normal     ProvisioningSucceeded  59s   storageos_storageos-statefulset-0_1c021d7b-5169-11e9-81bd-0a580a00002c  Successfully provisioned volume pvc-430fd182-5169-11e9-9d91-001999e205fd
Mounted By:  <none>
```

Now using that PVC fails because the created storage cannot be mounted (presumably because it is on a zpool):

```
root@srv-01:/var/lib/storageos# kubectl describe pod -n keeweb keeweb
Name:               keeweb
Namespace:          keeweb
Priority:           0
PriorityClassName:  <none>
Node:               srv-01/192.168.20.11
Start Time:         Thu, 28 Mar 2019 15:55:18 +0100
Labels:             app=keeweb
Annotations:        <none>
Status:             Pending
IP:                 
Containers:
  keeweb:
    Container ID:   
    Image:          antelle/keeweb
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/external from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-p4cdn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  keeweb
    ReadOnly:   false
  default-token-p4cdn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-p4cdn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               116s                default-scheduler        Successfully assigned keeweb/keeweb to srv-01
  Normal   SuccessfulAttachVolume  116s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-430fd182-5169-11e9-9d91-001999e205fd"
  Warning  FailedMount             45s (x8 over 110s)  kubelet, srv-01          MountVolume.SetUp failed for volume "pvc-430fd182-5169-11e9-9d91-001999e205fd" : rpc error: code = Unknown desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t ext4 -o defaults /var/lib/storageos/volumes/376b7983-6a72-f59b-945a-be223b5d93b4 /var/lib/kubelet/pods/808b363e-5169-11e9-9d91-001999e205fd/volumes/kubernetes.io~csi/pvc-430fd182-5169-11e9-9d91-001999e205fd/mount
Output: mount: wrong fs type, bad option, bad superblock on /var/lib/storageos/volumes/376b7983-6a72-f59b-945a-be223b5d93b4,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
```

@avestuk
Contributor

avestuk commented Mar 28, 2019

@hei-pa Could you show me what the StorageClass you are using looks like?

Also, could you try to create a volume using the StorageOS CLI (`storageos volume create test`) and then use the CLI to mount that volume on one of your nodes (`storageos volume mount test /mnt`)?
https://docs.storageos.com/docs/reference/cli/

@kaedwen
Author

kaedwen commented Mar 28, 2019

I now have a zfs StorageClass, after following your advice to extract the original with `-o yaml >`:

```yaml
kind: StorageClass
metadata:
  labels:
    app: storageos
  name: zfs
parameters:
  csi.storage.k8s.io/fstype: zfs
  pool: default
provisioner: storageos
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

Now using this one in a PVC results in:

```
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               45m                 default-scheduler        Successfully assigned keeweb/keeweb to srv-01
  Normal   SuccessfulAttachVolume  45m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-f003cc31-5172-11e9-9d91-001999e205fd"
  Warning  FailedMount             14m (x23 over 45m)  kubelet, srv-01          MountVolume.SetUp failed for volume "pvc-f003cc31-5172-11e9-9d91-001999e205fd" : rpc error: code = Unknown desc = exec: "mkfs.zfs": executable file not found in $PATH
```

There is no mkfs.zfs, so it's not working that way.

Now trying the StorageOS CLI results in similar errors:

```
root@srv-01:~/kube/storage-os# storageos -D volume list
NAMESPACE/NAME                                   SIZE  MOUNT   SELECTOR  STATUS  REPLICAS  LOCATION
default/test                                     5GiB  srv-01            active  0/0       srv-01 (healthy)
root@srv-01:/# storageos -D volume mount test /mnt
DEBU[0000] StorageOS volume ready: /mnt                 
DEBU[0000] Mountpoint created: /mnt                     
ERRO[0000] fail to get output from command               args="[-t ext4 /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860 /mnt]" cmd=/bin/mount error="exit status 32"
ERRO[0000] Mount failed                                  error="exit status 32" fs_type=ext4 mount_point=/mnt path=/var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860
failed to mount volume, beginning retry 1
```

It would be fine for me if StorageOS creates ext4 filesystem blobs, but the underlying filesystem has to be a zpool because I have no other.

If I take a look at the disk that should be mounted:

```
root@srv-01:~/kube/storage-os# fdisk /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860

Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x5ed3d878.

Command (m for help): p
Disk /var/lib/storageos/volumes/81c34da8-4012-7b3d-bbb6-47365daf5860: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x5ed3d878
```

There is no partition? It's the whole disk? Is that correct? Something like this cannot be mounted.

@avestuk
Contributor

avestuk commented Mar 29, 2019

@hei-pa I checked with colleagues, and when using CSI, StorageOS actually does the formatting of the block device. However, we do not currently support ZFS filesystems, so this is not possible.

On the subject of partitions: we do not create partitions on our volumes; however, the lack of partitions does not prevent the file system from being mounted. In the output below, I have mounted the volume default/pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc on /mnt.

```
[root@alexv-rancher-nodes1 devices]# /usr/local/bin/storageos v inspect default/pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc
[
    {
        "id": "8e594ea5-2d12-5e2b-eb6e-255909917a50",
        "inode": 117071,
        "name": "pvc-52e8d3e6-517f-11e9-99f1-0681640e8ccc",
        "size": 5,
        "pool": "default",
        "fsType": "ext4",
        "description": "",
        "labels": {
            "fsType": "zfs",
            "storageos.com/presentation": "mounted"
        },
        "namespace": "default",
        "nodeSelector": "",
        "master": {
            "id": "5d51f16a-ef21-fe7a-6ae3-5d6e5fafe78e",
            "inode": 115676,
            "node": "cc07a8ea-3781-0ae7-4339-0cc9fd10476b",
            "nodeName": "alexv-rancher-nodes1",
            "health": "healthy",
            "status": "active",
            "createdAt": "2019-03-28T17:44:22.73482705Z"
        },
        "mounted": true,
        "mountDevice": "/var/lib/kubelet/volumeplugins/kubernetes.io~storageos/devices/8e594ea5-2d12-5e2b-eb6e-255909917a50",
        "mountpoint": "/mnt",
        "mountedAt": "2019-03-28T17:46:15.177183536Z",
        "mountedBy": "alexv-rancher-nodes1",
        "replicas": [],
        "health": "healthy",
        "status": "active",
        "statusMessage": "replica 5d51f16a-ef21-fe7a-6ae3-5d6e5fafe78e was synced with master at 2019-03-28 17:44:28.191259681 +0000 UTC m=+1150.641302991",
        "mkfsDone": true,
        "mkfsDoneAt": "2019-03-28T17:32:14.613382586Z",
        "createdAt": "2019-03-28T17:31:30.551878926Z",
        "createdBy": ""
    }
]
[root@alexv-rancher-nodes1 devices]# fdisk 8e594ea5-2d12-5e2b-eb6e-255909917a50 -l

Disk 8e594ea5-2d12-5e2b-eb6e-255909917a50: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
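
This whole-device behavior can be reproduced with any file-backed image: mkfs works directly on the file, and no partition table is involved (a local sketch; the path is illustrative):

```shell
# Create a 64 MiB file-backed "device" with no partition table
dd if=/dev/zero of=/tmp/vol.img bs=1M count=64 status=none
# Format the whole file directly with ext4, analogous to how the
# volume blob above is formatted as a whole device
mkfs.ext4 -q -F /tmp/vol.img
# 'file' reports an ext4 filesystem, not a partition table
file /tmp/vol.img
```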

Could you try the following and share the output?

```
storageos volume create test2 -f ext4
storageos -D volume mount test2 /mnt
```

@kaedwen
Author

kaedwen commented Mar 30, 2019

Ok, got the partition thing.

Here is the output:

```
root@srv-01:~/kube/storage-os# storageos volume create test2 -f ext4
default/test2
root@srv-01:~/kube/storage-os# storageos -D volume mount test2 /mnt
DEBU[0000] volume found: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 
DEBU[0001] checking volume for existing filesystem: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64: output: /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64: data 
DEBU[0001] volume /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 has fs type: raw 
DEBU[0001] creating ext4 filesystem on volume /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64 
ERRO[0001] fail to get output from command               args="[-F -U dfa755b3-d448-ff6a-3933-3691b3a82e64 -b 4096 -E lazy_itable_init=1,lazy_journal_init=1 /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64]" cmd=/sbin/mkfs.ext4 error="exit status 5"
WARN[0001] create filesystem failed, retrying in 1s      err="exit status 5" fstype=ext4 path=/var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64
```

So forcing an ext4 blob file does not work, for some reason.

I tried the same command manually and got this error:

```
root@srv-01:~/kube/storage-os# mkfs.ext4 -F -U dfa755b3-d448-ff6a-3933-3691b3a82e64 -b 4096 -E lazy_itable_init=1,lazy_journal_init=1 /var/lib/storageos/volumes/dfa755b3-d448-ff6a-3933-3691b3a82e64
mke2fs 1.43.4 (31-Jan-2017)
Creating filesystem with 1310720 4k blocks and 327680 inodes
Filesystem UUID: dfa755b3-d448-ff6a-3933-3691b3a82e64
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information:      
Warning, had trouble writing out superblocks.
```

Edit:

I tried creating a file with an ext4 signature in /var/lib/storageos (the zfs dataset mountpoint) and everything worked.

Then I did the same inside /var/lib/storageos/volumes, and this did not work:

```
root@srv-01:/var/lib/storageos/volumes# dd if=/dev/zero of=blob bs=4k count=600
dd: failed to open 'blob': Function not implemented
```

What's wrong with the volumes directory? Is it something special?


Ok, there is something mounted on volumes:

```
storageos on /var/lib/storageos/volumes type fuse.storageos (rw,nosuid,noexec,noatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
```
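
A quick way to check whether a directory is backed by a special mount like this is findmnt; a sketch, with / standing in for /var/lib/storageos/volumes:

```shell
# Print the source device and filesystem type of the mount containing
# a path; on the node above this would show fuse.storageos for the
# volumes directory
findmnt -n -o SOURCE,FSTYPE --target /
```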

@avestuk
Contributor

avestuk commented Apr 1, 2019

@hei-pa Thanks for doing that. I've created a ticket internally for someone to recreate this issue; however, I don't know when the development team will be able to spend time on it. In the meantime, I'd suggest that you move /var/lib/storageos to a file system that is not ZFS-formatted.

If you've got any questions in the future, feel free to send me a message on our public Slack.

@avestuk avestuk added the bug Something isn't working label Apr 1, 2019