What happened:
When warm migrating a VM with one 10GiB disk and one 1TiB disk from vSphere using Forklift, the migration fails to import the smaller disk during the final disk import stage. The error from the cdi importer pod is: virtual image size 10737418240 is larger than the reported available storage 9428545536. A larger PVC is required. Unable to resize disk image to requested size.
I have the filesystem overhead set to 15% for this storage class, and CDI respects it: the DataVolume is created with size = 10GiB and the corresponding PVC requests 12048MiB.
Despite the ~2GiB filesystem overhead, the reported available storage is smaller than the disk being imported.
When migrating the same VM using a 25% overhead, the import succeeds.
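For reference, a quick sanity check of the numbers above. The 1/(1 - overhead) scaling below is my assumption about how the PVC request is derived from the DataVolume size, but it reproduces the observed 12048MiB PVC exactly:

```python
import math

GIB = 1024 ** 3
MIB = 1024 ** 2

virtual_size = 10737418240  # 10 GiB, from the importer error
available = 9428545536      # "reported available storage" from the importer error
overhead = 0.15             # filesystem overhead configured for this storage class

# Assumed sizing: scale the virtual size by 1 / (1 - overhead), round up to a whole MiB.
request_mib = math.ceil(virtual_size / (1 - overhead) / MIB)

print(f"PVC request:        {request_mib} MiB")                           # 12048 MiB
print(f"PVC capacity:       {request_mib * MIB / GIB:.2f} GiB")           # ~11.77 GiB
print(f"virtual size:       {virtual_size / GIB:.2f} GiB")                # 10.00 GiB
print(f"reported available: {available / GIB:.2f} GiB")                   # ~8.78 GiB
print(f"shortfall:          {(virtual_size - available) / GIB:.2f} GiB")  # ~1.22 GiB
```

In other words, the PVC is ~1.8GiB larger than the virtual size, yet the filesystem on it reports roughly 3GiB less than the PVC's requested capacity, so the resize check comes up ~1.22GiB short.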
What you expected to happen:
A 15% filesystem overhead should be sufficient to import the 10GiB disk. More broadly, a fixed percentage overhead should work for disks of all sizes.
How to reproduce it (as minimally and precisely as possible):
DataVolume Spec:
Prime PVC for this DV:
StorageClass:
Additional context:
I understand that a certain % overhead is needed to import when using Filesystem volumeMode, but I've observed that the % needs to be increased as the virtual disk size increases. My understanding is that a given % should be sufficient across all disk sizes, since the actual overhead in bytes would increase in proportion to the virtual disk.
Some other examples we've observed:
16% overhead is sufficient for an 80GB VM but not sufficient for a 500GB VM.
20% overhead is not sufficient for a 2TB VM.
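To make the proportionality argument concrete, here is the absolute headroom a fixed percentage buys at each of these sizes (simple arithmetic using the same assumed 1/(1 - overhead) scaling as the sketch above; the 2TB case is approximated as 2000GB):

```python
# (disk size in GB, overhead fraction, observed outcome) from the cases above
cases = [
    (10.74,  0.15, "fails (this report, 10GiB disk)"),
    (80.0,   0.16, "succeeds"),
    (500.0,  0.16, "fails"),
    (2000.0, 0.20, "fails"),
]

for size_gb, pct, outcome in cases:
    # extra space the overhead percentage adds on top of the virtual size
    headroom_gb = size_gb / (1 - pct) - size_gb
    print(f"{size_gb:7.2f} GB disk at {pct:.0%} overhead -> "
          f"~{headroom_gb:6.1f} GB headroom, import {outcome}")
```

The absolute headroom grows with the disk size in every case, yet the larger imports still fail, which is what I can't reconcile with a purely percentage-based overhead.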
Perhaps I am missing something: why should the % overhead have to increase as the disk size increases? Shouldn't the relative nature of a percentage account for this? Any guidance on how these percentages are used, especially in the context of multistage imports, is greatly appreciated 🙏
Environment:
CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.58.0
Kubernetes version (use kubectl version): 1.29.7
DV specification: N/A
Cloud provider or hardware configuration: N/A
OS (e.g. from /etc/os-release): N/A
Kernel (e.g. uname -a): N/A
Install tools: N/A
Others: N/A