CNV-52722: Pass through extra VDDK configuration options to importer pod. #3572
Conversation
The VDDK library itself accepts infrequently-used arguments in a configuration file, and some of these arguments have been tested to show a significant transfer speedup in some environments. This adds an annotation that references a ConfigMap holding the contents of this VDDK configuration file, and mounts it to the importer pod. The first file in the mounted directory is passed to the VDDK. Signed-off-by: Matthew Arnold <[email protected]>
Signed-off-by: Matthew Arnold <[email protected]>
Signed-off-by: Matthew Arnold <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@mnecas Any concerns about the need to copy the ConfigMap to the importer namespace? I wasn't sure if that would make things awkward to use from the forklift side.
Generally LGTM. I added a note and a nitpick (NP), but nothing blocking on my side.
withHidden, err := os.ReadDir(common.VddkArgsDir)
if err != nil {
    if os.IsNotExist(err) {
        return "", nil
NP: Please add a comment noting that this case means the user did not specify the additional VDDK config.
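A rough sketch of how that early return might read with the requested comment, reusing the `common.VddkArgsDir` constant and error handling from this PR:

```go
withHidden, err := os.ReadDir(common.VddkArgsDir)
if err != nil {
	if os.IsNotExist(err) {
		// The mount directory does not exist, meaning the user did not set the
		// optional VDDK extra-configuration annotation; this is not an error,
		// so return an empty config path.
		return "", nil
	}
	return "", err
}
```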
@@ -228,6 +236,30 @@ func getVddkPluginPath() NbdkitPlugin {
    return NbdkitVddkPlugin
}

// Extra VDDK configuration options are stored in a ConfigMap mounted to the
// importer pod. Just look for the first file in the mounted directory, and
Just look for the first file
Just a note: we need to be sure to document this so that users do not split the configuration across separate ConfigMaps.
@mnecas: Changing LGTM is restricted to collaborators. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I think it should be okay, but I also think we should document/verify.
Signed-off-by: Matthew Arnold <[email protected]>
/retest
Looks good!
@@ -1254,6 +1276,30 @@ var _ = Describe("[vendor:[email protected]][level:component]DataVolume tests",
    Message: "Import Complete",
    Reason:  "Completed",
}}),
Entry("[test_id:5083]succeed importing VDDK data volume with extra arguments ConfigMap set", dataVolumeTestArguments{
Do you want to assert something in the final step of this test? Like checking that the config was applied?
For this test I wanted to make sure that the contents of the ConfigMap are present in the file, so the assertion happens in the vddk-test-plugin (the fgets/strcmp). Is there a better way to check the result from the importer? For example, can this test read the pod logs or the termination message?
oh okay I see
if (strcmp(extras, "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16") != 0) { // Must match datavolume_test
You can either read pod logs or make this a part of the termination message struct
containerized-data-importer/pkg/importer/vddk-datasource_amd64.go
Lines 1011 to 1019 in 625a9e9
// GetTerminationMessage returns data to be serialized and used as the termination message of the importer.
func (vs *VDDKDataSource) GetTerminationMessage() *common.TerminationMessage {
    return &common.TerminationMessage{
        VddkInfo: &common.VddkInfo{
            Version: vddkVersion,
            Host:    vddkHost,
        },
    }
}
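If the termination-message route were taken, one possible shape is sketched below. The `ExtraArgs` field is purely hypothetical and not part of the current `common.TerminationMessage`; it is shown only to illustrate where such data could surface for the test to assert on.

```go
// Hypothetical sketch only: surface the applied VDDK extra configuration in the
// importer's termination message so the functional test can read it back.
func (vs *VDDKDataSource) GetTerminationMessage() *common.TerminationMessage {
	return &common.TerminationMessage{
		VddkInfo: &common.VddkInfo{
			Version: vddkVersion,
			Host:    vddkHost,
		},
		// ExtraArgs: vddkExtraArgs, // assumed field and variable, not in the current struct
	}
}
```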
@@ -32,6 +33,26 @@ int fakevddk_config(const char *key, const char *value) {
    if (strcmp(key, "snapshot") == 0) {
        expected_arg_count = 9; // Expect one for 'snapshot' and one for 'transports'
    }
    if (strcmp(key, "config") == 0) {
Are there plans to backport this? Just making sure I have a maintenance path planned.
Yes, I asked around and I guess the customer looking for this is currently on 4.15. So we will definitely be asking for another barrage of backports.
Thanks for the heads up. Is there any way to avoid the test plugin change? I guess not. I am just asking since it could spare making new test images for all releases.
I didn't realize this was an issue; the test is definitely not set up to avoid generating new images. I'm not sure how else to do it, though, short of hiding the plugin in the importer image or something.
That's fine
return "", err | ||
} | ||
files := []fs.DirEntry{} | ||
for _, file := range withHidden { // Ignore hidden files |
Hmm, how come mounting a ConfigMap generates hidden files?
The volume mount sets up the directory like this:
drwxr-sr-x. 2 root 107 4096 Dec 18 15:55 ..2024_12_18_15_55_18.1193095340
lrwxrwxrwx. 1 root 107 32 Dec 18 15:55 ..data -> ..2024_12_18_15_55_18.1193095340
lrwxrwxrwx. 1 root 107 18 Dec 18 15:55 vddk-key -> ..data/vddk-key
I don't really know why; I didn't look into the implementation details. The documented way to use it is to open the file named after the key from the ConfigMap, so I was just skipping the entries with leading dots.
I guess it would be nicer to pass down the name of the key somehow, but what I was really trying to do was get the ConfigMap contents passed down in a way that CDI doesn't have to parse any of it.
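(For reference, the kubelet's atomic writer stages ConfigMap data in a dot-prefixed, timestamped directory and points the `..data` symlink at it, which is why those hidden entries appear.) A minimal sketch of the filtering described above, assuming the `withHidden` listing and `common.VddkArgsDir` from this PR plus the standard `io/fs`, `strings`, and `path/filepath` packages:

```go
// Skip the kubelet's dot-prefixed bookkeeping entries ("..data",
// "..<timestamp>") and hand back the first real file from the mounted
// ConfigMap directory, to be passed to nbdkit's config= option.
files := []fs.DirEntry{}
for _, file := range withHidden {
	if !strings.HasPrefix(file.Name(), ".") {
		files = append(files, file)
	}
}
if len(files) == 0 {
	return "", nil // ConfigMap mounted but contained no usable entries
}
return filepath.Join(common.VddkArgsDir, files[0].Name()), nil
```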
You could do something like this
containerized-data-importer/pkg/operator/resources/namespaced/controller.go
Lines 342 to 355 in 625a9e9
VolumeSource: corev1.VolumeSource{
    ConfigMap: &corev1.ConfigMapVolumeSource{
        LocalObjectReference: corev1.LocalObjectReference{
            Name: "cdi-uploadserver-signer-bundle",
        },
        Items: []corev1.KeyToPath{
            {
                Key:  "ca-bundle.crt",
                Path: "ca-bundle.crt",
            },
        },
        DefaultMode: &defaultMode,
    },
},
This would be nice, but I was trying not to force a specific key name. With KeyToPath I would need to open the ConfigMap and look for the first key. Maybe it makes sense to just require a specific key name and avoid this issue entirely?
Yeah, sounds cleaner... whatever you prefer.
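For illustration, a sketch of what pinning a well-known key could look like, following the `cdi-uploadserver-signer-bundle` pattern quoted above. The volume and key name `vddk-extra-args` are assumptions, not something this PR defines:

```go
import corev1 "k8s.io/api/core/v1"

// vddkExtraArgsVolume mounts only a fixed, well-known key from the user's
// ConfigMap, so the importer never has to scan for "the first file".
// The key/volume name "vddk-extra-args" is hypothetical.
func vddkExtraArgsVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "vddk-extra-args",
		VolumeSource: &corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Items: []corev1.KeyToPath{
					{Key: "vddk-extra-args", Path: "vddk-extra-args"},
				},
			},
		},
	}.DeepCopy().DeepCopyObject().(corev1.Volume) // placeholder line removed below
}
```

(Corrected sketch without the placeholder conversion: build the `corev1.Volume` literally with `VolumeSource: corev1.VolumeSource{...}` as in the quoted controller code; the point is only that `Items` pins a single key so directory scanning becomes unnecessary.)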
tests/datavolume_test.go
if dv.Annotations == nil {
    dv.Annotations = make(map[string]string)
}
could use controller.AddAnnotation() here
Much tidier, thanks. Done.
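As a sketch of that suggestion, assuming the shared helper has the usual `AddAnnotation(object, key, value)` shape (the exact package path and import alias in CDI may differ):

```go
// Sketch only: annotate a DataVolume with the ConfigMap that carries the extra
// VDDK configuration, using the shared helper instead of initializing the
// Annotations map by hand. "controller" and "cdiv1" are assumed import aliases.
func setVddkExtraArgsAnnotation(dv *cdiv1.DataVolume, configMapName string) {
	controller.AddAnnotation(dv, "cdi.kubevirt.io/storage.pod.vddk.extraargs", configMapName)
}
```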
@@ -349,6 +349,13 @@ spec:
[Get VDDK ConfigMap example](../manifests/example/vddk-configmap.yaml)
[Ways to find thumbprint](https://libguestfs.org/nbdkit-vddk-plugin.1.html#THUMBPRINTS)

#### Extra VDDK Configuration Options

The VDDK library itself looks in a configuration file (such as `/etc/vmware/config`) for extra options to fine-tune data transfers. To pass these options through to the VDDK, store the configuration file contents in a ConfigMap and add a `cdi.kubevirt.io/storage.pod.vddk.extraargs` annotation to the DataVolume specification. The ConfigMap will be mounted to the importer pod as a volume, and the first file in the mounted directory will be passed to the VDDK. Note that the ConfigMap must therefore live in the same namespace as the DataVolume, and it should contain only one file entry.
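To make that flow concrete, a hedged example of building the pieces with client-go types. All names here (`my-vddk-extra-args`, the `vddk-config` key) are illustrative, and the buffer setting is the value exercised in this PR's tests:

```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Illustrative only: the ConfigMap holds the raw VDDK configuration file
// contents under a single key and lives in the DataVolume's namespace.
func exampleVddkExtraArgs(namespace string) (*corev1.ConfigMap, map[string]string) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "my-vddk-extra-args", Namespace: namespace},
		Data: map[string]string{
			"vddk-config": "VixDiskLib.nfcAio.Session.BufSizeIn64KB=16\n",
		},
	}
	// Annotations to set on the DataVolume that should use this configuration.
	annotations := map[string]string{
		"cdi.kubevirt.io/storage.pod.vddk.extraargs": cm.Name,
	}
	return cm, annotations
}
```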
I guess you considered making this an API on the DataVolume, but, since you need a backport, you prefer the annotation?
Yes, it didn't seem worth changing the CRDs and all the generated stuff just for this uncommon fine-tuning configuration option. I can certainly change the API if that would be better.
@mhenriks wdyt? Since this needs backporting I am leaning toward the annotation as well, but I am not sure... usually any annotation becomes an API that we forget about.
Issue: The scale and perf team found a way to improve transfer speeds. Right now the only way to enable this feature is to set the v2v extra vars. The v2v extra vars pass the configuration to virt-v2v and virt-v2v-in-place. The v2v extra vars configuration is general and not specific to VDDK. This causes warm migration, which uses virt-v2v-in-place, to fail because it does not use any VDDK parameters. Those parameters should be passed to the CNV CDI instead. Fix: Add a way to easily enable and configure the AIO. This feature is VDDK- and provider-specific, as it requires specific vSphere and VDDK versions, so we can't enable it globally or by default. This PR adds the configuration to the Provider spec settings, creates a ConfigMap with the necessary configuration, and either mounts the ConfigMap to the guest conversion pod for cold migration or passes the ConfigMap name to the CDI DV annotation. Example:
```
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere
  namespace: forklift
spec:
  settings:
    sdkEndpoint: vcenter
    useVddkAioOptimization: 'true'
    vddkAioBufSize: 16   # optional, defaults to 16
    vddkAioBufCount: 4   # optional, defaults to 4
    vddkInitImage: 'quay.io/xiaodwan/vddk:8'
  type: vsphere
```
Ref:
- https://issues.redhat.com/browse/MTV-1804
- kubevirt/containerized-data-importer#3572
- https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html-single/installing_and_using_the_migration_toolkit_for_virtualization/index#mtv-aio-buffer_mtv
Signed-off-by: Martin Necas <[email protected]>
Signed-off-by: Matthew Arnold <[email protected]>
/retest
What this PR does / why we need it:
This pull request adds a new annotation, "cdi.kubevirt.io/storage.pod.vddk.extraargs", referencing a ConfigMap that contains extra parameters to pass directly to the VDDK library. The use case is to allow tuning of asynchronous buffer counts for MTV, as requested in CNV-52722. Testing has shown good results for cold migrations with increased asynchronous buffer settings (for example, `VixDiskLib.nfcAio.Session.BufSizeIn64KB=16`, the value exercised in the functional tests).
These parameters are stored in a file whose path is passed to the VDDK via the nbdkit "config=" option. The file contents come from the referenced ConfigMap, and the ConfigMap is mounted to the importer pod as a volume.
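A small sketch of that plumbing; `getVddkConfigPath` and `nbdkitArgs` are placeholder names for illustration, not the actual identifiers in this PR:

```go
// Once the mounted ConfigMap yields a file path, pass it to nbdkit's VDDK
// plugin via the config= option; an empty path means the annotation was not set.
configPath, err := getVddkConfigPath()
if err != nil {
	return err
}
if configPath != "" {
	nbdkitArgs = append(nbdkitArgs, "config="+configPath)
}
```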
Which issue(s) this PR fixes:
Fixes CNV-52722
Special notes for your reviewer:
As far as I can tell, a ConfigMap can only be mounted by pods in its own namespace. So MTV will need to create or duplicate the ConfigMap in the same namespace as the DataVolume it creates.
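For the MTV/forklift side, a hedged sketch of that duplication step using client-go; the function name and client wiring are assumptions:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// copyVddkArgsConfigMap duplicates the source ConfigMap into the namespace where
// the DataVolume (and therefore the importer pod) will run. Sketch only.
func copyVddkArgsConfigMap(ctx context.Context, c kubernetes.Interface, src *corev1.ConfigMap, dstNamespace string) (*corev1.ConfigMap, error) {
	dst := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: src.Name, Namespace: dstNamespace},
		Data:       src.Data,
	}
	return c.CoreV1().ConfigMaps(dstNamespace).Create(ctx, dst, metav1.CreateOptions{})
}
```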
Release note: