Backport for 2.7.4 #1186
Merged
Conversation
The importer pod can get stuck in the creating state if there are multiple networks and the wrong default network override annotation is configured. This changes the annotation as requested in [MTV-1645](https://issues.redhat.com/browse/MTV-1645). I reproduced the bug without the change, and managed a successful transfer with the change applied.

Signed-off-by: Matthew Arnold <[email protected]>
When performing a warm migration of a vSphere VM, the current version of Forklift creates the snapshot for each precopy, gets the change ID for the previous snapshot, and then deletes the previous snapshot. Per the VMware documentation (see MTV-1656), the correct flow is to record the change IDs for the old snapshot, delete it, and then create the new snapshot.

This patch implements the correct flow by adding vSphere-specific steps to the warm migration itinerary, ensuring that old snapshots are cleaned up and change IDs are recorded before new snapshots are created. Due to the change in flow, it is now necessary to store the change IDs between itinerary phases, so a list of disk/change-ID pairs has been added to the Precopy struct. This change requires the CRs to be regenerated.

Correcting the warm migration flow appears, at least in some cases, to address the filesystem corruption documented in https://issues.redhat.com/browse/MTV-1656.

Signed-off-by: Sam Lucidi <[email protected]>
Issue: When monitoring the migration process, Forklift requests the PVC from the DV ClaimName and then checks the annotation "cdi.kubevirt.io/storage.import.importPodName". This annotation is missing from the main PVC but is present on the prime PVC, whose name has the format "prime-$UID_MAINPVC". The annotation points to the pod created to import the volume.

Fix: Look up the prime PVC instead of the main PVC, so Forklift is able to check the importer pod status.

Signed-off-by: Martin Necas <[email protected]>
I have by accident included
Backport of: