Reorder warm migration flow for vSphere VMs #1184
Codecov Report
## main #1184 +/- ##
==========================================
- Coverage 15.66% 15.66% -0.01%
==========================================
Files 112 112
Lines 23052 23057 +5
==========================================
Hits 3612 3612
- Misses 19155 19160 +5
Partials 285 285
When performing a warm migration of a vSphere VM, the current version of Forklift creates the snapshot for each precopy, gets the change ID for the previous snapshot, and then deletes the previous snapshot. Per the VMware documentation (see MTV-1656), the correct flow is to record the change IDs for the old snapshot, delete it, and then create the new snapshot.

This patch implements the correct flow by adding vSphere-specific steps to the warm migration itinerary to ensure that old snapshots are cleaned up and change IDs are recorded before creating new snapshots. Due to the change in flow, it's now necessary to store the change IDs between itinerary phases, so a list of disk/changeid pairs has been added to the Precopy struct. This change requires the CRs to be regenerated.

Correcting the warm migration flow appears, at least in some cases, to address the filesystem corruption documented in https://issues.redhat.com/browse/MTV-1656.

Signed-off-by: Sam Lucidi <[email protected]>
I tested it in my env and the warm migration passed and I could boot the VM without a problem.
LGTM!
This PR also removes some dead code branches left over from a very old version of Forklift. The VSphereIncrementalBackup feature flag is always assumed to be on, as this is necessary for the proper warm migration flow, so the alternate code path no longer needs to exist. See https://issues.redhat.com/browse/MTV-1665
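Removing the flag amounts to collapsing a two-way branch into a single path. A minimal sketch, with hypothetical names (incrementalCopyPlan and the commented-out fullCopyPlan stand in for Forklift's real code paths):

```go
package main

import "fmt"

// Old shape (sketch), with the branch the PR deletes:
//
//	if Settings.VSphereIncrementalBackup {
//	    plan = incrementalCopyPlan(disks)
//	} else {
//	    plan = fullCopyPlan(disks) // dead: the flag was always true
//	}
//
// New shape: the incremental path is taken unconditionally.

// incrementalCopyPlan is a stand-in for the surviving code path, which
// always uses CBT change IDs rather than full re-copies of each disk.
func incrementalCopyPlan(disks []string) []string {
	plan := make([]string, 0, len(disks))
	for _, d := range disks {
		plan = append(plan, "incremental:"+d)
	}
	return plan
}

func main() {
	// No flag check: there is only one path now.
	fmt.Println(incrementalCopyPlan([]string{"disk-2000", "disk-2001"}))
}
```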