MTV-1537 | Optimise the plan scheduler #1088
Conversation
Codecov Report
Attention: Patch coverage is
@@ Coverage Diff @@
## main #1088 +/- ##
==========================================
+ Coverage 16.26% 16.30% +0.03%
==========================================
Files 112 112
Lines 19794 19802 +8
==========================================
+ Hits 3220 3229 +9
+ Misses 16289 16286 -3
- Partials 285 287 +2
Issues:
[1] Allow migration of "unknown" guests. Right now we cannot migrate a guest whose operating system is unknown to, or unsupported by, virt-v2v [3].
[2] Unify the process and potentially speed it up. Right now we use two different methods for the disk transfer. This means extra engineering to maintain two paths, and it is harder to debug two different flows. virt-v2v transfers the disks in sequence, whereas with CDI we can start multiple disk imports in parallel, which can improve migration speeds.
Fix:
MTV already uses the CNV CDI for warm and remote migrations. We just need to adjust the code to remove the virt-v2v disk transfer and rely on the CNV CDI instead (see the sketch below).
Drawbacks:
- CNV CDI *requires* the VDDK, which until now was only highly recommended.
- CNV CDI is not maintained inside MTV, and escalating and backporting patches may be harder because CNV has a different release cycle.
- Because all disks will be migrated in parallel, we need to optimise our migration scheduler so that we do not take too much of the hosts' and network's resources. I have already done some optimisations in [4,5,6].
Notes: This change removes the usage of virt-v2v; we will only use virt-v2v-in-place.
Ref:
[1] https://issues.redhat.com/browse/MTV-1536
[2] https://issues.redhat.com/browse/MTV-1581
[3] https://access.redhat.com/articles/1351473
[4] kubev2v#1088
[5] kubev2v#1087
[6] kubev2v#1086
Signed-off-by: Martin Necas <[email protected]>
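To illustrate what "rely on the CNV CDI" means in practice, here is a minimal Go sketch of a CDI DataVolume whose importer pulls a single VM disk directly from vCenter/ESXi via the VDDK source, so CDI (rather than virt-v2v) performs the transfer and one such import can run per disk in parallel. This is not the actual Forklift code; the name, namespace, URL, secret, and thumbprint values are placeholders.

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	cdi "kubevirt.io/containerized-data-importer-api/pkg/apis/core/v1beta1"
)

// vddkDataVolume builds a DataVolume that imports one VM disk through the
// CDI VDDK source; CDI can run several of these imports in parallel.
func vddkDataVolume(vmID, backingFile string) *cdi.DataVolume {
	return &cdi.DataVolume{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "example-vm-disk-0", // placeholder name
			Namespace: "example-namespace", // placeholder namespace
		},
		Spec: cdi.DataVolumeSpec{
			Source: &cdi.DataVolumeSource{
				VDDK: &cdi.DataVolumeSourceVDDK{
					URL:         "https://vcenter.example.com/sdk",  // placeholder vCenter endpoint
					UUID:        vmID,                               // source VM identifier
					BackingFile: backingFile,                        // e.g. "[datastore1] vm/vm.vmdk"
					SecretRef:   "vcenter-credentials",              // placeholder credentials secret
					Thumbprint:  "<vcenter-certificate-thumbprint>", // placeholder thumbprint
				},
			},
			Storage: &cdi.StorageSpec{}, // storage class and size omitted for brevity
		},
	}
}
```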
Issue:
When we start a warm migration of a VM that has a lot of disks, we wait for the whole VM to be migrated; we do not ignore the disks that have already been migrated. As a result, if we have 2 VMs with 10 disks each and each has one larger disk, the whole scheduler is halted until those large disks finish, so even after the remaining 9 disks are done, no new migration is started.
Fix:
Subtract the finished disks from the disk count.
Fixes: https://issues.redhat.com/browse/MTV-1537
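A minimal sketch of the scheduling idea described above, not the actual Forklift scheduler code: when counting how many disk transfers a VM still occupies, subtract the disks that have already finished, so a single long-running large disk no longer blocks the whole queue. The `vmStatus` type and `maxInFlight` limit are simplified stand-ins for the plan's per-VM status and the scheduler's capacity setting.

```go
package sketch

// vmStatus is a simplified stand-in for the plan's per-VM migration status.
type vmStatus struct {
	Disks     int // total number of disks on the VM
	Completed int // disks whose transfer has already finished
}

// pendingDisks returns how many transfer slots the VM still occupies,
// ignoring disks that were already migrated.
func pendingDisks(vm vmStatus) int {
	if remaining := vm.Disks - vm.Completed; remaining > 0 {
		return remaining
	}
	return 0
}

// hasCapacity reports whether the next VM fits under the per-host limit
// once the finished disks are subtracted from the in-flight count.
func hasCapacity(inFlight []vmStatus, next vmStatus, maxInFlight int) bool {
	used := 0
	for _, vm := range inFlight {
		used += pendingDisks(vm)
	}
	return used+pendingDisks(next) <= maxInFlight
}
```

With this accounting, the two VMs from the example above stop counting their nine finished disks against the limit, so the scheduler can start the next migration while only the remaining large disks are still transferring.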