MTV-1537 | Optimize the plan scheduler (kubev2v#1088)
Issue: When we start a warm migration of a VM that has many disks, we
wait for the whole VM to be migrated and do not subtract the disks that
have already been migrated. As a result, if we have 2 VMs with 10 disks
each, and each VM has one larger disk, the whole scheduler is halted
until those large disks finish. Even once the remaining 9 disks per VM
are done, no new migration is started.

Fix: Subtract the finished disks from the disk count.

Fixes: https://issues.redhat.com/browse/MTV-1537

Signed-off-by: Martin Necas <[email protected]>
mnecas committed Oct 8, 2024
1 parent 7abe181 commit 21050cf
Showing 1 changed file with 17 additions and 1 deletion.
18 changes: 17 additions & 1 deletion pkg/controller/plan/scheduler/vsphere/scheduler.go
@@ -216,11 +216,27 @@ func (r *Scheduler) cost(vm *model.VM, vmStatus *plan.VMStatus) int {
 		return 0
 	default:
 		// CDI transfers the disks in parallel by different pods
-		return len(vm.Disks)
+		return len(vm.Disks) - r.finishedDisks(vmStatus)
 	}
 }
 
+// finishedDisks returns a number of the disks that have completed the disk transfer
+// This can reduce the migration time as VMs with one large disks and many small disks won't halt the scheduler
+func (r *Scheduler) finishedDisks(vmStatus *plan.VMStatus) int {
+	var resp = 0
+	for _, step := range vmStatus.Pipeline {
+		if step.Name == "DiskTransfer" {
+			for _, task := range step.Tasks {
+				if task.Phase == "Completed" {
+					resp += 1
+				}
+			}
+		}
+	}
+	return resp
+}
 
 // Return a map of all the VMs that could be scheduled
 // based on the available host capacities.
 func (r *Scheduler) schedulable() (schedulable map[string][]*pendingVM) {
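The effect of the change can be illustrated with a small, self-contained sketch. The types below are simplified stand-ins for the real `plan.VMStatus` pipeline structures, not the actual MTV types, and the `cost` helper only models the disk-counting branch shown in the diff:

```go
package main

import "fmt"

// Simplified stand-ins for the real plan.VMStatus pipeline types.
type Task struct{ Phase string }
type Step struct {
	Name  string
	Tasks []Task
}
type VMStatus struct{ Pipeline []Step }

// finishedDisks mirrors the logic added in this commit: count the
// DiskTransfer tasks that have already reached the Completed phase.
func finishedDisks(s *VMStatus) int {
	n := 0
	for _, step := range s.Pipeline {
		if step.Name != "DiskTransfer" {
			continue
		}
		for _, task := range step.Tasks {
			if task.Phase == "Completed" {
				n++
			}
		}
	}
	return n
}

// cost models the scheduler cost after the fix: the disks still
// transferring, instead of the VM's total disk count.
func cost(totalDisks int, s *VMStatus) int {
	return totalDisks - finishedDisks(s)
}

func main() {
	// A VM with 10 disks where 9 small disks already finished and
	// one large disk is still copying.
	tasks := make([]Task, 10)
	for i := range tasks {
		tasks[i].Phase = "Completed"
	}
	tasks[0].Phase = "Running"
	s := &VMStatus{Pipeline: []Step{{Name: "DiskTransfer", Tasks: tasks}}}

	fmt.Println(cost(10, s)) // prints 1: only the large disk still counts against capacity
}
```

Before the fix this VM would keep a cost of 10 for its entire migration; after the fix the cost drops to 1 once the small disks finish, so the scheduler can start other VMs instead of being halted by the single large disk.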
