WIP: let the user choose the boot disk for multiboot #873
Conversation
@bennyz the linking part is still missing, I'm not sure what would be the best way to do it...
As discussed, this shouldn't be an issue - we only care about passing the device representing the selected disk to
Signed-off-by: Bella Khizgiyaev <[email protected]>
@@ -30,6 +30,8 @@ type VM struct {
	ref.Ref `json:",inline"`
	// Enable hooks.
	Hooks []HookRef `json:"hooks,omitempty"`
	// Choose the bootable disk number for praimery
primary? we should also say that this field is optional and what happens when it's not specified
Suggested change:
-	// Choose the bootable disk number for praimery
+	description: Choose the primary disk the VM boots from
@@ -30,6 +30,8 @@ type VM struct {
	ref.Ref `json:",inline"`
	// Enable hooks.
	Hooks []HookRef `json:"hooks,omitempty"`
	// Choose the bootable disk number for praimery
	RootDisk string `json:"rootDisk,omitempty"`
that's an interesting decision, placing this field at the VM level
on the one hand, we do the same for hooks, but since that goes against our desire to have the UI designed for scale migrations, the UI allows users to specify a hook per plan and then sets it on each VM in the plan
on the other hand, we considered this for encryption keys and decided to place it at the plan level
we had better be consistent here
I don't mind going with this approach of specifying the primary disk per VM, but then we should also do that for the encryption keys and implement this field at the UI level the same way as hooks
@liranr23 WDYT?
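For readers skimming the thread, a minimal sketch of the two placements being weighed; apart from RootDisk itself, the type and field names below are simplified stand-ins, not the real forklift API.

package api

// Option A (this PR): the primary-disk selection lives on each VM entry of the plan.
type VM struct {
	ID string `json:"id"`
	// Primary (boot) disk for this VM; optional, empty keeps today's behavior.
	RootDisk string `json:"rootDisk,omitempty"`
}

// Option B (the choice made for encryption keys): one plan-level field applied to every VM.
type PlanSpec struct {
	VMs []VM `json:"vms"`
	// Primary (boot) disk applied to all VMs in the plan; optional.
	RootDisk string `json:"rootDisk,omitempty"`
}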
@ahadas I do think it's better to keep it at the VM level (for all cases); it gives more flexibility when setting up plans for the case where we want more than one such VM in a plan, otherwise the user would need a separate plan for each VM.
we do have something different here.
for the boot device, we need to provide it per VM.
for LUKS, we also need the passphrase per device, but since the device ID is unique, we can pass all the IDs and their passphrases to every virt-v2v command; it will simply ignore the devices that don't exist. therefore we can ease the process and generalize it to the plan (more of a mass-migration state of mind).
i do think the boot disk makes more sense per-VM, but i don't think we need to change the LUKS approach, especially since we will have the "all devices" option from virt-v2v.
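A sketch of how plan-level LUKS passphrases could be fanned out to every conversion, assuming the --key <device>:key:<passphrase> selector syntax from the virt-v2v man page; the helper and its inputs are illustrative, not forklift code.

package main

import "fmt"

// luksKeyArgs builds one --key selector per device/passphrase pair from a
// plan-level map. virt-v2v ignores selectors for devices a given VM does not
// have, so the same argument list can be appended to every VM's conversion.
func luksKeyArgs(passphrases map[string]string) []string {
	var args []string
	for device, passphrase := range passphrases {
		args = append(args, "--key", fmt.Sprintf("%s:key:%s", device, passphrase))
	}
	return args
}

func main() {
	// Hypothetical plan-level input: device ID -> passphrase.
	fmt.Println(luksKeyArgs(map[string]string{
		"/dev/sda2": "secret-a",
		"/dev/sdb1": "secret-b",
	}))
}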
otherwise the user will need to have separate plans for each VM.
I don't think that's a common case - usually VMs are provisioned from templates, so there would be a group of VMs with a certain disk that should be considered primary, and we can group those VMs in a single plan (assuming they are all migrated to the same namespace; otherwise we need different plans anyway)
var rootDisk string
for diskNum, disk := range vmInventory.Disks {
	if disk.File == vm.RootDisk {
@liranr23 do you remember if the order of the disks here is consistent? I remember we talked about something similar when working on the OVA feature, but the question is whether we can rely on the disk order inside the VM or whether we should sort it beforehand.
i wouldn't count on it being consistent. it depends on the inventory, which depends on the way we get it from the source provider, and that depends on the API itself - there are many places where it may change.
however, we do use the VM CR in the podVolumeMounts function above; there it should be consistent, if you find that helpful.
@liranr23 cool thanks, will look into that.
i wouldn't count on it to be consistent.
if the disk order is not constant, a VM may have some disk as 'sda' on one boot and then 'sdc' on a second boot? can this work? what am I missing?
@yaacov do you mean on the VM itself? if the VM CR is already posted, nothing is going to change there.
do you mean on the vm itself?
Thanks, I talked with @bkhizgiy and the conclusion was that the UI will set a number ("1", "2", ...) and MTV will do the logic to make sure "1" maps to the first disk, "2" to the second, etc.
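A minimal sketch of that mapping, assuming the plan field carries the 1-based ordinal as a string and the inventory disks are presented in a stable order; the types and names here are illustrative stand-ins.

package main

import (
	"fmt"
	"strconv"
)

// Disk is a stand-in for the inventory disk model.
type Disk struct {
	File string
}

// selectRootDisk maps a 1-based ordinal such as "1" or "2" (as set by the UI)
// to the corresponding disk in the inventory's ordered disk list.
func selectRootDisk(ordinal string, disks []Disk) (Disk, error) {
	n, err := strconv.Atoi(ordinal)
	if err != nil || n < 1 || n > len(disks) {
		return Disk{}, fmt.Errorf("invalid root disk ordinal %q for %d disks", ordinal, len(disks))
	}
	return disks[n-1], nil
}

func main() {
	disks := []Disk{{File: "[datastore] vm/vm.vmdk"}, {File: "[datastore] vm/vm_1.vmdk"}}
	d, err := selectRootDisk("2", disks)
	fmt.Println(d, err)
}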
any reason this PR goes straight to the release-v2.6.0 branch and not main (in any case it's supposed to be 2.6.1 for a backport)?
@@ -1584,6 +1585,29 @@ func (r *KubeVirt) guestConversionPod(vm *plan.VMStatus, vmVolumes []cnv.Volume,
	if err != nil {
		return
	}
	if vm.RootDisk != "" {
		vmInventory := &model.VM{}
since we might query the inventory in other functions as well (like PodEnvironment), maybe we can query it once at the beginning and pass it to the functions instead of querying it multiple times?
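A toy sketch of that refactor - query the inventory once in the caller and hand the result to every consumer. The types and function names are stand-ins for the forklift code, not the actual APIs.

package main

import "fmt"

// Stand-ins for the inventory model and the plan VM status.
type InventoryVM struct{ Name string }
type VMStatus struct{ Ref string }

// findVM is a stand-in for the single inventory query.
func findVM(ref string) (*InventoryVM, error) {
	return &InventoryVM{Name: ref}, nil
}

// Both consumers receive the already-fetched inventory VM instead of
// querying the inventory again themselves.
func podEnvironment(vm *VMStatus, inv *InventoryVM)     { fmt.Println("env for", inv.Name) }
func guestConversionPod(vm *VMStatus, inv *InventoryVM) { fmt.Println("pod for", inv.Name) }

func main() {
	vm := &VMStatus{Ref: "my-vm"}
	inv, err := findVM(vm.Ref) // query once, up front
	if err != nil {
		panic(err)
	}
	podEnvironment(vm, inv)
	guestConversionPod(vm, inv)
}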
yes, it's not an official PR, just a draft we worked on last week; we wanted to test it on the customer version and share it with Benny, but we will open it against main once it's ready.
var rootDisk string
for diskNum, disk := range vmInventory.Disks {
	if disk.File == vm.RootDisk {
		rootDisk = fmt.Sprintf("/dev/sd%c", 'a'+(diskNum)%26)
consts?
also, i saw Germano's comment on the other PR (#872) - he is right regarding the need to support more than 26 disks.
in that case, maybe we can put the path generation in a single place and reuse it in both?
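A possible shape for that shared helper, extending the sdX naming past 26 disks the way the kernel does (sda...sdz, sdaa, sdab, ...); this is a sketch of the idea, not code from either PR.

package main

import "fmt"

// diskDeviceName returns the /dev/sdX name for a zero-based disk index,
// continuing past 26 disks the way Linux does: sda..sdz, sdaa, sdab, ...
func diskDeviceName(index int) string {
	suffix := ""
	for i := index; i >= 0; i = i/26 - 1 {
		suffix = string(rune('a'+i%26)) + suffix
	}
	return "/dev/sd" + suffix
}

func main() {
	for _, i := range []int{0, 25, 26, 27, 51, 52, 702} {
		fmt.Println(i, diskDeviceName(i)) // 26 -> /dev/sdaa, 52 -> /dev/sdba, 702 -> /dev/sdaaa
	}
}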
closing as the changes need to be backported to the release-v2.6.3 branch at this point