
WIP: let the user choose the boot disk for multiboot #873

Closed
bkhizgiy wants to merge 1 commit

Conversation

bkhizgiy (Member)

No description provided.

@bkhizgiy (Member Author)

@bennyz the linking part is still missing, I'm not sure what would be the best way to do it....

bennyz (Member) commented Apr 24, 2024

> @bennyz the linking part is still missing, I'm not sure what would be the best way to do it....

As discussed shouldn't be an issue, we only care about passing the device representing the selected disk to --root
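For context, a minimal sketch of what passing the selected device to virt-v2v could look like in Go; the argument assembly and names below are assumptions for illustration, and only the --root flag itself is virt-v2v's documented option:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// rootDisk would hold the device representing the user's selected
	// boot disk; "/dev/sda" here is a placeholder.
	rootDisk := "/dev/sda"

	args := []string{"-i", "libvirt", "my-vm"} // illustrative input args
	if rootDisk != "" {
		// virt-v2v's --root option accepts a device name such as /dev/sda.
		args = append(args, "--root", rootDisk)
	}

	cmd := exec.Command("virt-v2v", args...)
	fmt.Println(cmd.Args) // wiring up stdout/stderr and Run() omitted
}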

sonarcloud bot commented Apr 25, 2024

Quality Gate passed

Issues
1 New issue
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

@@ -30,6 +30,8 @@ type VM struct {
ref.Ref `json:",inline"`
// Enable hooks.
Hooks []HookRef `json:"hooks,omitempty"`
// Choose the bootable disk number for praimery
Member

primary? we should also say that this field is optional and what happens when it's not specified

Suggested change
// Choose the bootable disk number for praimery
description: Choose the primary disk the VM boots from

@@ -30,6 +30,8 @@ type VM struct {
ref.Ref `json:",inline"`
// Enable hooks.
Hooks []HookRef `json:"hooks,omitempty"`
// Choose the bootable disk number for praimery
RootDisk string `json:"rootDisk,omitempty"`
Member

that's an interesting decision to place this field at the vm level
on the one hand, we do the same for hooks, but as it goes against our desire to have the UI designed for scale migrations, the UI allows users to specify a hook per plan and then sets it per each VM in the plan
on the other hand, we considered this for encryption keys and decided to place it at the plan level

we'd better be consistent here
I don't mind going with this approach of specifying the primary disk per-VM, but then we should also do that for the encryption keys and implement this field at the UI level as we do for hooks
@liranr23 WDYT?

Member Author

@ahadas I do think it's better to keep it at the VM level (for all cases); it gives more flexibility when setting up plans for cases where we want more than one such VM in a plan, otherwise the user would need a separate plan for each VM.

Member

we do have something different.
here, we need to provide the boot device per VM.
in LUKS, we also need the passphrase per device, but since the device ID is unique, we can pass all the IDs and their passphrases to every virt-v2v command; it will just ignore the non-existing devices. therefore, we can ease the process and generalize it to the plan level (more of a mass-migration state of mind).

i do think the boot disk makes more sense per-VM, but i don't think we need to change the LUKS approach, especially since we will have the all-devices option from virt-v2v.
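A minimal sketch of the plan-level LUKS idea described above, assuming a map of device IDs to passphrases; the ID:key:PASSPHRASE form is virt-v2v's documented --key syntax, and everything else here is illustrative:

package main

import "fmt"

func main() {
	// Plan-level key map (device ID -> passphrase); hypothetical values.
	luksKeys := map[string]string{
		"/dev/sda2": "passphrase-a",
		"/dev/sdb1": "passphrase-b",
	}

	// Append every key to the argument list of each virt-v2v command in
	// the plan; per the comment above, virt-v2v ignores selectors for
	// devices a given VM doesn't have.
	var args []string
	for id, phrase := range luksKeys {
		args = append(args, "--key", fmt.Sprintf("%s:key:%s", id, phrase))
	}
	fmt.Println(args)
}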

Member

> otherwise the user will need to have separate plans for each VM.

I don't think it's a common case - usually VMs are provisioned from templates so there would be a group of VMs with a certain disk that should be considered as primary, and we can group those VMs in a single plan (assuming they are all migrated to the same namespace, otherwise we anyway need different plans)

Member

@liranr23 ok, so I commented on the encryption keys part in #676 - we can continue that there
so if we're in agreement that it's better to place the primary disk configurations like hooks, let's go with that then


var rootDisk string
for diskNum, disk := range vmInventory.Disks {
	if disk.File == vm.RootDisk {
Member Author

@liranr23 do you remember if the order of the disks here is consistent? I remember we talked about something similar when working on the OVA feature, but the question is whether we can rely on the disk order inside the VM or whether we should sort it beforehand.

Member

i wouldn't count on it to be consistent. it depends on the inventory, which depends on the way we get it from the source provider, and that depends on the API itself. there are many places where it may change.
however, we do use the VM CR in the podVolumeMounts function above; there it should be consistent, if you find it helpful.

Member Author

@liranr23 cool thanks, will look into that.

Member

> i wouldn't count on it to be consistent.

if the disk order is not constant, a vm may have some disk as 'sda' on one boot, and then it will be 'sdc' on a second boot? can this work? what am i missing?

Member

@yaacov do you mean on the vm itself? if the vm itself is already posted, nothing is gonna change there.

Member

> do you mean on the vm itself?

Thanks, I talked with @bkhizgiy and the conclusion was that the UI will set a number "1", "2" and mtv will do the logic to make sure "1" will map to the first disk, "2" to the second, etc.
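A rough sketch of that ordinal mapping, assuming the UI stores the number in the rootDisk field and the controller resolves it against the disk order in the VM CR (all names here are illustrative, not the actual MTV code):

package main

import (
	"fmt"
	"strconv"
)

// resolveRootDisk turns the UI-provided ordinal ("1", "2", ...) into a
// 0-based index into the CR's disk list and the matching device path.
func resolveRootDisk(ordinal string, diskCount int) (string, error) {
	n, err := strconv.Atoi(ordinal)
	if err != nil || n < 1 || n > diskCount {
		return "", fmt.Errorf("invalid root disk ordinal %q", ordinal)
	}
	// Device naming as in the diff below; see the >26-disk caveat there.
	return fmt.Sprintf("/dev/sd%c", 'a'+(n-1)%26), nil
}

func main() {
	dev, _ := resolveRootDisk("2", 3)
	fmt.Println(dev) // /dev/sdb
}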

@liranr23 (Member) left a comment

any reason this PR goes straight to the release-v2.6.0 branch and not main (in any case it's supposed to be 2.6.1 for a backport)?

@@ -1584,6 +1585,29 @@ func (r *KubeVirt) guestConversionPod(vm *plan.VMStatus, vmVolumes []cnv.Volume,
	if err != nil {
		return
	}
	if vm.RootDisk != "" {
		vmInventory := &model.VM{}
Member

since we might query the inventory also in other functions (like the PodEnvironment), maybe we can query it once at the beginning and send it to the functions instead of querying it multiple times?

Member Author

yes, it's not an official PR, just a draft we worked on last week; wanted to test it on the customer version and share it with Benny, but will open this against main once it's ready.

var rootDisk string
for diskNum, disk := range vmInventory.Disks {
	if disk.File == vm.RootDisk {
		rootDisk = fmt.Sprintf("/dev/sd%c", 'a'+(diskNum)%26)
Member

consts?
also, i saw germano's comment on the other PR (#872) - he is right regarding the need to support more than 26 disks.
in that case maybe we can put the path generation into a single place and re-use it in both?
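One possible shape for such a shared helper: bijective base-26 naming that keeps going past 26 disks the way the kernel names SCSI devices (sda..sdz, then sdaa, sdab, ...). A sketch only, not the actual change:

package main

import "fmt"

// diskPath maps a 0-based disk index to a /dev/sdX-style path,
// continuing past 26 disks (sda..sdz, sdaa, sdab, ...).
func diskPath(index int) string {
	name := ""
	for n := index; ; n = n/26 - 1 {
		name = string(rune('a'+n%26)) + name
		if n < 26 {
			break
		}
	}
	return "/dev/sd" + name
}

func main() {
	fmt.Println(diskPath(0), diskPath(25), diskPath(26), diskPath(27)) // /dev/sda /dev/sdz /dev/sdaa /dev/sdab
}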

@ahadas changed the title from "WIP: let the user chose the boot disk for multiboot" to "WIP: let the user choose the boot disk for multiboot" on Jun 2, 2024
ahadas (Member) commented Jun 19, 2024

closing as the changes need to be backported to the release-v2.6.3 branch at this point

@ahadas closed this Jun 19, 2024