
fix: Fix to allow users to configure disk size for finch space #715

Closed

haytok wants to merge 3 commits

Conversation

@haytok (Member) commented Dec 4, 2023

In the current implementation, the disk space allocated for finch is hard-coded to a fixed value of 50 GB. Therefore, when users pull large docker images and use more than the allocated 50 GB of finch space, a "no space left on device" error may occur.

As mentioned above, the disk size available for finch is fixed at 50 GB and cannot be set by users. To work around this, they need to rewrite the hard-coded parts and use their own build of finch, which is very time-consuming.

Alternatively, they can use the method described in #275 (comment) to change the disk size.

Thus, although there are some workarounds to adjust the disk size for finch, users would be able to use finch more flexibly if they could specify the disk size that finch uses at the time of vm init.

Therefore, this fix improves usability by allowing users to run "finch vm init" and flexibly set the size of the disk created for finch based on the finchDiskSize setting in ~/.finch/finch.yaml.

Note that, at present, finchDiskSize determines the disk size of the finch area only when the VM is initialized ("finch vm init").
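
For illustration only, here is a minimal, self-contained Go sketch of reading such a finchDiskSize setting from a finch.yaml-style file. This is not the code in this PR; the file path, struct, and fallback value are assumptions based on the description above.

// Minimal sketch, not the finch codebase's actual config code: read an
// optional finchDiskSize value from a finch.yaml-style file and fall back to
// the previously hard-coded 50GiB default when it is unset.
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

type config struct {
	FinchDiskSize string `yaml:"finchDiskSize"` // e.g. "120GiB"
}

func main() {
	b, err := os.ReadFile(os.ExpandEnv("$HOME/.finch/finch.yaml"))
	if err != nil {
		log.Fatal(err)
	}
	var cfg config
	if err := yaml.Unmarshal(b, &cfg); err != nil {
		log.Fatal(err)
	}
	size := cfg.FinchDiskSize
	if size == "" {
		size = "50GiB" // assumed default, matching the hard-coded value described above
	}
	fmt.Println("disk size used for the finch space at vm init:", size)
}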

Issue #, if available: No

Description of changes: Details are described in the commit message.

Testing done: Yes

  • I've reviewed the guidance in CONTRIBUTING.md

License Acceptance

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@vsiravar (Contributor) left a comment


Thanks for the PR. Could you help add an e2e test to the vm e2e test suite to validate that the disk size from finch.yaml is applied correctly?

@weikequ weikequ requested a review from vsiravar December 6, 2023 19:32
@haytok (Member, Author) commented Dec 7, 2023

Hi @vsiravar,

Thanks for the comments! I have added an e2e test for this fix, so could you please review it when you have time?


ginkgo.It("finch vm init by the disk size for finch area configured in finchDiskSize of ~/.finch.yaml", func() {
	resetVM(o, installed)
	resetDisks(o, installed)
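
(For readers outside the review thread, here is a rough, standalone sketch of what such a check could look like; it is not the actual test added in this PR. It assumes a finch binary on PATH and omits the suite's own helpers such as resetVM/resetDisks, which are only mentioned in comments.)

// Standalone sketch only, not the PR's e2e test: run "finch vm init" and
// assert that it reports success. The real suite would also verify that the
// created disk matches finchDiskSize from ~/.finch/finch.yaml.
package vm

import (
	"os/exec"
	"testing"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

func TestDiskSizeSketch(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "finch disk size sketch")
}

var _ = ginkgo.Describe("finch vm init with finchDiskSize", func() {
	ginkgo.It("initializes the VM with the configured disk size", func() {
		// In the real suite, resetVM(o, installed) and resetDisks(o, installed)
		// run first so that no persistent disk exists before "vm init".
		out, err := exec.Command("finch", "vm", "init").CombinedOutput()
		gomega.Expect(err).NotTo(gomega.HaveOccurred())
		gomega.Expect(string(out)).To(gomega.ContainSubstring("Finch virtual machine started successfully"))
		// A fuller test would then inspect the created disk (e.g. via limactl
		// or df inside the VM) and compare it against finchDiskSize.
	})
})
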
Contributor

will users lose their persistent data if they have to resize their disk?

Contributor

What is the resolution for this? Can this be run without resetDisks, with a test that checks that the containers present prior to the resize are still available after the resize? As mentioned in the ticket: "After I update the value, stop the VM, and then start the VM, the value should take effect."

@vsiravar (Contributor) left a comment

Thanks! It looks like the golangci-lint check failed. To run it locally, see CONTRIBUTING.md.

@haytok (Member, Author) commented Dec 14, 2023

Hi @vsiravar,

Thanks for the comments! The following parts have been corrected in accordance with the review:

  • Fixed the text of the ginkgo.It blocks in e2e/vm/additional_disk_test.go
  • Applied golangci-lint fixes
  • Added validation of finchDiskSize from finch.yaml (a rough sketch of this kind of check is shown below)

Could you review this fix when you have time?
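
A rough illustration of this kind of validation (not the exact check added in this PR; the accepted unit suffixes and the function name are assumptions):

// Illustrative sketch of a finchDiskSize validator; the actual validation in
// this PR may differ. It accepts values such as "50GB" or "120GiB" and rejects
// anything else.
package main

import (
	"fmt"
	"regexp"
)

var diskSizeRE = regexp.MustCompile(`^[1-9][0-9]*(KiB|MiB|GiB|TiB|KB|MB|GB|TB)?$`)

// validateFinchDiskSize is a hypothetical helper name used only in this sketch.
func validateFinchDiskSize(v string) error {
	if v == "" {
		return nil // unset: the default disk size applies
	}
	if !diskSizeRE.MatchString(v) {
		return fmt.Errorf("finchDiskSize %q is not a valid disk size (expected e.g. 50GB or 120GiB)", v)
	}
	return nil
}

func main() {
	for _, v := range []string{"", "50GB", "120GiB", "a-lot"} {
		fmt.Printf("%q -> %v\n", v, validateFinchDiskSize(v))
	}
}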

@haytok haytok requested a review from vsiravar December 14, 2023 17:13
@vsiravar (Contributor)

Thanks @haytok, the changes look good. I just had one clarifying question, which I put in a comment.

@haytok (Member, Author) commented Dec 18, 2023

@vsiravar
Thanks for the comments. I have added replies, so could you check them when you have time?

@haytok (Member, Author) commented Dec 21, 2023

Hi Team,

My replies to these comments seem to be invisible, so I am adding them here.


will users lose their persistent data if they have to resize their disk?

Yes, users will lose their persistent data when they have to resize their disk using finchDiskSize.

In the current implementation, finchDiskSize will not take effect unless users delete the persistent disk (~/.finch/.disks) and execute finch vm init.

What is the resolution for this? Can this be run without resetDisks, with a test that checks that the containers present prior to the resize are still available after the resize? As mentioned in the ticket: "After I update the value, stop the VM, and then start the VM, the value should take effect."

No, the current implementation requires that there be no persistent disk in order to test whether a disk of the configured finchDiskSize is created. Therefore, resetDisks(o, installed) needs to be executed before running this test.

This is because the value of finchDiskSize takes effect only when the persistent disk does not exist and finch vm init is executed.

Therefore, at this time, after updating finchDiskSize in ~/.finch/finch.yaml and running finch vm start, the allocated size for finch does not change.

This behavior can be seen in the following steps.

Details

haytok finch
> cat ~/.finch/finch.yaml | grep finchDiskSize
finchDiskSize: 3GiB
haytok finch
> _output/bin/finch vm init
INFO[0000] Initializing and starting Finch virtual machine...
INFO[0052] Finch virtual machine started successfully
haytok finch
> LIMA_HOME=/Users/haytok/finch/_output/lima/data/ /Users/haytok/finch/_output/lima/bin/limactl shell finch df -h
Filesystem             Size  Used Avail Use% Mounted on
...
/dev/vdb1              2.9G  136K  2.8G   1% /mnt/lima-finch
...
haytok finch
> _output/bin/finch vm stop
INFO[0000] Stopping existing Finch virtual machine...
INFO[0004] Finch virtual machine stopped successfully
# updated finchDiskSize in finch.yaml
haytok finch
> cat ~/.finch/finch.yaml | grep finchDiskSize
finchDiskSize: 4GiB
# The persistent data has not been deleted.
haytok finch
> _output/bin/finch vm start
INFO[0000] Starting existing Finch virtual machine...
INFO[0041] Finch virtual machine started successfully
haytok finch
> LIMA_HOME=/Users/haytok/finch/_output/lima/data/ /Users/haytok/finch/_output/lima/bin/limactl shell finch df -h
Filesystem             Size  Used Avail Use% Mounted on
...
/dev/vdb1              2.9G  136K  2.8G   1% /mnt/lima-finch
...

The size of the space allocated for finch does not change because of how the disk is handled in the current implementation: as shown below, when a disk already exists in ~/.finch/.disks and finchDiskSize has been changed, the finch disk is not resized because the existing disk is not recreated.

haytok finch
> LIMA_HOME=/Users/haytok/finch/_output/lima/data/ /Users/haytok/finch/_output/lima/bin/limactl disk ls
NAME     SIZE    DIR                                                   IN-USE-BY
finch    3GiB    /Users/haytok/finch/_output/lima/data/_disks/finch    finch
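
To make the behavior above concrete, here is a small, self-contained Go sketch (illustrative only, not finch's or Lima's real code; ensureDisk and the paths are hypothetical) of a disk that is created once at init time and then reused as-is, so a later change to finchDiskSize has no effect:

// Illustrative sketch only: the persistent disk is created with finchDiskSize
// once, when it does not exist yet (as at "finch vm init" after resetDisks),
// and an existing disk is reused untouched, so a changed finchDiskSize is ignored.
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
)

// ensureDisk is a hypothetical helper: it "creates" the disk only if missing.
func ensureDisk(disksDir, name, size string) error {
	path := filepath.Join(disksDir, name)
	if _, err := os.Stat(path); err == nil {
		// Disk already exists: it is reused as-is and never resized here.
		fmt.Printf("reusing existing disk %s (configured size %s is ignored)\n", path, size)
		return nil
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	// No disk yet: create it at the configured size (the real code delegates
	// disk creation to limactl).
	fmt.Printf("creating disk %s with size %s\n", path, size)
	return os.WriteFile(path, []byte(size), 0o600)
}

func main() {
	dir, _ := os.MkdirTemp("", "finch-disks")
	_ = ensureDisk(dir, "finch", "3GiB") // first "vm init": disk created at 3GiB
	_ = ensureDisk(dir, "finch", "4GiB") // after updating finchDiskSize: no resize happens
}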

In the current implementation, after updating finchDiskSize, we need to delete ~/.finch/.disks (in addition to running finch vm stop and finch vm start) for the disk size for finch to change.

Alternatively, when the disk already exists in ~/.finch/.disks, we would need a feature on the Lima side to resize the disk, as described in a related Lima issue, and use that feature when running finch vm init and finch vm start. However, that feature has not been implemented at this time.

In summary, with the implementation at this time, the size allocated for the finch space does not change when finchDiskSize is updated and finch vm stop and finch vm start are executed. Therefore, in order to pass this test, resetDisks(o, installed) has to be run before finch vm init.

I think it is useful to be able to set the size allocated for finch when running finch vm init, but this PR does not fully resolve issue #275, so I'll close this PR if the maintainers determine that it is no longer needed. I'll add the feature to resize the finch disk on the Lima side and recreate the PR to resolve issue #275.

@Shubhranshu153 (Contributor)

@haytok Closing this PR as per the above communication. Feel free to re-open it if it needs to be reviewed again.
