
btrfs balance single targeting a single devid didn't move all the data to that device #907

Open
gdevenyi opened this issue Oct 10, 2024 · 3 comments

@gdevenyi

I executed:
btrfs balance start --force -sconvert=single,devid=2 -dconvert=single,devid=2 -mconvert=single,devid=2 /storage

With the intention of removing devices 1, 3, 4 and 5 from my btrfs filesystem.

The command ran overnight, and I found afterwards that devices 4 and 5 had been vacated of data, but devices 1 and 2 held equal amounts, although everything was now stored as "single".
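For reference (not part of the original report), the per-device allocation can be inspected with standard btrfs-progs commands:

    # Show allocated vs. unallocated space for each device in the filesystem.
    btrfs device usage /storage
    # The same information as a per-device table, including profiles.
    btrfs filesystem usage -T /storage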

@Zygo

Zygo commented Oct 11, 2024

The devid filter selects block groups based on which devices they currently occupy. So your command is asking for everything currently allocated on device 2 to be relocated onto whichever devices are preferred by the target raid profile. For single profile, this is the device with the most free space, with data distributed across multiple devices when multiple devices have equal free space. Given your statement of the final result, the largest devices were likely devices 1 and 2, since that was where the data ended up.
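To illustrate the filter semantics described above (an assumed example, not a command from this thread): a devid filter on its own only selects which block groups are rewritten, not where they land.

    # Relocates exactly the data block groups that currently have a stripe on
    # devid 2; the rewritten copies go wherever the allocator prefers, which
    # for the single profile is the device with the most unallocated space.
    btrfs balance start -ddevid=2 /storage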

There currently isn't a good way to move data from all devices to one device while also changing profile, without moving the data multiple times. You can alternate between two steps: resize devices 1, 3, 4, and 5 smaller in 1 GiB increments so that they have less unallocated space than device 2, then run part of the conversion to single while device 2 has the most unallocated space, stopping the balance once the unallocated space is roughly equal again, and repeat until all data has been removed from devices 1, 3, 4, and 5. This reduces the number of data movements, but it requires a shell for loop or a small python-btrfs script to control the raw kernel ioctls and handle the switches between resizing and balancing.
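A rough sketch of that loop using only the btrfs-progs command line (assumptions: the filesystem is mounted at /storage, devices 1, 3, 4 and 5 are the ones being vacated, and the 1 GiB shrink step, the limit=8 batch size and the pass count are arbitrary; a real script would check unallocated space between steps instead of running a fixed number of passes):

    # Hedged sketch only; data chunks only, metadata/system conversion omitted.
    for pass in $(seq 1 50); do
        # Shrink each device to be vacated by 1 GiB so that device 2 keeps the
        # most unallocated space; ignore devices that cannot shrink further.
        for dev in 1 3 4 5; do
            btrfs filesystem resize "${dev}:-1G" /storage || true
        done
        # Convert a limited batch of data block groups while device 2 is the
        # allocator's preferred target; "soft" skips already-converted chunks.
        btrfs balance start -dconvert=single,soft,limit=8 /storage
        # Check progress; stop once devices 1, 3, 4 and 5 show no allocation.
        btrfs device usage /storage
    done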

If this is a feature request: it's a fairly straightforward patch to disable allocation on some devices (a variant of the existing allocation preferences patch with the "allocate nothing" extension). Once that is merged, this operation can be performed in two steps:

  1. Disable allocation on devices 1, 3, 4, and 5 (set preference to "none" or whatever name the final implementation uses)
  2. Balance using the command line above with no devid filter: btrfs balance start -dconvert=single -mconvert=dup /storage

With allocation disabled on devices other than device 2, balance will have no choice but to reallocate all the data there.
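A hedged sketch of what those two steps might look like once such an interface exists. The property name and value in step 1 are purely hypothetical placeholders, since (as noted above) the final name of the "allocate nothing" setting is not decided; only the step 2 command is real today.

    # Step 1 (hypothetical interface, not in any released btrfs-progs):
    # mark every device except device 2 as "allocate nothing".
    for dev in /dev/sda /dev/sdc /dev/sdd /dev/sde; do   # assumed device paths
        btrfs property set "$dev" allocation_hint none   # hypothetical command
    done

    # Step 2 (real command, as given above): balance with no devid filter.
    btrfs balance start -dconvert=single -mconvert=dup /storage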

@gdevenyi
Author

Given your statement of the final result, the largest devices were likely devices 1 and 2, since that was where the data ended up.

Yes, this is correct.

If this is a feature request

I guess it is now, since it is not currently possible to "un-balance" data off of a disk in preparation for removal. It sounds like the "allocate nothing" allocation preference will address this.

@kdave kdave added the bug label Nov 8, 2024
@kakra

kakra commented Dec 4, 2024

If this is a feature request: it's a fairly straightforward patch to disable allocation on some devices (a variant of the existing allocation preferences patch with the "allocate nothing" extension). Once that is merged, then this operation can be performed in two steps:

I've implemented a none-preferred mode in my patch set. I avoided a none-only mode to prevent unexpected out-of-space situations. For @gdevenyi's use case, if data is still allocated to the none-preferred device after a balance, there's always the option to add more space and balance that devid again.

kakra/linux#36
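A minimal sketch of that recovery path (assumed names: /dev/sdf as the temporary extra device and devid 3 as the device that still holds data):

    # Add temporary space so the allocator has somewhere to place the
    # relocated chunks.
    btrfs device add /dev/sdf /storage
    # Re-balance only the block groups that still touch the stuck device.
    btrfs balance start -ddevid=3 -mdevid=3 /storage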
