drain host is not filling target OSDs evenly #23
Comments
Hello! Indeed, that is a weakness of the current implementation. I hesitated to implement this in the initial tool because it seemed it would add considerable complexity. I'll add this to our internal TODO list, without any promises about turnaround time, but, as always, contributions are welcome!
Sorry, I obviously missed that part.
A constraint as in draining from a fixed OSD -> a fixed OSD? That is what I'm planning next: loop over pgremapper and run one backfill per OSD to one fixed target OSD. In that case you know beforehand that you are not going to fill an OSD too full or create imbalance.
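For illustration, here is a minimal sketch (not pgremapper's own logic) of that fixed-pairing idea: each source OSD is assigned one target up front, always the target with the fewest projected PGs, so you can see before any data moves that no target will be overfilled. The OSD IDs and PG counts are hypothetical inputs:

```go
package main

import (
	"fmt"
	"sort"
)

// fixedPairs assigns each draining (source) OSD one fixed target OSD,
// always picking the target that would end up with the fewest PGs.
// Because every pairing is decided up front, no target can end up
// unexpectedly overfull.
func fixedPairs(sourcePGs, targetPGs map[int]int) map[int]int {
	// Sort source OSD IDs so the pairing is deterministic.
	sources := make([]int, 0, len(sourcePGs))
	for osd := range sourcePGs {
		sources = append(sources, osd)
	}
	sort.Ints(sources)

	pairs := make(map[int]int)
	for _, src := range sources {
		// Pick the target with the lowest current PG count.
		best, bestCount := -1, 0
		for osd, count := range targetPGs {
			if best == -1 || count < bestCount {
				best, bestCount = osd, count
			}
		}
		pairs[src] = best
		// Account for the PGs this source will push onto the target.
		targetPGs[best] += sourcePGs[src]
	}
	return pairs
}

func main() {
	// Hypothetical example: drain OSDs 1-3 (with these PG counts) into
	// targets 10-12 that already hold some PGs.
	sourcePGs := map[int]int{1: 35, 2: 40, 3: 30}
	targetPGs := map[int]int{10: 120, 11: 110, 12: 125}
	fmt.Println(fixedPairs(sourcePGs, targetPGs)) // map[1:11 2:10 3:12]
}
```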
Adjusting the scoring algorithm seems like the best approach to me.
Exactly, and the scheme you describe is what we have historically done.
First: thanks for the tool. It's really useful. We use it to drain one host at a time (with pgremapper looping over its OSDs). We noticed however that the target OSDs are not evenly filled up (from a PG count view, or usage for that matter). This imbalance can get really big (e.g. more than 40 PGs). We can straighten that out mid-process by stopping new remaps and using the "balance-host" option, but that is a waste of time. It would be better to get those PGs on the right OSD the first time.
I have taken a look at the code and, if I understand correctly, the decision about which OSD is used as the target is handled in the function "calcPgMappingsToUndoUpmaps". It does not seem to take into account how many PGs are already on the target OSD. Is that correct? Or is there a heuristic that does take this into account?
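To make the suggestion concrete, here is a rough sketch of the kind of PG-count-aware target selection being discussed; the candidate structure and its fields are hypothetical stand-ins, not the actual internals of calcPgMappingsToUndoUpmaps:

```go
package main

import "fmt"

// candidate describes a possible target OSD for one PG move. The fields are
// hypothetical stand-ins, not pgremapper's internal state.
type candidate struct {
	osd        int
	currentPGs int // PGs already mapped to this OSD
	pendingIn  int // backfills already planned into this OSD by earlier picks
}

// pickTarget chooses the candidate with the lowest projected PG count
// (current plus already-planned), so targets fill up evenly as mappings
// are decided one after another.
func pickTarget(cands []candidate) int {
	best := cands[0]
	for _, c := range cands[1:] {
		if c.currentPGs+c.pendingIn < best.currentPGs+best.pendingIn {
			best = c
		}
	}
	return best.osd
}

func main() {
	cands := []candidate{
		{osd: 10, currentPGs: 130, pendingIn: 2},
		{osd: 11, currentPGs: 118, pendingIn: 0},
		{osd: 12, currentPGs: 118, pendingIn: 5},
	}
	fmt.Println("chosen target OSD:", pickTarget(cands)) // 11
}
```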
Note: we do not clear any upmaps on the target host before running pgremapper. Might this influence mapping decisions?