Issue: 498 meta service long run fix #523
Conversation
Codecov Report

@@            Coverage Diff             @@
##           master     #523       +/-   ##
===========================================
+ Coverage   56.51%   67.38%    +10.86%
===========================================
  Files         108      109         +1
  Lines       10300    10424       +124
  Branches     1402     1398         -4
===========================================
+ Hits         5821     7024      +1203
+ Misses       3894     2718      -1176
- Partials      585      682        +97
@@ -81,7 +81,7 @@ def meta_nightly(options, addln_opts):
     subprocess.check_call(options.dirpath + "test_meta_blk_mgr " + cmd_opts + addln_opts, stderr=subprocess.STDOUT,
                           shell=True)

-    cmd_opts = "--min_write_size=10485760 --max_write_size=104857600 --bitmap=1"
+    cmd_opts = "--min_write_size=10485760 --max_write_size=80857600 --bitmap=1"
why 80857600, is it the upper limit of total nblks in the portion?
yes, it is a little smaller than the upper limit.
JFYI: it translates to 2.4TB if blk size = 4K when storing the bitmap :)
But maybe we are looking for 10x of this or even more.
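
For context, a quick back-of-envelope check of that figure (assuming, as the comment implies, that the write size here is the bitmap's byte count and each bit tracks one 4K blk; neither unit is stated in the test script itself):

# Back-of-envelope check of the "2.4TB" figure above. Assumes
# max_write_size is a bitmap byte count and each bit tracks one 4K blk;
# both units are assumptions, not stated in the test script.
max_write_size = 80_857_600                 # bytes of bitmap data
bits = max_write_size * 8                   # one bit per blk
blk_size = 4 * 1024                         # 4K blk
covered_bytes = bits * blk_size             # data the bitmap can describe
print(f"{covered_bytes / 2**40:.2f} TiB")   # -> 2.41 TiB, i.e. ~2.4TB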
Thanks @xiaoxichen for bringing it up. My thought is to change the chunk size to uint64_t so each chunk can have a larger size. The key thing here is that we don't persist the whole data blkstore in one bitmap; we persist per blk allocator instance, which is per chunk. So the bitmap size maps to the chunk size.
Another thing is that the meta blkstore has a compression feature, so the bitmap can be significantly compressed before being written.
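
A minimal sketch of that per-chunk sizing idea (the function name and layout are hypothetical, not HomeStore's actual API):

# Hypothetical sketch: the persisted bitmap is sized by the chunk that
# its blk allocator instance owns, not by the whole data blkstore.
def bitmap_bytes_per_chunk(chunk_size_bytes: int, blk_size: int = 4096) -> int:
    nblks = chunk_size_bytes // blk_size   # blks tracked by this allocator
    return (nblks + 7) // 8                # one bit per blk, rounded up

# Even at the current uint32_t ceiling, a 4GB chunk needs only a 128KB
# bitmap, and the meta blkstore's compression would shrink it further.
print(bitmap_bytes_per_chunk(4 * 2**30))   # -> 131072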
ok, if it is per allocator then we are good?
yes, we are good, as in the nublox bitmap. We only update bitmap meta blks when a specific chunk is dirtied; an undirtied chunk's bitmap is untouched.
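
Roughly, the update path being described is this (a sketch with hypothetical names, not the actual nublox code):

# Sketch of dirty-chunk-only bitmap persistence; names are hypothetical.
dirty_chunks: set[int] = set()

def mark_dirty(chunk_id: int) -> None:
    dirty_chunks.add(chunk_id)           # an alloc/free dirties its chunk

def flush_bitmaps(write_bitmap_meta_blk) -> None:
    for chunk_id in sorted(dirty_chunks):
        write_bitmap_meta_blk(chunk_id)  # only dirtied chunks are rewritten
    dirty_chunks.clear()                 # undirtied bitmaps stay untouched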
lgtm
LGTM
Problem Desc:
We have a limitation on max chunk size (uint32_t), which is 4GB. Each chunk is carved into portions in the blk allocator, with each portion mapping to tens of thousands of available nblks. This test case set the max io size to a value larger than a portion's total nblks; since the io size is randomized during the test, whenever it came out larger than that limit no chunk could satisfy the allocation, so it returned failure.
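
To make the failure mode concrete, here is an illustrative sketch (the portion size and names are made up for illustration; the real values live in the blk allocator):

# Illustrative repro of the failure mode; numbers are made up.
import random

blk_size = 4096
portion_total_nblks = 20_000            # "tens of thousands" of nblks
max_write_size = 104_857_600            # old test ceiling (bytes)

io_size = random.randrange(10_485_760, max_write_size + 1)
if io_size // blk_size > portion_total_nblks:
    # 104857600 / 4096 = 25600 nblks, which can exceed a portion,
    # so no chunk can satisfy the allocation and the test fails.
    print("alloc fails: io needs more nblks than a portion holds")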
Fix:
Reduce the test's --max_write_size from 104857600 to 80857600 (see the diff above), keeping the randomized io size a little below the portion's upper limit.
Test:
The specific test that previously failed consistently passes with this change.