
Files with too many partial slabs when added via rclone mount #1090

Closed
Druffib opened this issue Mar 21, 2024 · 4 comments

Comments


Druffib commented Mar 21, 2024

Current Behavior

When copying files into an rclone (v1.64.0) mounted drive, the files in the renterd UI show too many partial slabs:

[screenshot: renterd UI listing files with multiple partial slabs]

Files uploaded via the renterd UI show the expected health:

[screenshot: files uploaded via the renterd UI showing expected health]

Expected Behavior

I would expect at most one partial slab waiting for upload for large files (>> 40 MB).

Steps to Reproduce

  1. Set up a drive mount via renterd-s3 and rclone (a sketch of such a setup follows this list)
  2. Copy files to the drive
  3. Observe partial slabs
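
For reference, a minimal sketch of such a setup, loosely following the Sia rclone guide linked later in this thread; the remote name, endpoint, credentials, bucket, and drive letter are placeholder assumptions, not values from this report:

  # %APPDATA%\rclone\rclone.conf (Linux/macOS: ~/.config/rclone/rclone.conf)
  [renterd]                                # placeholder remote name
  type = s3
  provider = Other
  endpoint = http://127.0.0.1:8080         # assumed renterd S3 gateway address
  access_key_id = <renterd S3 key ID>
  secret_access_key = <renterd S3 secret>

  # Mount the default bucket as a Windows drive (drive letter is arbitrary):
  rclone mount renterd:default X: --vfs-cache-mode writes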

Version

v1.0.6-beta.1

What operating system did the problem occur on (e.g. Ubuntu 22.04, macOS 12.0, Windows 11)?

Windows 11

Autopilot Config

{ "contracts": { "set": "autopilot", "amount": 80, "allowance": "3110300039000000000000000000", "period": 6048, "renewWindow": 2016, "download": 700000000000, "upload": 700000000000, "storage": 500000000000, "prune": false }, "hosts": { "allowRedundantIPs": false, "maxDowntimeHours": 336, "minRecentScanFailures": 10, "scoreOverrides": null } }

Bus Config

{ "default": "autopilot" } { "hostBlockHeightLeeway": 6, "maxContractPrice": "1000000000000000000000000", "maxDownloadPrice": "2221642885000000000000000000", "maxRPCPrice": "1000000000000000000000", "maxStoragePrice": "154280755787", "maxUploadPrice": "333246433000000000000000000", "migrationSurchargeMultiplier": 10, "minAccountExpiry": 86400000000000, "minMaxCollateral": "10000000000000000000000000", "minMaxEphemeralAccountBalance": "1000000000000000000000000", "minPriceTableValidity": 300000000000 } { "minShards": 10, "totalShards": 30 } { "enabled": true, "slabBufferMaxSizeSoft": 4294967296 }

Contract Set Contracts

80

Anything else?

No response

Druffib added the bug label on Mar 21, 2024
mike76-dev (Contributor) commented:

Isn't it because the chunk size set in rclone isn't aligned with the Sia slab size?

Druffib (Author) commented Mar 21, 2024

> Isn't it because the chunk size set in rclone isn't aligned with the Sia slab size?

No idea. I followed this guide to set it up: https://docs.sia.tech/sia-integrations/s3-integrations/rclone

ChrisSchinnerl (Member) commented:

Hi @Druffib. As Mike correctly said, that's because rclone uploads files in small chunks. We will need to update our docs to make people aware of this.

In the meantime, you can try this:

rclone sync <src> <dst> --s3-chunk-size 40MiB --fast-list

Assuming you are using the default redundancy settings, Sia's slab size is 40 MiB. By setting --s3-chunk-size to match it, you should receive at most one partial slab per object. --fast-list is a nice addition for sync, since it speeds things up a lot when you have lots of files.
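
Since the report used rclone mount rather than sync, the same chunk-size flag should carry over to the mount command too; a hedged sketch (remote name, bucket, and drive letter are placeholders):

  rclone mount renterd:default X: --s3-chunk-size 40MiB --vfs-cache-mode writes

Note that --s3-chunk-size only affects multipart uploads: files below rclone's upload cutoff (--s3-upload-cutoff, 200 MiB by default) go up in a single request regardless.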

ChrisSchinnerl removed the bug label on Mar 22, 2024
Druffib (Author) commented Mar 22, 2024

Excellent, it's looking better with the suggested switches, though rclone tells me that --fast-list does nothing for a mount. Additionally, even before I changed the rclone arguments, the partial slabs appear to have uploaded overnight and are now showing the expected redundancy status.
