This has been rumbling along in the background for several years, and I've finally got concrete proof that completely filling SD cards is a bad thing to do.
You can get performant SD cards from major vendors (so far, Samsung and SanDisk) to degrade significantly if root filesystem usage approaches 100%. Sequential write speed for new files drops noticeably, but more disconcertingly, files written ages ago become slower to read.
My guess is that there is hot/cold reclaim going on, and in the fully utilised case cold flash blocks are sacrificed on the altar of not-completely-killing-write-performance. fstrim can't save you if the fs is nearly full, and it appears that significant free space extents are required for the card to recover performance.
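For reference, this is the sort of thing I mean - checking how much free space the filesystem actually has and triggering a trim by hand (a rough sketch; the mount point is illustrative and assumes the kernel and card support discard):

```sh
# See how full the root filesystem is - near 100% there is little for the card to reclaim.
df -h /
# Ask the card to discard blocks the filesystem no longer uses; -v reports how much was trimmed.
sudo fstrim -v /
```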
https://www.kingston.com/unitedkingdom/en/blog/pc-performance/overprovisioning
Overprovisioning flash storage has been a thing for ages - by downsizing the "allocated" space in terms of the sum of partition sizes, certain sectors are never written to, so the flash firmware is free to use them for garbage collection, reducing write amplification. Note that in the article above, SSDs have some internal overprovisioned space that is not available for host use - in the very cost-constrained world of SD cards, this space is going to be tiny if it exists at all.
raspi-config/usr/lib/raspi-config/init_resize.sh, line 60 (commit 4832cbd):
Here TARGET_END is always the last block of the block device - overprovisioning here would mean reducing TARGET_END by a fraction of ROOT_DEV_SIZE.
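A minimal sketch of what that change might look like, assuming TARGET_END and ROOT_DEV_SIZE are in sectors as used elsewhere in the script; the variable name OVERPROVISION_PCT and the 7% value are placeholders, not the actual init_resize.sh code:

```sh
# Reserve a fraction of the device as never-partitioned space so the
# card firmware can use it for garbage collection.
OVERPROVISION_PCT=7                                  # placeholder value - needs testing
RESERVE=$((ROOT_DEV_SIZE * OVERPROVISION_PCT / 100)) # sectors to leave unallocated
TARGET_END=$((TARGET_END - RESERVE))                 # pull the root partition end back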
But what to set that fraction to? In that article, Kingston recommend 7%. Sabrent recommend 10%. I can't find any recommendations for SD cards. I suppose I should run some tests.
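Something like this is what I have in mind for the tests - a rough sketch, with illustrative paths and sizes, assuming the card is mounted at /mnt/sd and has been pre-filled to the level under test:

```sh
# Sequential write speed for a new file (O_DIRECT to bypass the page cache).
dd if=/dev/zero of=/mnt/sd/new-file bs=1M count=512 oflag=direct conv=fsync
# Read-back speed of a file written long before the card was filled;
# drop caches first so we measure the card, not RAM.
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/mnt/sd/old-file of=/dev/null bs=1M iflag=direct
```

Repeating that at a few fill levels and overprovisioning fractions should show where the cliff is.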