Gluster architecture help #3409
-
Hello all,

My intent is to layer Gluster onto ZFS bricks, the primary reason being that ZFS can use my NVMe drive as a cache disk to improve read/write performance. Each node has 10 HDD platter disks, configured in raidz2 with the NVMe as cache. I have 5 nodes, and I did some testing.

First, I tried a Gluster dispersed volume with 4+1 redundancy so I could lose 1 node without issue. This seems to perform very well. My dilemma is that in this configuration, to expand I have to add 5 nodes at a time moving forward, correct?

Second question: would it be better to just treat each of the 10 disks as a brick, eliminating the ZFS layer and NVMe cache, and do something like 8+2? The performance impact of increasing the brick count and the erasure-code redundancy level isn't clear to me, so I'm hoping someone with experience can chime in. With the setup above I'm also losing a lot of space to parity, since redundancy is paid twice: once in raidz2 and again in the erasure code. So I'm considering tearing it down and rebuilding it as 8+2, but I'm curious what others think.
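(For readers' reference, a minimal sketch of the two commands behind this layout, assuming hypothetical hostnames `node1`..`node5`, device names, volume name `gv0`, and brick paths; only the 4+1 disperse geometry comes from the post itself.)

```sh
# On each of the 5 nodes: one raidz2 pool over the 10 HDDs,
# with the NVMe attached as an L2ARC cache device.
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
  cache nvme0n1
zfs create tank/brick

# From any one node: a single dispersed subvolume of 5 bricks,
# 4 data + 1 redundancy (tolerates the loss of any 1 brick/node).
gluster volume create gv0 disperse 5 redundancy 1 \
  node1:/tank/brick node2:/tank/brick node3:/tank/brick \
  node4:/tank/brick node5:/tank/brick
gluster volume start gv0
```

One caveat worth noting on the cache assumption: a `cache` vdev is L2ARC, which accelerates reads only; accelerating synchronous writes would take a separate `log` (SLOG) device.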
-
Hi,

The answer to question 1 is yes: you'll need 5 more bricks, turning it into a distributed-dispersed volume. As for question 2: just try it ;) By the way, do you really need raidz2? Isn't the fault tolerance of raidz1 plus regular scrubbing enough?
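(A minimal sketch of what that expansion would look like, reusing the hypothetical `gv0` volume and brick paths from the sketch above; with a 4+1 geometry, bricks must be added in multiples of 5.)

```sh
# Add a second 4+1 disperse subvolume (shown here on 5 new nodes,
# though the bricks could also live on the existing nodes), turning
# gv0 into a distributed-dispersed volume.
gluster volume add-brick gv0 \
  node6:/tank/brick node7:/tank/brick node8:/tank/brick \
  node9:/tank/brick node10:/tank/brick

# Spread existing data across both subvolumes.
gluster volume rebalance gv0 start
gluster volume rebalance gv0 status
```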
-
I ended up not layering on ZFS and just using XFS on each drive as a brick. In my testing it's actually performing better than when layered on ZFS. I'm doing a mass copy now, so we'll see how it holds up once it has some load and data on it.
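(In case it helps anyone weighing the same trade-off, a sketch of that per-disk XFS brick layout, with hypothetical device names and paths. Note that with 10 disks per node across 5 nodes, an 8+2 set places 2 bricks of each subvolume on every node, which is exactly the redundancy needed to survive 1 node failure.)

```sh
# On each node: format every HDD as its own XFS brick.
# (-i size=512 is the inode size commonly recommended for Gluster bricks.)
for d in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk; do
  mkfs.xfs -f -i size=512 /dev/$d
  mkdir -p /bricks/$d
  mount /dev/$d /bricks/$d
  mkdir -p /bricks/$d/brick   # brick dir under the mount point
done

# From any one node: an 8+2 disperse set (disperse 10, redundancy 2).
# Only the first 10 of the 50 bricks are listed here for brevity; the
# full command would enumerate all of them, yielding 5 distributed
# 8+2 subvolumes. Gluster warns when one set has multiple bricks on
# the same host, hence 'force'.
gluster volume create gv0 disperse 10 redundancy 2 \
  node1:/bricks/sdb/brick node1:/bricks/sdc/brick \
  node2:/bricks/sdb/brick node2:/bricks/sdc/brick \
  node3:/bricks/sdb/brick node3:/bricks/sdc/brick \
  node4:/bricks/sdb/brick node4:/bricks/sdc/brick \
  node5:/bricks/sdb/brick node5:/bricks/sdc/brick \
  force
```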
-
Hello Edrock200,