Hi all, I am a new user of GlusterFS and my goal is to build an infrastructure with 3 servers using a dispersed model (2+1). Later, if we need more storage, I will move the configuration to a distributed-dispersed model (4+2). I am testing the first configuration on 3 VMs (2+1) and everything is OK.
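For context, the initial 2+1 volume was created with something like the following; this is reconstructed rather than copy-pasted, so the exact options may differ slightly:

# disperse 3 with redundancy 1 gives the 2+1 layout shown below
gluster volume create gfs disperse 3 redundancy 1 \
  CIBLE01:/data/glusterfs/myvol1/brick1/brick \
  CIBLE02:/data/glusterfs/myvol1/brick1/brick \
  CIBLE03:/data/glusterfs/myvol1/brick1/brick
gluster volume start gfs
# restrict client access to the LAN (this is the auth.allow value visible in the info output)
gluster volume set gfs auth.allow '192.168.0.*'

//here is the result of gluster volume info before adding the 3 new nodes//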
Volume Name: gfs
Type: Disperse
Volume ID: 3828885c-5be6-4338-9025-03f404150205
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: CIBLE01:/data/glusterfs/myvol1/brick1/brick
Brick2: CIBLE02:/data/glusterfs/myvol1/brick1/brick
Brick3: CIBLE03:/data/glusterfs/myvol1/brick1/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
auth.allow: 192.168.0.*
Then I would like to add 3 new VMs (CIBLE04, CIBLE05, CIBLE06) to my volume named "gfs".
I use the commands: gluster peer probe CIBLE04; gluster peer probe CIBLE05; gluster peer probe CIBLE06
//here is the result of gluster peer status on my server CIBLE01 after probing the 3 new servers//
Number of Peers: 5
Hostname: CIBLE03.neosaiyan
Uuid: 43e2aab2-000e-4f8a-b2aa-ea66f7bbe149
State: Peer in Cluster (Connected)
Hostname: CIBLE02.neosaiyan
Uuid: dff69f3f-c0d5-4b4c-9d53-4234a1ca5d66
State: Peer in Cluster (Connected)
Hostname: CIBLE04.neosaiyan
Uuid: 4df5a96f-bb9a-4aa6-a254-f431c7e70c75
State: Peer in Cluster (Connected)
Hostname: CIBLE05.neosaiyan
Uuid: 323d38fc-fb28-486c-b335-896649fdeb5b
State: Peer in Cluster (Connected)
Hostname: CIBLE06.neosaiyan
Uuid: 119bdefe-f3a4-430e-bb1c-3df6802806e0
State: Accepted peer request (Connected)
Everything is OK again. Now I want to add the 3 bricks, one per new server, with the command:
gluster volume add-brick gfs CIBLE04:/data/glusterfs/myvol1/brick1/brick CIBLE05:/data/glusterfs/myvol1/brick1/brick CIBLE06:/data/glusterfs/myvol1/brick1/brick
I get: volume add-brick: success
//here is the result of gluster volume info after adding the 3 new nodes//
Volume Name: gfs
Type: Distributed-Disperse
Volume ID: 3828885c-5be6-4338-9025-03f404150205
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: CIBLE01:/data/glusterfs/myvol1/brick1/brick
Brick2: CIBLE02:/data/glusterfs/myvol1/brick1/brick
Brick3: CIBLE03:/data/glusterfs/myvol1/brick1/brick
Brick4: CIBLE04:/data/glusterfs/myvol1/brick1/brick
Brick5: CIBLE05:/data/glusterfs/myvol1/brick1/brick
Brick6: CIBLE06:/data/glusterfs/myvol1/brick1/brick
Options Reconfigured:
auth.allow: is_synctasked
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
//here is the result of gluster volume status//
Status of volume: gfs
Gluster process                                       TCP Port  RDMA Port  Online  Pid
---------------------------------------------------------------------------------------
Brick CIBLE01:/data/glusterfs/myvol1/brick1/brick     N/A       N/A        N       N/A
Brick CIBLE02:/data/glusterfs/myvol1/brick1/brick     49152     0          Y       488
Brick CIBLE03:/data/glusterfs/myvol1/brick1/brick     49152     0          Y       492
Brick CIBLE04:/data/glusterfs/myvol1/brick1/brick     49152     0          Y       481
Brick CIBLE05:/data/glusterfs/myvol1/brick1/brick     49152     0          Y       477
Brick CIBLE06:/data/glusterfs/myvol1/brick1/brick     49152     0          Y       477
Self-heal Daemon on localhost                         N/A       N/A        Y       510
Self-heal Daemon on CIBLE02.neosaiyan                 N/A       N/A        Y       499
Self-heal Daemon on CIBLE04.neosaiyan                 N/A       N/A        Y       460
Self-heal Daemon on CIBLE03.neosaiyan                 N/A       N/A        Y       503
Self-heal Daemon on CIBLE06.neosaiyan                 N/A       N/A        Y       456
Self-heal Daemon on CIBLE05.neosaiyan                 N/A       N/A        Y       456

Task Status of Volume gfs
There are no active volume tasks
You can see that the brick on server CIBLE01 is now offline. Do you have any idea why?
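For what it is worth, I assume the brick process could be restarted with something like the commands below, but I would first like to understand why it went down (the log file name is my guess based on the default naming scheme):

# on CIBLE01, check the brick log for the reason the brick process is not running
less /var/log/glusterfs/bricks/data-glusterfs-myvol1-brick1-brick.log
# restart only the brick processes that are down, without touching the ones that are running
gluster volume start gfs force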
I tried a rebalance.
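The commands were essentially the standard ones (I did not keep the exact invocation, so this is roughly it):

# spread the directory layout and existing data over the new disperse subvolume
gluster volume rebalance gfs start
# then poll until every node reports "completed"
gluster volume rebalance gfs status

Here is the rebalance status output: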
Node               Rebalanced-files  size    scanned  failures  skipped  status     run time in h:m:s
-----------------  ----------------  ------  -------  --------  -------  ---------  -----------------
CIBLE03.neosaiyan  0                 0Bytes  2        0         0        completed  0:00:05
CIBLE02.neosaiyan  0                 0Bytes  2        0         0        completed  0:00:05
CIBLE04.neosaiyan  0                 0Bytes  0        0         0        completed  0:00:04
CIBLE05.neosaiyan  0                 0Bytes  0        0         0        completed  0:00:05
CIBLE06.neosaiyan  0                 0Bytes  0        0         0        completed  0:00:05
localhost          0                 0Bytes  0        0         0        completed  0:00:00
volume rebalance: gfs: success
The rebalance is completed, but I have another problem:
- the same files are on CIBLE01, CIBLE02 and CIBLE03 (the first 3 servers) but not on CIBLE04, CIBLE05 and CIBLE06
- when I touch a file on CIBLE04, the file is present only on this server (roughly how I am checking is sketched below)
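To make the symptom concrete, this is roughly the kind of check I mean; /mnt/gfs is just a hypothetical client mount point, and the brick path is the one used above:

# looking directly in the brick directory on each server
ls /data/glusterfs/myvol1/brick1/brick/
# versus mounting the volume with the GlusterFS client and listing it through the mount
mount -t glusterfs CIBLE01:/gfs /mnt/gfs
ls /mnt/gfs/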
Any help would be very much appreciated!
Thanks in advance,
Micka