  1. Install GlusterFS
  2. Distributed Configuration
  3. GlusterFS Client
  4. GlusterFS + NFS-Ganesha
  5. Add Nodes (Bricks)
  6. Remove Nodes (Bricks)
  7. Replication Configuration
  8. Distributed + Replication
  9. Dispersed Configuration

Add Nodes (Bricks) to an existing Cluster.
For example, add a Node [glus03] to the existing Cluster as follows.

+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   glus01.srv.local   +----------+----------+   glus02.srv.local   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   glus03.srv.local   +----------+
|                      |
+----------------------+

[1] Install GlusterFS on the new Node, refer to here, and then create a directory for the GlusterFS volume at the same path as on the other Nodes.
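
For reference, a minimal sketch of this step on the new node, assuming it runs the same Debian/Ubuntu release as the existing nodes and uses the same [/glusterfs/distributed] brick path:

# install the server package and start the daemon
root@glus03:~# apt -y install glusterfs-server
root@glus03:~# systemctl enable --now glusterd
# create the brick directory at the same path as on the other nodes
root@glus03:~# mkdir -p /glusterfs/distributed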

[2] Add the new Node to the existing Cluster. (it's OK to run the following commands on any existing node)

# probe new node
root@glus01:~# gluster peer probe glus03
peer probe: success.
# confirm status
root@glus01:~# gluster peer status
Number of Peers: 2

Hostname: glus02
Uuid: f2fce535-c10e-41cb-8c9f-c6636ae38eff
State: Peer in Cluster (Connected)

Hostname: glus03
Uuid: 014a1e8f-967d-4709-bac4-ea1de8ef96cb
State: Peer in Cluster (Connected)

# confirm existing volume
root@glus01:~# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 8c8f0cb5-1833-4e7f-92b0-7842574bb0e4
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: glus01:/glusterfs/distributed
Brick2: glus02:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

# add new node
root@glus01:~# gluster volume add-brick vol_distributed glus03:/glusterfs/distributed
volume add-brick: success
# confirm volume info
root@glus01:~# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 8c8f0cb5-1833-4e7f-92b0-7842574bb0e4
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: glus01:/glusterfs/distributed
Brick2: glus02:/glusterfs/distributed
Brick3: glus03:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

# after adding the new node, rebalance the volume
root@glus01:~# gluster volume rebalance vol_distributed fix-layout start
volume rebalance: vol_distributed: success: Rebalance on vol_distributed has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 47914d1c-22b1-4de0-9cd3-7d271ff8db53
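# as the message above notes, progress can also be checked with the rebalance status command (output varies, omitted here)
root@glus01:~# gluster volume rebalance vol_distributed status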

# OK if [Status] turns to [completed]
root@glus01:~# gluster volume status
Status of volume: vol_distributed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glus01:/glusterfs/distributed         49152     0          Y       2630
Brick glus02:/glusterfs/distributed         49152     0          Y       2592
Brick glus03:/glusterfs/distributed         49152     0          Y       3427

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 47914d1c-22b1-4de0-9cd3-7d271ff8db53
Status               : completed
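
Note that [fix-layout] only recalculates the directory layout so that new files can be placed on the added brick; it does not move existing files. To also migrate existing data onto [glus03], a full rebalance can be run after the fix-layout task completes (a sketch, output omitted):

root@glus01:~# gluster volume rebalance vol_distributed start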
