  1. Install GlusterFS
  2. Distributed Configuration
  3. GlusterFS Client
  4. GlusterFS + NFS-Ganesha
  5. Add Nodes (Bricks)
  6. Remove Nodes (Bricks)
  7. Replication Configuration
  8. Distributed + Replication
  9. Dispersed Configuration

Remove Nodes (Bricks) from an existing Cluster.
For example, remove a Node [glus03] from the existing Cluster like follows.

+----------------------+          |          +----------------------+
| [GlusterFS Server#1] |10.0.0.51 | 10.0.0.52| [GlusterFS Server#2] |
|   glus01.srv.local   +----------+----------+   glus02.srv.local   |
|                      |          |          |                      |
+----------------------+          |          +----------------------+
           ⇑                      |                      ⇑
     file1, file3 ...             |               file2, file4 ...
                                  |
+----------------------+          |
| [GlusterFS Server#3] |10.0.0.53 |
|   glus03.srv.local   +----------+
|                      |
+----------------------+

[1] Remove a Node from the existing Cluster. (run on any existing node except the node that is being removed)

# confirm volume info
root@glus01:~# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 8c8f0cb5-1833-4e7f-92b0-7842574bb0e4
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: glus01:/glusterfs/distributed
Brick2: glus02:/glusterfs/distributed
Brick3: glus03:/glusterfs/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on

# start removing the brick from the volume
# data rebalance to the remaining bricks also runs
root@glus01:~# gluster volume remove-brick vol_distributed glus03:/glusterfs/distributed start
It is recommended that remove-brick be run with cluster.force-migration option disabled to prevent possible data corruption. Doing so will ensure that files that receive writes during migration will not be migrated and will need to be manually copied after the remove-brick commit operation. Please check the value of the option and update accordingly.
Do you want to continue with your current cluster.force-migration settings? (y/n) y
volume remove-brick start: success
ID: ebec7632-fd86-48d6-a01b-e311f2c95458
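
The warning above refers to the [cluster.force-migration] option. If you follow the recommendation, the option can be checked and disabled before running [remove-brick start]. The commands below are a general example (on recent GlusterFS releases the option is usually disabled by default):

# example : check the current value of [cluster.force-migration]
root@glus01:~# gluster volume get vol_distributed cluster.force-migration

# example : disable it if it is enabled
root@glus01:~# gluster volume set vol_distributed cluster.force-migration off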

# confirm status
root@glus01:~# gluster volume remove-brick vol_distributed glus03:/glusterfs/distributed status
     Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
   glus03                0        0Bytes             0             0             0            completed        0:00:00
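
Data migration may take some time on a volume with many files. As an optional example, the status can be polled periodically with the standard [watch] command:

# example : re-run the status command every 10 seconds until it shows [completed]
root@glus01:~# watch -n 10 gluster volume remove-brick vol_distributed glus03:/glusterfs/distributed status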

# after the [status] field shows [completed], commit the removal
root@glus01:~# gluster volume remove-brick vol_distributed glus03:/glusterfs/distributed commit
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
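
If any files are found on the removed brick, they can be copied back into the volume through a client mount point. The following is a general example (the mount point [/mnt] and the file name are placeholders, and it assumes the GlusterFS client is available on [glus03]):

# example : on glus03, list files left on the old brick path (excluding internal .glusterfs metadata)
root@glus03:~# find /glusterfs/distributed -type f -not -path '*/.glusterfs/*'

# example : if files remain, mount the volume and copy them in via the mount point
root@glus03:~# mount -t glusterfs glus01:/vol_distributed /mnt
root@glus03:~# cp -a /glusterfs/distributed/somefile /mnt/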

# confirm volume info
root@glus01:~# gluster volume info

Volume Name: vol_distributed
Type: Distribute
Volume ID: 8c8f0cb5-1833-4e7f-92b0-7842574bb0e4
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: glus01:/glusterfs/distributed
Brick2: glus02:/glusterfs/distributed
Options Reconfigured:
performance.client-io-threads: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
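
If the removed node is no longer needed in the trusted storage pool and hosts no bricks of other volumes, it can also be detached. For example:

# example : detach glus03 from the trusted storage pool (run on a remaining node)
root@glus01:~# gluster peer detach glus03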
