Replacing a failed disk in GlusterFS

The fault: one brick on node2 (sdb) is offline, as the volume status below shows.

Handling on the original host (method 1):

[root@node2 ~]# gluster volume status

Status of volume: testvol

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick              49152     0          Y       2684
Brick node2:/export/sdb1/brick              N/A       N/A        N       N/A    # sdb shows as offline
Brick node1:/export/sdc1/brick              49153     0          Y       2703
Brick node2:/export/sdc1/brick              49153     0          Y       2704
Brick node3:/export/sdb1/brick              49152     0          Y       2197
Brick node4:/export/sdb1/brick              49152     0          Y       2207
Brick node3:/export/sdc1/brick              49153     0          Y       2216
Brick node4:/export/sdc1/brick              49153     0          Y       2226
Self-heal Daemon on localhost               N/A       N/A        Y       1393
Self-heal Daemon on node1                   N/A       N/A        Y       3090
Self-heal Daemon on node4                   N/A       N/A        Y       2246
Self-heal Daemon on node3                   N/A       N/A        Y       2236

Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 8b3a04a0-0449-4424-a458-29f602571ea2
Status               : completed

The output above shows that Brick node2:/export/sdb1/brick is offline, so something has gone wrong with that disk.
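Before touching the hardware it is worth confirming the failure on node2 itself; a sketch, assuming the default log layout (the brick log file name is derived from the brick path, so adjust it to your environment):

dmesg | grep -i sdb                                          # kernel I/O errors on the failed disk
tail -n 50 /var/log/glusterfs/bricks/export-sdb1-brick.log   # brick process log, named after the brick path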

Solution:

1. Format the spare disk, create the new data directory, and mount it (run on the failed host):

mkfs.xfs -i size=512 /dev/sdd1                                # format the spare disk
mkdir -p /export/sdd1                                         # create the mount point
mount /dev/sdd1 /export/sdd1                                  # mount it
mkdir -p /export/sdd1/brick                                   # create the brick directory on the mounted filesystem
echo "/dev/sdd1 /export/sdd1 xfs defaults 0 0" >> /etc/fstab  # make the mount persistent across reboots
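A quick sanity check of the new mount before continuing (df and mount are standard tools; paths as in step 1):

df -h /export/sdd1          # confirm the new filesystem is mounted and has the expected size
mount | grep /export/sdd1   # confirm device and mount options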

2. Query the extended attributes of the brick directory on the healthy replica of the failed brick (run on a healthy host); see the sketch below.
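A minimal sketch, reusing the getfattr invocation that appears again in step 4, run against the healthy replica:

getfattr -d -m. -e hex /export/sdb1/brick/   # dump all xattrs of the healthy replica in hex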

3. Mount the volume and trigger self-heal (run on the failed host):

[root@node2 ~]# mount -t glusterfs node2:/testvol /mnt   # any unused mount point will do; node2:/testvol is the volume created earlier

[root@node2 ~]# mkdir /mnt/test   # create a directory that does not yet exist in the volume, then delete it (adjust to your mount point)

[root@node2 ~]# rmdir /mnt/test

[root@node2 ~]# setfattr -n trusted.non-existent-key -v abc /mnt   # set a dummy extended attribute to trigger self-heal

[root@node2 ~]# setfattr -x trusted.non-existent-key /mnt   # remove the dummy attribute again
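As an alternative to the mkdir/setfattr trick, the self-heal daemon can be asked to do the work directly; a sketch (both subcommands exist in the stock CLI, but behaviour can vary between versions):

gluster volume heal testvol        # heal only the entries that need healing
gluster volume heal testvol full   # force a full crawl and heal of the volume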

4. Check for pending (unsynced) changes.

Run on a healthy host:

[root@node1 gluster]# getfattr -d -m. -e hex /export/sdb1/brick/   # /export/sdb1/brick/ is the brick path you created

getfattr: Removing leading '/' from absolute path names

# file: export/sdb1/brick/

security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000

trusted.afr.dirty=0x000000000000000000000000

trusted.afr.testvol-client-1=0x000000000000000400000004   <<---- xattrs are marked from source brick node1:/export/sdb1/brick --->>

trusted.gfid=0x00000000000000000000000000000001

trusted.glusterfs.dht=0x0000000100000000000000003ffffffe

trusted.glusterfs.dht.commithash=0x3334343336363233303800

trusted.glusterfs.volume-id=0xe107222fa1134606a9a7fcb16e4c0709
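The trusted.afr.<volume>-client-N value is commonly read as three 32-bit big-endian counters: pending data, metadata, and entry operations. A minimal sketch to split the value shown above (field layout as documented for AFR; verify against your Gluster version):

val=000000000000000400000004    # trusted.afr.testvol-client-1 without the 0x prefix
printf 'data=%d metadata=%d entry=%d\n' "0x${val:0:8}" "0x${val:8:8}" "0x${val:16:8}"
# -> data=0 metadata=4 entry=4 : operations still to be replayed onto node2's missing brick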

Run on the failed host (a healthy host works just as well):

[root@node2 gluster]# gluster volume heal testvol info

Brick node1:/export/sdb1/brick

/

Status: Connected

Number of entries: 1

Brick node2:/export/sdb1/brick

Status: Transport endpoint is not connected

Number of entries: -        # the status shows the transport endpoint is not connected

Brick node1:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node2:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node3:/export/sdb1/brick

Status: Connected

Number of entries: 0

Brick node4:/export/sdb1/brick

Status: Connected

Number of entries: 0

Brick node3:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node4:/export/sdc1/brick

Status: Connected

Number of entries: 0

5. Complete the operation with a forced commit.

Run on the failed host:

[root@node2 ~]# gluster volume replace-brick testvol node2:/export/sdb1/brick node2:/export/sdd1/brick commit force

volume replace-brick: success: replace-brick commit force operation successful   # success
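After the forced commit, the self-heal daemon copies the data onto the new brick in the background; progress can be watched with the heal-info command already used above:

gluster volume heal testvol info   # entry counts drop back to 0 once node2:/export/sdd1/brick is fully healed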

[root@node2 ~]# gluster volume status

Status of volume: testvol

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick              49152     0          Y       2684
Brick node2:/export/sdd1/brick              49154     0          Y       10298  # the online disk is now sdd
Brick node1:/export/sdc1/brick              49153     0          Y       2703
Brick node2:/export/sdc1/brick              49153     0          Y       2704
Brick node3:/export/sdb1/brick              49152     0          Y       2197
Brick node4:/export/sdb1/brick              49152     0          Y       2207
Brick node3:/export/sdc1/brick              49153     0          Y       2216
Brick node4:/export/sdc1/brick              49153     0          Y       2226
Self-heal Daemon on localhost               N/A       N/A        Y       10307
Self-heal Daemon on node3                   N/A       N/A        Y       9728
Self-heal Daemon on node1                   N/A       N/A        Y       3284
Self-heal Daemon on node4                   N/A       N/A        Y       9736

Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 8b3a04a0-0449-4424-a458-29f602571ea2
Status               : not started

Run on a healthy host:

[root@node1 gluster]# getfattr -d -m. -e hex /export/sdb1/brick/

getfattr: Removing leading '/' from absolute path names

# file: export/sdb1/brick/

security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000

trusted.afr.dirty=0x000000000000000000000000

trusted.afr.testvol-client-1=0x000000000000000000000000   <<---- Pending changelogs are cleared.

trusted.gfid=0x00000000000000000000000000000001

trusted.glusterfs.dht=0x0000000100000000000000003ffffffe

trusted.glusterfs.dht.commithash=0x3334343336363233303800

trusted.glusterfs.volume-id=0xe107222fa1134606a9a7fcb16e4c0709

[root@node2 ~]# gluster volume heal testvol info

Brick node1:/export/sdb1/brick

Status: Connected

Number of entries: 0

Brick node2:/export/sdd1/brick

Status: Connected

Number of entries: 0

Brick node1:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node2:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node3:/export/sdb1/brick

Status: Connected

Number of entries: 0

Brick node4:/export/sdb1/brick

Status: Connected

Number of entries: 0

Brick node3:/export/sdc1/brick

Status: Connected

Number of entries: 0

Brick node4:/export/sdc1/brick

Status: Connected

Number of entries: 0

Method 2: replacing the brick across hosts

Assume node2's sdb1 has failed.

Bring in a new host, node5. Formatting node5's disk, mounting it, and installing Gluster follow the same steps as above (see the sketch below):
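A minimal sketch of that preparation on node5, assuming the same disk layout as step 1 (the glusterfs-server package and its repository depend on your distribution):

yum install -y glusterfs-server    # package/repo name is distribution-dependent (e.g. CentOS needs the centos-release-gluster repo first)
systemctl enable --now glusterd    # start the management daemon
mkfs.xfs -i size=512 /dev/sdb1     # format node5's disk, as in step 1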

Add node5 to the trusted storage pool:

[root@node1 brick]# gluster peer probe node5

peer probe: success.

Mount the disk:

[root@node5 ~]# mkdir -p /export/sdb1 && mount /dev/sdb1 /export/sdb1
[root@node5 ~]# echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
[root@node5 ~]# mount -a && mount

Run the following command:

[root@node5 ~]# gluster volume replace-brick testvol node2:/export/sdb1/brick node5:/export/sdb1/brick commit force

volume replace-brick: success: replace-brick commit force operation successful

The stand-in brick can stay in service permanently, or, once node2's sdb1 has been repaired, the data can be moved back with the following command:

[root@node2 ~]# gluster volume replace-brick testvol node5:/export/sdb1/brick node2:/export/sdb1/brick commit force

volume replace-brick: success: replace-brick commit force operation successful
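If node5 was only a temporary stand-in, it can be detached from the trusted pool once the fail-back has finished and no bricks remain on it (gluster peer detach is a standard CLI command):

gluster peer detach node5   # only after node5 no longer hosts any brick of any volume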

Status before switching back (node5 still serving the brick):

[root@node1 brick]# gluster volume status

Status of volume: testvol

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick              49152     0          Y       2085
Brick node5:/export/sdb1/brick              49152     0          Y       18229
Brick node1:/export/sdc1/brick              49153     0          Y       2076
Brick node2:/export/sdc1/brick              49153     0          Y       2131
Brick node3:/export/sdb1/brick              49152     0          Y       2197
Brick node4:/export/sdb1/brick              49152     0          Y       2207
Brick node3:/export/sdc1/brick              49153     0          Y       2216
Brick node4:/export/sdc1/brick              49153     0          Y       2226
Self-heal Daemon on localhost               N/A       N/A        Y       10565
Self-heal Daemon on node2                   N/A       N/A        Y       2265
Self-heal Daemon on node3                   N/A       N/A        Y       10416
Self-heal Daemon on node4                   N/A       N/A        Y       10400
Self-heal Daemon on node5                   N/A       N/A        Y       18238

Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 8b3a04a0-0449-4424-a458-29f602571ea2
Status               : not started

Status after switching back (the brick is on node2 again):

[root@node1 gluster]# gluster volume status

Status of volume: testvol

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node1:/export/sdb1/brick              49152     0          Y       2085
Brick node2:/export/sdb1/brick              49153     0          Y       10208  # the brick is back on node2
Brick node1:/export/sdc1/brick              49153     0          Y       2076
Brick node2:/export/sdc1/brick              49152     0          Y       3474
Brick node3:/export/sdb1/brick              49152     0          Y       2197
Brick node4:/export/sdb1/brick              49152     0          Y       2207
Brick node3:/export/sdc1/brick              49153     0          Y       2216
Brick node4:/export/sdc1/brick              49153     0          Y       2226
Self-heal Daemon on localhost               N/A       N/A        Y       10684
Self-heal Daemon on node3                   N/A       N/A        Y       10498
Self-heal Daemon on node5                   N/A       N/A        Y       10075
Self-heal Daemon on node4                   N/A       N/A        Y       10488
Self-heal Daemon on node2                   N/A       N/A        Y       10201

Task Status of Volume testvol
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 8b3a04a0-0449-4424-a458-29f602571ea2
Status               : not started

最后編輯于
?著作權歸作者所有,轉載或內容合作請聯系作者
平臺聲明:文章內容(如有圖片或視頻亦包括在內)由作者上傳并發布,文章內容僅代表作者本人觀點,簡書系信息發布平臺,僅提供信息存儲服務。

推薦閱讀更多精彩內容