rook-ceph Expansion

Expanding an RBD volume. First, list all the images in the pool:

bash-4.4$ rbd -p replicapool ls
csi-vol-9af80776-c006-11ee-a4cc-06a165892aff
csi-vol-9af80a68-c006-11ee-a4cc-06a165892aff
csi-vol-9af80a7e-c006-11ee-a4cc-06a165892aff
bash-4.4$

The resize commands are as follows:

kubectl exec -it -n rook-ceph rook-ceph-tools-b8c679f95-llvmp -- sh
sh-4.2# rbd -p replicapool ls
pvc-586fd31d-75ed-11e9-a901-26def9e195d0
sh-4.2# rbd -p   status
rbd: image name was not specified
sh-4.2# rbd -p replicapool resize --size 3096 pvc-586fd31d-75ed-11e9-a901-26def9e195d0
Resizing image: 100% complete...done.

Explanation: in rbd -p replicapool resize --size 3096, the size is given in MiB, and pvc-586fd31d-75ed-11e9-a901-26def9e195d0 is the name of the image. The next step is to enter the csi-rbdplugin pod on the node where the volume is mapped and refresh the device there.

![[image-20240816142712085.png]]
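Since the --size flag is in MiB, whole gibibytes come out as multiples of 1024; the 3096 used above is slightly more than 3 GiB (an exact 3 GiB would be 3072). A quick conversion:

```shell
# Convert a target size in GiB to the MiB value expected by `rbd resize --size`
TARGET_GIB=3
echo $((TARGET_GIB * 1024))   # -> 3072
```

Also worth noting: for CSI-provisioned volumes, if the StorageClass has allowVolumeExpansion: true, you can simply raise spec.resources.requests.storage on the PVC and the CSI driver performs both the rbd resize and the filesystem grow for you.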

Run the following commands in the csi-rbdplugin pod:

## If the filesystem is ext3/ext4, use resize2fs to grow it
resize2fs /dev/rbd0
## If the filesystem is xfs, resize2fs fails with the following error
resize2fs 1.44.1 (24-Mar-2018)
resize2fs: Bad magic number in super-block while trying to open /dev/rbd0
Couldn't find valid filesystem superblock.

## For xfs, grow the filesystem with xfs_growfs instead
xfs_growfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=9, agsize=31744 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 262144 to 792576
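The numbers check out: the data section grew from 262144 to 792576 blocks of 4096 bytes each, i.e. from 1024 MiB to exactly the 3096 MiB requested by the resize. To avoid reaching for the wrong grow tool in the first place, you can branch on the filesystem type; a minimal sketch (on a live node the type would come from blkid -o value -s TYPE /dev/rbd0):

```shell
# Pick the right grow command based on the filesystem type string.
grow_tool() {
  case "$1" in
    ext2|ext3|ext4) echo "resize2fs" ;;
    xfs)            echo "xfs_growfs" ;;
    *)              echo "unknown" ;;
  esac
}
grow_tool ext4    # -> resize2fs
grow_tool xfs     # -> xfs_growfs
```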

Cleaning up rook-ceph. Delete the cluster first and wait for its resources to be removed before deleting the operator:

kubectl delete -f cluster.yaml
kubectl delete -f operator.yaml
kubectl delete -f common.yaml

Then delete the rook data directory on every node:

rm -fr /var/lib/rook/*

Reset the disks that were used by Ceph:

#!/usr/bin/env bash
DISK="/dev/sdb"
# Zap the partition table and all GPT data structures on the disk
sgdisk --zap-all $DISK
# Remove any device-mapper entries that ceph-volume left behind
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# Remove the leftover ceph-* volume group device nodes
rm -rf /dev/ceph-*

# The same wipe as a one-liner (here for /dev/xvdf):
DISK="/dev/xvdf" && sgdisk --zap-all $DISK && ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %;rm -rf /dev/ceph-*
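sgdisk --zap-all is irreversible, so it is worth guarding the DISK variable before the wipe runs; a small sketch (the guard itself is an addition, not part of the original script):

```shell
# Refuse to wipe anything that is not actually a block device,
# which catches typos in the DISK variable before sgdisk runs.
zap_guard() {
  if [ -b "$1" ]; then
    echo "zap $1"      # here you would run: sgdisk --zap-all "$1"
  else
    echo "skip $1"     # not a block device - bail out
  fi
}
zap_guard /nonexistent   # -> skip /nonexistent
```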

Other useful commands

# Increase replicapool's PG count
ceph osd pool set replicapool pg_num 200
# (older releases also need pgp_num raised to match: ceph osd pool set replicapool pgp_num 200)
# Delete replicapool (the pool name must be given twice, and mon_allow_pool_delete must be enabled)
ceph osd pool delete replicapool replicapool --yes-i-really-really-mean-it
# Create a 100 MiB image
rbd create replicapool/test --size 100
# Disable the rbd features that are not in the kernel module
rbd feature disable replicapool/test fast-diff deep-flatten object-map
# Map the image into the kernel
rbd map replicapool/test
/dev/rbd8
# Format it
mkfs.ext4 -m0 /dev/rbd8
# Mount
mount /dev/rbd8 /mount
# Unmount and unmap
umount /mount
rbd unmap /dev/rbd8

# Export / import an image
rbd export replicapool/test test
rbd import test replicapool/test1
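As with resize, the --size 100 above is in MiB. RBD stripes an image across RADOS objects of 4 MiB by default, so the 100 MiB test image is backed by 25 objects:

```shell
# 100 MiB image / default 4 MiB RBD object size = number of backing objects
IMAGE_MIB=100
OBJECT_MIB=4
echo $((IMAGE_MIB / OBJECT_MIB))   # -> 25
```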
Licensed under CC BY-NC-SA 4.0
Last updated on Jan 06, 2025 05:52 UTC