Creating an RBD with rook-ceph for Elasticsearch

[root@k-m1 rbd]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/rook-ceph-block created
Error from server (InternalError): error when creating "storageclass.yaml": Internal error occurred: failed calling webhook "cephblockpool-wh-rook-ceph-admission-controller-rook-ceph.rook.io": failed to call webhook: Post "https://rook-ceph-admission-controller.rook-ceph.svc:443/validate-ceph-rook-io-v1-cephblockpool?timeout=5s": x509: certificate has expired or is not yet valid: current time 2023-07-27T05:20:27Z is after 2023-07-08T09:09:53Z

[root@k-m1 rbd]# kubectl get nodes
NAME   STATUS   ROLES           AGE    VERSION
k-m1   Ready    control-plane   285d   v1.24.2
k-n1   Ready    <none>          285d   v1.24.2
k-n2   Ready    <none>          285d   v1.24.2
k-n3   Ready    <none>          285d   v1.24.2
k-n4   Ready    <none>          176d   v1.24.2
k-n5   Ready    <none>          176d   v1.24.2

[root@k-m1 rbd]# date
Thu Jul 27 13:22:06 CST 2023

[root@k-m1 rbd]# kubectl apply -f storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block unchanged

Solution:

Restart the operator.

https://github.com/rook/rook/issues/10719
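
Per the linked issue, restarting the operator regenerates the admission webhook's self-signed certificate, after which the manifest applies cleanly. A minimal sketch, assuming the default rook-ceph namespace and operator deployment name:

# restart the operator and wait for the new pod to come up
kubectl -n rook-ceph rollout restart deploy/rook-ceph-operator
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
# then re-apply the StorageClass manifest
kubectl apply -f storageclass.yaml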

How to resize an RBD:

e2fsck -fy /dev/rbd0

resize2fs /dev/rbd0

Recently a project ran out of storage space and started reporting that the disk was full.

The previous setup was a Swarm cluster with Ceph storage.

After checking the documentation, the block device image can be resized with the rbd resize command.

https://docs.ceph.com/en/latest/rbd/rados-rbd-cmds/#resizing-a-block-device-image

Run the resize accordingly:

rbd resize production-db --size 30720
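
In a rook-ceph deployment these rbd commands are normally run from the toolbox pod rather than on a host. A hedged sketch, assuming Rook's rook-ceph-tools toolbox is installed; replicapool is the CephBlockPool created earlier, and the image name (production-db, reused from the example above) should be whatever rbd ls actually shows:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd ls replicapool
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd resize replicapool/production-db --size 30720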

Then query the image info:

rbd info production-db


rbd image 'production-db':
	size 30 GiB in 7680 objects
	order 22 (4 MiB objects)
	id: 847ba96b8b4567
	block_name_prefix: rbd_data.847ba96b8b4567
	format: 2
	features: layering
	op_features:
	flags:
	create_timestamp: Mon Jul 12 15:22:55 2021

It looks like the resize succeeded!

Note that with rook-ceph, the RBD can be resized automatically just by updating the manifest.
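
A hedged sketch of that path: with the Ceph CSI driver the filesystem grow step is handled for you, so expanding the PersistentVolumeClaim is usually all that is needed. The PVC name below follows ECK's elasticsearch-data-<pod> convention and is an assumption; with ECK you would more typically raise the size in the volumeClaimTemplates of the Elasticsearch spec:

kubectl patch pvc elasticsearch-data-quickstart-es-data-nodes-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"30Gi"}}}}'

This requires allowVolumeExpansion: true on the StorageClass, as covered at the end of this post.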

But after mounting the volume, the usable space was still the old size. Why?

It turns out that on Linux, growing or shrinking an ext4 filesystem also requires the corresponding filesystem commands (e2fsck and resize2fs).

Just run the following:

# Map the RBD device
rbd map production-db
/dev/rbd2

# Run e2fsck and resize2fs; this is the key step!
e2fsck -fy /dev/rbd2
resize2fs /dev/rbd2

References:

https://swamireddy.wordpress.com/2016/05/13/ceph-rbd-volumesimages-live-resize/

Back in Elasticsearch, the cluster had gone red because shards could not be allocated. Ask the cluster why:

GET /_cluster/allocation/explain

{
  "note": "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index": "test-bulk-example2",
  "shard": 0,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "ALLOCATION_FAILED",
    "at": "2023-07-28T10:56:19.714Z",
    "failed_allocation_attempts": 5,
    "details": """failed shard on node [zQ8ojRb6TTecAdT8-z1jzQ]: failed to create shard, failure java.io.IOException: failed to write state to the first location tmp file /usr/share/elasticsearch/data/indices/TShZhtBJQrSj5Aji8pS9lA/0/state-3.st.tmp
	at org.elasticsearch.gateway.MetadataStateFormat.writeStateToFirstLocation(MetadataStateFormat.java:111)
	at org.elasticsearch.gateway.MetadataStateFormat.write(MetadataStateFormat.java:256)
	at org.elasticsearch.gateway.MetadataStateFormat.writeAndCleanup(MetadataStateFormat.java:197)
	at org.elasticsearch.index.shard.IndexShard.persistMetadata(IndexShard.java:3242)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:392)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:506)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:854)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:175)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:569)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShard(IndicesClusterStateService.java:508)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createIndicesAndUpdateShards(IndicesClusterStateService.java:493)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:226)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:538)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:524)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:497)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:428)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:154)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:891)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:257)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:223)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1589)
Caused by: java.io.IOException: No space left on device
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
	at sun.nio.ch.IOUtil.write(IOUtil.java:101)
	at sun.nio.ch.IOUtil.write(IOUtil.java:71)
	at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
	at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
	at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
	at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
	at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
	at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
	at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:104)
	at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:643)
	at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:411)
	at org.elasticsearch.gateway.MetadataStateFormat.doWriteToFirstLocation(MetadataStateFormat.java:132)
	at org.elasticsearch.gateway.MetadataStateFormat.writeStateToFirstLocation(MetadataStateFormat.java:103)
	... 22 more
	Suppressed: java.io.IOException: No space left on device
		at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
		at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
		at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
		at sun.nio.ch.IOUtil.write(IOUtil.java:101)
		at sun.nio.ch.IOUtil.write(IOUtil.java:71)
		at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
		at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
		at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
		at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
		at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
		at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
		at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
		at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:92)
		at org.elasticsearch.gateway.MetadataStateFormat.doWriteToFirstLocation(MetadataStateFormat.java:118)
		... 23 more
		Suppressed: java.io.IOException: No space left on device
			at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
			at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
			at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
			at sun.nio.ch.IOUtil.write(IOUtil.java:101)
			at sun.nio.ch.IOUtil.write(IOUtil.java:71)
			at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
			at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
			at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
			at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
			at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
			at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
			at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
			at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
			at java.io.FilterOutputStream.close(FilterOutputStream.java:182)
			at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:83)
			... 24 more
""",
    "last_allocation_status": "no"
  },
  "can_allocate": "no",
  "allocate_explanation": "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster that hold an in-sync copy of its data. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions": [
    {
      "node_id": "zQ8ojRb6TTecAdT8-z1jzQ",
      "node_name": "quickstart-es-data-nodes-0",
      "transport_address": "10.233.99.170:9300",
      "node_attributes": {
        "k8s_node_name": "k-n3",
        "xpack.installed": "true"
      },
      "node_decision": "no",
      "store": {
        "in_sync": true,
        "allocation_id": "LhEB8sh8RGuN68DcI2cDoA"
      },
      "deciders": [
        {
          "decider": "max_retry",
          "decision": "NO",
          "explanation": """shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed&metric=none] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2023-07-28T10:56:19.714Z], failed_attempts[5], failed_nodes[[zQ8ojRb6TTecAdT8-z1jzQ]], delayed=false, last_node[zQ8ojRb6TTecAdT8-z1jzQ], details[failed shard on node [zQ8ojRb6TTecAdT8-z1jzQ]: failed to create shard, failure java.io.IOException: failed to write state to the first location tmp file /usr/share/elasticsearch/data/indices/TShZhtBJQrSj5Aji8pS9lA/0/state-3.st.tmp
	at org.elasticsearch.gateway.MetadataStateFormat.writeStateToFirstLocation(MetadataStateFormat.java:111)
	at org.elasticsearch.gateway.MetadataStateFormat.write(MetadataStateFormat.java:256)
	at org.elasticsearch.gateway.MetadataStateFormat.writeAndCleanup(MetadataStateFormat.java:197)
	at org.elasticsearch.index.shard.IndexShard.persistMetadata(IndexShard.java:3242)
	at org.elasticsearch.index.shard.IndexShard.<init>(IndexShard.java:392)
	at org.elasticsearch.index.IndexService.createShard(IndexService.java:506)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:854)
	at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:175)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:569)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShard(IndicesClusterStateService.java:508)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.createIndicesAndUpdateShards(IndicesClusterStateService.java:493)
	at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:226)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:538)
	at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:524)
	at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:497)
	at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:428)
	at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:154)
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:891)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:257)
	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:223)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
	at java.lang.Thread.run(Thread.java:1589)
Caused by: java.io.IOException: No space left on device
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
	at sun.nio.ch.IOUtil.write(IOUtil.java:101)
	at sun.nio.ch.IOUtil.write(IOUtil.java:71)
	at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
	at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
	at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
	at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
	at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
	at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
	at org.apache.lucene.store.OutputStreamIndexOutput.getChecksum(OutputStreamIndexOutput.java:104)
	at org.apache.lucene.codecs.CodecUtil.writeCRC(CodecUtil.java:643)
	at org.apache.lucene.codecs.CodecUtil.writeFooter(CodecUtil.java:411)
	at org.elasticsearch.gateway.MetadataStateFormat.doWriteToFirstLocation(MetadataStateFormat.java:132)
	at org.elasticsearch.gateway.MetadataStateFormat.writeStateToFirstLocation(MetadataStateFormat.java:103)
	... 22 more
	Suppressed: java.io.IOException: No space left on device
		at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
		at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
		at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
		at sun.nio.ch.IOUtil.write(IOUtil.java:101)
		at sun.nio.ch.IOUtil.write(IOUtil.java:71)
		at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
		at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
		at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
		at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
		at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
		at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
		at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
		at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
		at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:92)
		at org.elasticsearch.gateway.MetadataStateFormat.doWriteToFirstLocation(MetadataStateFormat.java:118)
		... 23 more
		Suppressed: java.io.IOException: No space left on device
			at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
			at sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
			at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:136)
			at sun.nio.ch.IOUtil.write(IOUtil.java:101)
			at sun.nio.ch.IOUtil.write(IOUtil.java:71)
			at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:306)
			at sun.nio.ch.ChannelOutputStream.writeFully(ChannelOutputStream.java:68)
			at sun.nio.ch.ChannelOutputStream.write(ChannelOutputStream.java:105)
			at org.apache.lucene.store.FSDirectory$FSIndexOutput$1.write(FSDirectory.java:400)
			at java.util.zip.CheckedOutputStream.write(CheckedOutputStream.java:73)
			at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:125)
			at java.io.BufferedOutputStream.implFlush(BufferedOutputStream.java:251)
			at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:245)
			at java.io.FilterOutputStream.close(FilterOutputStream.java:182)
			at org.apache.lucene.store.OutputStreamIndexOutput.close(OutputStreamIndexOutput.java:83)
			... 24 more
], allocation_status[deciders_no]]]"""
        }
      ]
    },
    {
      "node_id": "zTKlxrOsSXKLbpY_gUWeGA",
      "node_name": "quickstart-es-data-nodes-1",
      "transport_address": "10.233.100.128:9300",
      "node_attributes": {
        "k8s_node_name": "k-n2",
        "xpack.installed": "true"
      },
      "node_decision": "no",
      "store": {
        "in_sync": false,
        "allocation_id": "3WcH3UclT0uuaLzBA8GJTg"
      }
    }
  ]
}
GET /_cluster/health
{
  "cluster_name": "quickstart",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 3,
  "number_of_data_nodes": 2,
  "active_primary_shards": 28,
  "active_shards": 39,
  "relocating_shards": 0,
  "initializing_shards": 2,
  "unassigned_shards": 19,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 65
}

Check the shard states:

GET _cat/shards?h=index,shard,prirep,state,unassigned.reason

The output shows which shards are unassigned and why. Because the maximum number of allocation retries had been exceeded, the shards have to be reallocated manually by running:

POST /_cluster/reroute?retry_failed=true

That triggers the retry.


This takes quite a while; you have to wait for the cluster status to return to green.
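
While waiting, it helps to confirm the data pod actually sees the enlarged volume and to watch the health change. A hedged sketch using the names from this post's ECK quickstart setup (pod quickstart-es-data-nodes-0, Elasticsearch resource quickstart; the namespace is assumed to be the current one):

# the data path should now show free space
kubectl exec quickstart-es-data-nodes-0 -- df -h /usr/share/elasticsearch/data
# ECK mirrors cluster health into the Elasticsearch resource, so just watch it go red -> yellow -> green
kubectl get elasticsearch quickstart -w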

How to insert data into an index from Kibana:

POST /test-bulk-example2/_doc
{
  "title": "hxfTitle 1689237412",
  "body": "Lorem ipsum dolor sit amet...",
  "published": "2023-10-04T04:05:52Z",
  "author": {
    "first_name": "Mary",
    "last_name": "Smith"
  }
}
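
Outside of Kibana, the same document can be indexed with curl. A hedged sketch that assumes ECK's quickstart naming for the credentials secret and HTTP service:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
kubectl port-forward service/quickstart-es-http 9200 &
curl -sk -u "elastic:$PASSWORD" -H 'Content-Type: application/json' \
  -X POST "https://localhost:9200/test-bulk-example2/_doc" \
  -d '{"title":"hxfTitle from curl","published":"2023-10-04T04:05:52Z"}'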

4. What are the symptoms of unassigned shards?

In the head plugin: long after Elasticsearch has started, one or more shards still show as gray.

5. Possible causes of unassigned shards?

1) INDEX_CREATED: unassigned because the index was just created via the index-creation API.
2) CLUSTER_RECOVERED: unassigned as a result of a full cluster recovery.
3) INDEX_REOPENED: unassigned as a result of opening or closing an index.
4) DANGLING_INDEX_IMPORTED: unassigned as a result of importing a dangling index.
5) NEW_INDEX_RESTORED: unassigned as a result of restoring into a new index.
6) EXISTING_INDEX_RESTORED: unassigned as a result of restoring into a closed index.
7) REPLICA_ADDED: unassigned because a replica was explicitly added.
8) ALLOCATION_FAILED: unassigned because shard allocation failed.
9) NODE_LEFT: unassigned because the node hosting it left the cluster.
10) REINITIALIZED: unassigned when a shard moves back from started to initializing (for example, with shadow replicas).
11) REROUTE_CANCELLED: allocation cancelled as a result of an explicit cancel-reroute command.
12) REALLOCATED_REPLICA: a better replica location was identified, so the existing replica allocation was cancelled.

Assorted operation notes:

Creating the Ceph cluster

Replace {kube-node1,kube-node2,kube-node3} below with the node names of your own cluster.

  • Label the nodes

Label the nodes that will run ceph-mon with ceph-mon=enabled:

kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-mon=enabled

Label the nodes that will run ceph-osd (that is, the storage nodes) with ceph-osd=enabled:

kubectl label nodes {kube-node1,kube-node2,kube-node3} ceph-osd=enabled

Label the nodes that will run ceph-mgr with ceph-mgr=enabled:

At most two ceph-mgr instances can run:

kubectl label nodes {kube-node1,kube-node2} ceph-mgr=enabled
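
As a quick check that the labels landed (a sketch; the node names are the same placeholders as above):

kubectl get nodes -L ceph-mon,ceph-osd,ceph-mgr
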
  • Create the cluster

Before creating it, update the node names and the corresponding device names in the storage.nodes section of cluster.yaml:

kubectl create -f cluster.yaml

Verifying the Ceph cluster

Check from the command line that the following pods have started; if they are all up, the deployment succeeded:

kubectl get po -n rook-ceph
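
The CephCluster resource created by cluster.yaml also reports its own status; its HEALTH column should eventually read HEALTH_OK:

kubectl -n rook-ceph get cephcluster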

ceph-block

https://rook.io/docs/rook/v1.8/ceph-block.html

[root@k8s-master03 ~]# kubectl  get storageclass
NAME            PROVISIONER         AGE
cephfs          ceph.com/cephfs     289d
rbd (default)   kubernetes.io/rbd   289d

[root@k8s-master03 ceph]# kubectl edit storageclasses.storage.k8s.io rbd

# Check whether the following field is present:
allowVolumeExpansion: true   # adding this field allows volumes to be expanded dynamically
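
If the field is missing, it can also be added without an interactive edit. A small sketch, assuming the StorageClass is named rbd as shown above:

kubectl patch storageclass rbd -p '{"allowVolumeExpansion":true}'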

Expanding a PVC

kubectl edit pvc/grafana-pvc -n kube-system

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 11Gi
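
To confirm the expansion went through, check the PVC afterwards; depending on the provisioner it may sit in a FileSystemResizePending condition until the pod remounts the volume:

kubectl -n kube-system get pvc grafana-pvc
kubectl -n kube-system describe pvc grafana-pvc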