Fixing NFS Access Failures on Kubernetes 1.24

Today I upgraded to Kubernetes 1.24 and found that newly created PVCs stayed stuck in the Pending state. After some checking, it turned out to be the same problem I had hit before; see my earlier post: "Fixing PVC creation failures after upgrading to Kubernetes v1.20".
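Before digging in, the stuck claim can be inspected directly; a quick check like the following (the claim name and namespace are placeholders) shows the provisioning failure in the PVC's Events section:

    kubectl get pvc -A
    kubectl describe pvc <pvc-name> -n <namespace>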

I then checked /etc/kubernetes/manifests/kube-apiserver.yaml: the RemoveSelfLink=false setting I had added earlier was indeed gone after the upgrade. I added it back the same way as before and waited for the API server to restart.

However, kube-apiserver would no longer start.

Checking kube-apiserver showed that RemoveSelfLink=false is no longer allowed in the new version. After some investigation and testing, the fix is to replace nfs-client-provisioner.
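For reference, this is roughly what the old workaround looked like in /etc/kubernetes/manifests/kube-apiserver.yaml (a sketch, not the full manifest). In v1.24 the RemoveSelfLink feature gate can no longer be disabled, so this flag now prevents kube-apiserver from starting and has to be removed rather than restored:

    spec:
      containers:
      - command:
        - kube-apiserver
        - --feature-gates=RemoveSelfLink=false   # no longer accepted in v1.24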

nfs-client-provisioner needs to be replaced with nfs-subdir-external-provisioner. I deploy it with Helm; the procedure is as follows:

  1. Add the Helm repository:

     helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
     helm repo update

  2. Remove the existing deployment (nfs-prod is the release name of my old nfs-client-provisioner install):

     helm uninstall nfs-prod

  3. Update the values file; the new file is as follows:

     image:
       repository: registry.cn-shanghai.aliyuncs.com/c7n/nfs-subdir-external-provisioner
     storageClass:
       name: <name>
       archiveOnDelete: false
       defaultClass: true
     nfs:
       server: <nfs-ip>
       path: <nfs-path>
     nodeSelector: {}
  4. Redeploy; a quick verification sketch follows below:

     helm install <name> nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f <value file>
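After the chart is installed, a minimal sanity check (the exact pod name depends on the release name you chose) confirms the provisioner is running and the new StorageClass exists:

    kubectl get pods             # the nfs-subdir-external-provisioner pod should be Running
    kubectl get storageclass     # the new class should be listed and marked (default)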

The image name is:

image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0

The nfs-class deployment files are as follows:

[root@k-m1 nfs]# ll
total 12
-rw-r--r--. 1 root root  154 Jan 14  2022 class.yaml
-rw-r--r--. 1 root root  981 Apr 12  2022 deployment.yaml
-rw-r--r--. 1 root root 1505 Jan 14  2022 rbac.yaml
[root@k-m1 nfs]# vi class.yaml
[root@k-m1 nfs]# vi deployment.yaml
[root@k-m1 nfs]# vi rbac.yaml
[root@k-m1 nfs]# ll
total 12
-rw-r--r--. 1 root root  154 Jan 14  2022 class.yaml
-rw-r--r--. 1 root root 1004 Oct 17 11:23 deployment.yaml
-rw-r--r--. 1 root root 1505 Oct 17 11:23 rbac.yaml
[root@k-m1 nfs]# tree
.
├── class.yaml
├── deployment.yaml
└── rbac.yaml

0 directories, 3 files
[root@k-m1 nfs]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
[root@k-m1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.7.20.26
            - name: NFS_PATH
              value: /data/nfs-share
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.7.20.26
            path: /data/nfs-share
[root@k-m1 nfs]# cat rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nacos
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nacos
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k-m1 nfs]#
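To confirm that dynamic provisioning works again, a minimal test PVC can be created against the new StorageClass. This is just a sketch: the class name managed-nfs-storage is taken from class.yaml above, while the claim name and size are chosen arbitrarily:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-claim
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

Applying the manifest and running kubectl get pvc test-claim should show the claim move from Pending to Bound.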

Tested; everything works.

References:
https://qiaolb.github.io/k8sv-nfs.html
http://www.mydlq.club/article/109/
