K8s: Switching the Master IP

Modify the IP

oldip=10.0.37.147
newip=10.0.37.12
cd /etc/kubernetes/
# Find every file that still references the old IP, replace it in place, then verify
find . -type f | xargs grep $oldip
find . -type f | xargs sed -i "s/$oldip/$newip/g"
find . -type f | xargs grep $newip
systemctl status kubelet
systemctl status kubelet -l
kubectl get nodes
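
Before running the in-place sed it can be worth snapshotting /etc/kubernetes first; a minimal sketch (the backup path is an arbitrary choice, not part of the original steps):

# Hypothetical safety step: keep a copy of /etc/kubernetes before editing it in place
cp -a /etc/kubernetes /etc/kubernetes.bak.$(date +%F)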

Regenerate the certificates

kubeadm config print init-defaults  > kubeadm-config.yaml
# Then modify the addresses in the generated configuration file
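
One way to patch the generated defaults without opening an editor (a sketch that reuses the $newip variable set in the first step; the remaining fields still need to be adjusted by hand as shown in the file below):

sed -i "s/advertiseAddress: .*/advertiseAddress: $newip/" kubeadm-config.yaml
grep advertiseAddress kubeadm-config.yaml   # confirm the change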

cat kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.37.12     # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1        # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.200.16:16443"    # virtual IP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io    # adjust the image repository to your own environment
kind: ClusterConfiguration
kubernetesVersion: v1.18.2     # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
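
This file should stay consistent with the kubeadm-config ConfigMap that is also edited further below; a quick way to see whether the cluster-side copy still carries the old address (a sketch reusing the $oldip variable from the first step):

kubectl -n kube-system get cm kubeadm-config -o yaml | grep "$oldip"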
# Regenerate the certificates
kubeadm init phase certs etcd-healthcheck-client --config /etc/kubernetes/kubeadm-config.yaml
kubeadm init phase certs etcd-ca --config /etc/kubernetes/kubeadm-config.yaml
kubeadm init phase certs etcd-server --config /etc/kubernetes/kubeadm-config.yaml
kubeadm init phase certs etcd-peer --config /etc/kubernetes/kubeadm-config.yaml

kubeadm init phase certs all --config /etc/kubernetes/kubeadm-config.yaml
kubeadm init phase kubeconfig all --config /etc/kubernetes/kubeadm-config.yaml
kubeadm init phase control-plane all  --config /etc/kubernetes/kubeadm-config.yaml
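
One thing to be aware of: kubeadm init phase certs generally skips certificates whose files already exist under /etc/kubernetes/pki, so the apiserver certificate may keep its old SANs. A hedged sketch of forcing it to be re-issued (standard kubeadm file names; not part of the original write-up):

mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
kubeadm init phase certs apiserver --config /etc/kubernetes/kubeadm-config.yaml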

systemctl restart kubelet
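
Since the kubeconfig phase above rewrites /etc/kubernetes/admin.conf, the copy used by kubectl should be refreshed as well (same commands as in the kubeadm output quoted at the end of this post):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config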

After the restart at this point, kubectl get nodes works, but something odd shows up: the node IP is still the old one, and the kubelet service logs also report "node not found". I tried many approaches and could not resolve it.

Later, while fixing the connection between the worker nodes and the control plane, editing the apiserver certificate files unexpectedly solved this problem as well: after swapping in the certificate as described below, everything started behaving normally, which was a pleasant surprise.

Add the certificate entries for the node

Kubernetes notes (fixing "x509: certificate is valid for xxx, not yyy")

vi apiserver.ext
subjectAltName = DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,IP:10.96.0.1,IP:10.0.37.12,DNS:apiserver.cluster.local   # the entries at the end are the ones added
# Check the SANs currently present in the certificate
openssl x509 -noout -text -in apiserver.crt
# Generate a key pair
openssl genrsa -out apiserver.key 2048
# Generate a CSR
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr
# Sign it with the cluster CA, applying the SANs from apiserver.ext
openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 3650 -extfile apiserver.ext
# Then check again that the new certificate is valid
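
For example, a quick way to confirm the SANs in the re-signed certificate (standard openssl flags, shown here as a sketch):

openssl x509 -noout -text -in apiserver.crt | grep -A1 "Subject Alternative Name"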


Among these, DNS:apiserver.cluster.local is the entry we rely on.

Once it is added, the nodes show up correctly:

kubectl get nodes

# Get every ConfigMap in the kube-system namespace
➜ configmaps=$(kubectl -n kube-system get cm -o name | \
  awk '{print $1}' | \
  cut -d '/' -f 2)

# Dump each ConfigMap manifest to a temporary directory
➜ dir=$(mktemp -d)
➜ for cf in $configmaps; do
  kubectl -n kube-system get cm $cf -o yaml > $dir/$cf.yaml
done

# Find the ConfigMaps that still contain the old IP
➜ grep -Hn $dir/* -e $oldip

# Then edit those ConfigMaps, replacing the old IP with the new one
➜ kubectl -n kube-system edit cm kubeadm-config
➜ kubectl -n kube-system edit cm kube-proxy
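
Edits to the kube-proxy ConfigMap only take effect once the kube-proxy pods are recreated; a sketch of one way to do that, assuming a recent enough kubectl and the standard kubeadm DaemonSet name:

kubectl -n kube-system rollout restart daemonset kube-proxy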


Tips:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
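
After working through the steps above, it is worth confirming that the node now advertises the new address (a quick check, not from the original post):

kubectl get nodes -o wide   # INTERNAL-IP should now show 10.0.37.12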

References:

https://izsk.me/2021/01/20/Kubernetes-x509-not-ip/
