When node02 tried to rejoin the cluster, kubeadm reported:

[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: failed to request the cluster-info ConfigMap: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
To see the stack trace of this error execute with --v=5 or higher

Kubernetes version: 1.30

Solution

  • Check whether docker, cri-docker, and kubelet are running normally; if not, reset node02:
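The service health check in this step can be sketched like this (a minimal sketch; the systemd unit names docker, cri-docker, and kubelet match this setup — adjust them if yours differ):

```shell
# Check that the container runtime and kubelet systemd units are active
# before deciding whether node02 needs a reset.
check_services() {
  local failed=0
  for svc in "$@"; do
    if systemctl is-active --quiet "$svc"; then
      echo "$svc: active"
    else
      echo "$svc: NOT active"
      failed=1
    fi
  done
  return "$failed"
}

# Only meaningful on a systemd host; reports any unit that is down.
if command -v systemctl >/dev/null 2>&1; then
  check_services docker cri-docker kubelet || true
fi
```

If any unit is not active, try restarting it first; proceed to the reset below only when the services cannot be brought back to a healthy state.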
[root@k8s-node02 ~]# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
W1121 17:50:27.710003 4901 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1121 17:50:35.975078 4901 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.



  • Clean up the configuration files
# Following the reset output, clean up the Kubernetes config directories, the CNI config, and the iptables rules in turn
rm -rf /etc/kubernetes
rm -rf /etc/cni
rm -rf /var/lib/etcd   # skip if this directory was never created
iptables -F            # flushes the filter table only; flush nat/mangle too if you customized them
rm -rf ~/.kube
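A quick way to confirm the cleanup actually took effect (a sketch; the paths mirror the rm commands above, and `check_removed` is just an illustrative helper name):

```shell
# Report whether each path from the cleanup step is gone.
check_removed() {
  for p in "$@"; do
    if [ -e "$p" ]; then
      echo "$p: still present"
    else
      echo "$p: removed"
    fi
  done
}

check_removed /etc/kubernetes /etc/cni /var/lib/etcd "$HOME/.kube"
```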
  • Generate a new join token on the master node
[root@k8s-master ~]# kubeadm token create --print-join-command
  • Rejoin the cluster from node02
kubeadm join 192.168.1.13:6443 --token mvoxz2.0e6r09doqz4ilz28 \
--discovery-token-ca-cert-hash sha256:496c0b3dd0054507d26e0f1e0dcfc9b7fc5a3fa51dedca97ae4d5582b6db2612 \
--cri-socket=unix:///var/run/cri-dockerd.sock
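If you want to cross-check the --discovery-token-ca-cert-hash value, it can be recomputed on the master from the cluster CA certificate (the openssl pipeline is the standard kubeadm method; `ca_cert_hash` is a hypothetical helper name):

```shell
# Recompute the sha256 CA cert hash that kubeadm prints in the join command:
# extract the public key from the CA cert, DER-encode it, and hash it.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}'
}

# On the master (default kubeadm CA location):
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```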

If the node still cannot join, check time synchronization on every node. If the master node also has a clock problem, the whole cluster needs to be reset and rebuilt; the cleanup steps are the same as above.
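The clock-drift check between nodes can be sketched like this (an illustration only; the 30-second threshold and the ssh hostname are assumptions, not kubeadm constants — large skew breaks token and certificate validation during the join):

```shell
# Flag a clock drift between two Unix timestamps that exceeds a threshold.
# max_drift=30 is an assumed tolerance, not a kubeadm constant.
max_drift=30
drift_ok() {
  local t1=$1 t2=$2 diff
  diff=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
  [ "$diff" -le "$max_drift" ]
}

# Example: compare node02's clock with the master's (hostname assumed):
# master_ts=$(ssh k8s-master date +%s)
# if drift_ok "$(date +%s)" "$master_ts"; then echo "clock drift OK"; else echo "resync clocks first"; fi
```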