[root@master1 ~]# kubeadm certs chec-expiration
invalid subcommand "chec-expiration"
See 'kubeadm certs -h' for help and examples
[root@master1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 04, 2026 07:16 UTC   364d            ca                      no
apiserver                  Jan 04, 2026 07:16 UTC   364d            ca                      no
apiserver-etcd-client      Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 04, 2026 07:16 UTC   364d            ca                      no
controller-manager.conf    Jan 04, 2026 07:16 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
etcd-server                Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 04, 2026 07:16 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 04, 2026 07:16 UTC   364d            ca                      no
CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 02, 2035 07:16 UTC   9y              no
etcd-ca                 Jan 02, 2035 07:16 UTC   9y              no
front-proxy-ca          Jan 02, 2035 07:16 UTC   9y              no
Certificate renewal
kubeadm certs renew all
[root@master1 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@master1 ~]#
The kube-apiserver, kube-controller-manager, kube-scheduler and etcd now need to be restarted so they pick up the renewed certificates.
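One common way to do this on a kubeadm cluster is to briefly move the static pod manifests out of the manifest directory so the kubelet tears the control-plane pods down, then move them back. This is a sketch, assuming the default kubeadm layout with manifests under /etc/kubernetes/manifests:

# Assumption: default kubeadm layout, static pod manifests in /etc/kubernetes/manifests
cd /etc/kubernetes
mv manifests manifests.bak    # kubelet stops kube-apiserver, controller-manager, scheduler and etcd
sleep 20                      # give the kubelet time to remove the pods
mv manifests.bak manifests    # kubelet recreates the pods, now using the renewed certificates
# once the API server is back, confirm the components are running again
kubectl get pods -n kube-system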
Building kubeadm from source (for reference)
Install Go
tar -xvzf go1.21.7.linux-amd64.tar.gz -C /usr/local/
export PATH=$PATH:/usr/local/go/bin
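The export above only lasts for the current shell. To keep Go on the PATH across logins (assuming /etc/profile is sourced at login on this system), append it there and verify the toolchain:

# make the PATH change persistent, then confirm the toolchain works
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
source /etc/profile
go version    # should report go1.21.7 linux/amd64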
Download the Kubernetes source code
yum install unzip -y
unzip kubernetes-1.28.2.zip
cd kubernetes-1.28.2
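From inside the source tree, the usual way to build only the kubeadm binary is the repository's make target. This is a sketch; the exact output path and any source patches (e.g. extending certificate validity) depend on your environment:

# build just the kubeadm binary from the checked-out source
make all WHAT=cmd/kubeadm GOFLAGS=-v
# the binary typically ends up under _output/bin/ (or _output/local/bin/linux/amd64/)
_output/bin/kubeadm version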
systemctl start keepalived.service
systemctl enable keepalived.service
systemctl status keepalived.service
The virtual IP 192.168.16.100 is now bound on master1:
[root@master1 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:93:f9:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.200/24 brd 192.168.16.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.16.100/32 scope global ens33
       valid_lft forever preferred_lft forever
[root@master1 ~]#
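A quick way to confirm that keepalived will actually fail the VIP over (a sketch, assuming a second master runs keepalived in BACKUP state on the same interface name) is to stop the service on master1 and watch the address move:

# on master1: stop keepalived and confirm the VIP leaves the interface
systemctl stop keepalived.service
ip a s ens33 | grep 192.168.16.100    # should now print nothing

# on the backup master: the VIP should appear within a few seconds
ip a s ens33 | grep 192.168.16.100

# restore master1 afterwards
systemctl start keepalived.service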
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
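The stock file stops at the defaults section; for the HA control plane it still needs a TCP frontend listening on the VIP and a backend pointing at each master's kube-apiserver. A minimal sketch follows; the listen port 16443 and the master addresses 192.168.16.200/201/202 are assumptions and must be adjusted to your own nodes:

#---------------------------------------------------------------------
# kube-apiserver frontend/backend (sketch - adjust port and server IPs)
#---------------------------------------------------------------------
frontend kube-apiserver
    mode tcp
    bind *:16443
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    balance roundrobin
    server master1 192.168.16.200:6443 check
    server master2 192.168.16.201:6443 check
    server master3 192.168.16.202:6443 check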
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
Please note that the certificate-key gives access to cluster sensitive data, keep it secret! As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
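The join commands printed here rely on a bootstrap token (valid 24 hours by default) and the uploaded certificate key (deleted after two hours, as noted above). If they have lapsed, they can be regenerated later with standard kubeadm subcommands; a sketch:

# print a fresh worker join command (creates a new bootstrap token)
kubeadm token create --print-join-command

# re-upload the control-plane certificates and print a new certificate key;
# append it to the worker join command as: --control-plane --certificate-key <key>
kubeadm init phase upload-certs --upload-certs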
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
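The commands kubeadm prints at this point are the usual kubeconfig setup (shown here as the standard sequence, not output captured from this cluster):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config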
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@master1 ~]# kubectl apply -f http://192.168.16.110:9080/other/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master1 ~]#
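Once the flannel DaemonSet has rolled out, the nodes should move to Ready; a quick check (node and pod names will differ in your cluster):

# flannel pods should be Running on every node
kubectl get pods -n kube-flannel -o wide

# all control-plane and worker nodes should report Ready once the CNI is up
kubectl get nodes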