K8s Study Notes (Part 4)

Official documentation

POD

Modifying the certificate validity period

Manual renewal

Check the certificate expiration dates

kubeadm certs check-expiration
[root@master1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jan 04, 2026 07:16 UTC   364d            ca                      no
apiserver                  Jan 04, 2026 07:16 UTC   364d            ca                      no
apiserver-etcd-client      Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Jan 04, 2026 07:16 UTC   364d            ca                      no
controller-manager.conf    Jan 04, 2026 07:16 UTC   364d            ca                      no
etcd-healthcheck-client    Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
etcd-peer                  Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
etcd-server                Jan 04, 2026 07:16 UTC   364d            etcd-ca                 no
front-proxy-client         Jan 04, 2026 07:16 UTC   364d            front-proxy-ca          no
scheduler.conf             Jan 04, 2026 07:16 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 02, 2035 07:16 UTC   9y              no
etcd-ca                 Jan 02, 2035 07:16 UTC   9y              no
front-proxy-ca          Jan 02, 2035 07:16 UTC   9y              no

Renew the certificates

kubeadm certs renew all
[root@master1 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@master1 ~]#
  • kube-apiserver, kube-controller-manager, kube-scheduler and etcd need to be restarted so that they pick up the renewed certificates; one way to do this is sketched below.
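
On a kubeadm cluster these components run as static pods, so restarting them means restarting their containers or letting kubelet recreate them. A minimal sketch, assuming the Docker runtime used later in this setup (with containerd, use crictl instead):

# Restart the control-plane containers so they load the renewed certificates
docker ps --format '{{.ID}} {{.Names}}' \
  | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|etcd' \
  | awk '{print $1}' | xargs -r docker restart

# Alternative: move the static-pod manifests away briefly so kubelet recreates the pods
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
sleep 30
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests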

Compiling kubeadm (for reference)

Install Go

tar -xvzf go1.21.7.linux-amd64.tar.gz -C /usr/local/

export PATH=$PATH:/usr/local/go/bin
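
The export above only lasts for the current shell; to make it permanent and to verify the toolchain (assuming the install path used above):

echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
source /etc/profile
go version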

Download the K8S source code

yum install unzip -y
unzip kubernetes-1.28.2.zip
cd kubernetes-1.28.2

The certificate template code lives in cmd/kubeadm/app/util/pkiutil/pki_helpers.go,
and the validity period can be changed there.

Alternatively, modify the constant in the constants file it references:
cmd/kubeadm/app/constants/constants.go

After the change, recompile kubeadm

export  KUBE_BUILD_PLATFORMS=linux/amd64
make WHAT=cmd/kubeadm GOFLAGS=-v
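
After the build, the rebuilt binary lands under the repository's _output directory (the exact path below is an assumption and may differ by build mode); swap it in and re-issue the certificates:

# Back up the packaged kubeadm and replace it with the rebuilt one
cp /usr/bin/kubeadm /usr/bin/kubeadm.bak
cp _output/local/bin/linux/amd64/kubeadm /usr/bin/kubeadm

# Re-issue the certificates and confirm the new validity period
kubeadm certs renew all
kubeadm certs check-expiration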

Documentation

Highly available K8S cluster

Prerequisites

Set the IP address on all three nodes

Sample NIC configuration file

[root@lqz-test-demo ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33 
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="14449c29-9b36-4b3e-9182-79c6b5740780"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.16.200
GATEWAY=192.168.16.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8

After modifying, restart the network service

systemctl restart network
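
A quick check that the static address and gateway took effect (interface name and gateway as in the sample above):

ip a s ens33
ping -c 3 192.168.16.2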

Configure the yum repositories

cat > /etc/yum.repos.d/MyRepo.repo << EOF
[k8s-repo]
name=k8s repo
baseurl=http://192.168.16.110:9080/k8s
enabled=1
gpgcheck=0
skip_if_unavailable=1

[docker-repo]
name=docker repo
baseurl=http://192.168.16.110:9080/docker-ce
enabled=1
gpgcheck=0
skip_if_unavailable=1

[openstack-repo]
name=openstack repo
baseurl=http://192.168.16.110:9080/openstack
enabled=1
gpgcheck=0
skip_if_unavailable=1

[other-repo]
name=other repo
baseurl=http://192.168.16.110:9080/other-rpm
enabled=1
gpgcheck=0
skip_if_unavailable=1

EOF
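
Refresh the metadata to confirm the new repositories are reachable:

yum clean all && yum makecache
yum repolist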

Switch the firewall to iptables and set empty rules

systemctl  stop firewalld  &&  systemctl  disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Disable swap and SELinux

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Set the hostnames and mutual resolution in the hosts file

The three hosts are master1, master2 and node1

hostnamectl  set-hostname  master1
cat >> /etc/hosts <<EOF
192.168.16.200 master1
192.168.16.201 master2
192.168.16.202 node1
192.168.16.100 master.k8s.io
EOF
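
A quick check that every name resolves from the hosts file (the VIP name resolves even though 192.168.16.100 is not online yet):

for h in master1 master2 node1 master.k8s.io; do getent hosts $h; done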

Adjust kernel parameters for K8S

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# tcp_tw_recycle was removed in kernel 4.12; drop this key after upgrading to the 5.4 kernel
net.ipv4.tcp_tw_recycle=0
# Avoid using swap; only fall back to it when the system is about to OOM
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/kubernetes.conf
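
The net.bridge.* keys only exist once the br_netfilter module is loaded (it is loaded again in the IPVS step further below), so if sysctl -p complains about them, load the module and re-apply:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf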

Set the system time zone

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Stop services the system does not need

systemctl stop postfix && systemctl disable postfix

Configure rsyslogd and systemd journald

mkdir /var/log/journal # directory for persistent log storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk usage 10G
SystemMaxUse=10G

# Maximum size of a single log file 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

Upgrade the kernel

yum  install -y kernel-lt-5.4.278

Set the default boot kernel

grub2-set-default "CentOS Linux (5.4.278-1.el7.elrepo.x86_64) 7 (Core)"
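
The menu entry title has to match what GRUB actually generated, so it is worth listing the entries first and rebooting into the new kernel afterwards:

awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg   # list the available entries
grub2-editenv list                                      # confirm saved_entry
reboot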

Install the kernel development packages

# Run this after rebooting into the new 5.4 kernel, so that $(uname -r) matches it
yum install kernel-lt-devel-$(uname -r) kernel-lt-headers-$(uname -r)

Install keepalived/haproxy

Prerequisites for enabling IPVS in kube-proxy

modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
[root@master3 ~]# modprobe br_netfilter
[root@master3 ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack
> EOF
[root@master3 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 147456 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@master3 ~]#

Install keepalived

Install on the master nodes

yum install conntrack-tools  libseccomp  libtool-ltdl -y
yum install keepalived -y

Edit the keepalived configuration (/etc/keepalived/keepalived.conf)

master1

! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.16.100
    }

    track_script {
        check_haproxy
    }
}

master2

! Configuration File for keepalived

global_defs {
    router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.16.100
    }

    track_script {
        check_haproxy
    }
}
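
Both configurations use killall -0 as the haproxy health check; on a minimal CentOS install that binary comes from the psmisc package, so install it if it is missing:

yum install -y psmisc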

Start keepalived

systemctl start keepalived.service
systemctl enable keepalived.service
systemctl status keepalived.service

The virtual IP 192.168.16.100 is now present on master1

[root@master1 ~]# ip a s ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:93:f9:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.200/24 brd 192.168.16.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.16.100/32 scope global ens33
valid_lft forever preferred_lft forever
[root@master1 ~]#

Deploy haproxy

yum install  haproxy

Edit the configuration file

/etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


#---------------------------------------------------------------------
# kubernetes-apiserver
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode            tcp
    bind            *:16443
    option          tcplog
    default_backend kubernetes-apiserver

#---------------------------------------------------------------------
# kubernetes-apiserver roundrobin backend
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode     tcp
    balance  roundrobin
    server   master01.k8s.io 192.168.16.200:6443 check
    server   master02.k8s.io 192.168.16.201:6443 check

listen stats
    bind          *:1080
    stats auth    admin:Password
    stats refresh 5s
    stats realm   HAProxy\ Statistics
    stats uri     /admin?stats
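
Before starting the service, the configuration can be validated in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg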

Start haproxy

systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
[root@master2 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
Active: active (running) since 四 2025-01-02 22:45:31 CST; 1min 22s ago
Main PID: 2498 (haproxy-systemd)
Tasks: 3
Memory: 2.7M
CGroup: /system.slice/haproxy.service
├─2498 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─2499 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─2500 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

1月 02 22:45:31 master2 systemd[1]: Started HAProxy Load Balancer.
1月 02 22:45:31 master2 haproxy-systemd-wrapper[2498]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
1月 02 22:45:31 master2 haproxy-systemd-wrapper[2498]: [WARNING] 001/224531 (2499) : config : 'option forwardfor' ignored for frontend 'kubernetes-apiserver' as it requires HTTP mode.
1月 02 22:45:31 master2 haproxy-systemd-wrapper[2498]: [WARNING] 001/224531 (2499) : config : 'option forwardfor' ignored for backend 'kubernetes-apiserver' as it requires HTTP mode.
[root@master2 ~]#

Install Docker/K8S

Install Docker

yum install -y docker-ce-23.0.3-1.el7

Install cri-dockerd

rpm -Uvh http://192.168.16.110:9080/other/cri-dockerd-0.3.4-3.el7.x86_64.rpm

Docker registry address

sudo tee /etc/docker/daemon.json <<-'EOF'
{
"insecure-registries": ["http://192.168.16.110:20080"]
}
EOF
vi /usr/lib/systemd/system/cri-docker.service

Modify line 10, the ExecStart= line, to:
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=192.168.16.110:20080/k8s/pause:3.9
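
An equivalent non-interactive edit, if preferred (backs the unit file up first; the pattern assumes a single ExecStart= line as in the stock unit):

sed -i.bak 's|^ExecStart=.*|ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=192.168.16.110:20080/k8s/pause:3.9|' /usr/lib/systemd/system/cri-docker.service
grep ExecStart /usr/lib/systemd/system/cri-docker.service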

Start the Docker services

# Reload the systemd daemon
systemctl daemon-reload
# Start docker
systemctl start docker
# Start cri-dockerd
systemctl start cri-docker.socket cri-docker
# Check the status of the Docker components
systemctl status docker cri-docker.socket cri-docker

Enable starting at boot

systemctl enable docker
systemctl enable cri-docker.socket cri-docker

Install K8S

yum install -y kubelet-1.28.2  kubeadm-1.28.2  kubectl-1.28.2

Enable starting at boot

systemctl enable kubelet.service
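
Optionally, and assuming the private registry mirror laid out above, the control-plane images can be pre-pulled before kubeadm init so the init step does not wait on downloads:

kubeadm config images pull \
  --kubernetes-version=v1.28.2 \
  --image-repository=192.168.16.110:20080/k8s \
  --cri-socket=unix:///var/run/cri-dockerd.sock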

Deploy the master node

Run this on master1, the node that currently holds the VIP

kubeadm init \
--kubernetes-version=v1.28.2 \
--node-name=master1 \
--image-repository=192.168.16.110:20080/k8s \
--cri-socket=unix:///var/run/cri-dockerd.sock \
--control-plane-endpoint=master.k8s.io:16443 \
--apiserver-advertise-address=192.168.16.200 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-cert-extra-sans=master.k8s.io,master1,master2,192.168.16.200,192.168.16.201,192.168.16.100 \
--upload-certs

Output like the following indicates that K8S initialized successfully

....
....

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join master.k8s.io:16443 --token 4nmojs.681xcvm8x54no8wy \
--discovery-token-ca-cert-hash sha256:862cfb85269a09ac949b6a360f36ec2e057b20afdc1afe5e14894385215a5939 \
--control-plane --certificate-key 85155ac067abc41658b596654bbb9b2177adac09f22ce06fdddf94b2a329b071

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master.k8s.io:16443 --token 4nmojs.681xcvm8x54no8wy \
--discovery-token-ca-cert-hash sha256:862cfb85269a09ac949b6a360f36ec2e057b20afdc1afe5e14894385215a5939
[root@master1 ~]#


On master1, place the cluster config file in the user's default directory

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Node and pod information can now be queried

[root@master1 ~]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-644df797c-bhkln 0/1 Pending 0 115s <none> <none> <none> <none>
kube-system coredns-644df797c-ldbq4 0/1 Pending 0 115s <none> <none> <none> <none>
kube-system etcd-master1 1/1 Running 0 2m10s 192.168.16.200 master1 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 0 2m10s 192.168.16.200 master1 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 1 (42s ago) 2m10s 192.168.16.200 master1 <none> <none>
kube-system kube-proxy-c9vnz 1/1 Running 0 115s 192.168.16.200 master1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 1 (37s ago) 2m10s 192.168.16.200 master1 <none> <none>
[root@master1 ~]#

Join master2 as a control-plane node

kubeadm join master.k8s.io:16443 --token 4nmojs.681xcvm8x54no8wy \
--discovery-token-ca-cert-hash sha256:862cfb85269a09ac949b6a360f36ec2e057b20afdc1afe5e14894385215a5939 \
--control-plane --certificate-key 85155ac067abc41658b596654bbb9b2177adac09f22ce06fdddf94b2a329b071 \
--cri-socket=unix:///var/run/cri-dockerd.sock
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master2 ~]#

On master2, place the config file in the default directory

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master2 ~]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-644df797c-bhkln 0/1 Pending 0 3m43s <none> <none> <none> <none>
kube-system coredns-644df797c-ldbq4 0/1 Pending 0 3m43s <none> <none> <none> <none>
kube-system etcd-master1 1/1 Running 0 3m58s 192.168.16.200 master1 <none> <none>
kube-system etcd-master2 1/1 Running 0 2m40s 192.168.16.201 master2 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 0 3m58s 192.168.16.200 master1 <none> <none>
kube-system kube-apiserver-master2 1/1 Running 0 2m41s 192.168.16.201 master2 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 1 (2m30s ago) 3m58s 192.168.16.200 master1 <none> <none>
kube-system kube-controller-manager-master2 1/1 Running 0 2m41s 192.168.16.201 master2 <none> <none>
kube-system kube-proxy-blvsc 1/1 Running 0 2m42s 192.168.16.201 master2 <none> <none>
kube-system kube-proxy-c9vnz 1/1 Running 0 3m43s 192.168.16.200 master1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 1 (2m25s ago) 3m58s 192.168.16.200 master1 <none> <none>
kube-system kube-scheduler-master2 1/1 Running 0 2m42s 192.168.16.201 master2 <none> <none>
[root@master2 ~]#

Join node1 to the cluster as a worker

kubeadm join master.k8s.io:16443 --token 4nmojs.681xcvm8x54no8wy \
--discovery-token-ca-cert-hash sha256:862cfb85269a09ac949b6a360f36ec2e057b20afdc1afe5e14894385215a5939 \
--cri-socket=unix:///var/run/cri-dockerd.sock
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#
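
If the bootstrap token has expired by the time another node needs to join (tokens from kubeadm init are valid for 24 hours by default), a fresh worker join command can be printed on any control-plane node:

kubeadm token create --print-join-command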

Query nodes / pods

[root@master2 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane 7m20s v1.28.2
master2 NotReady control-plane 6m2s v1.28.2
node1 NotReady <none> 44s v1.28.2
[root@master2 ~]#
[root@master2 ~]# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-644df797c-bhkln 0/1 Pending 0 6m35s <none> <none> <none> <none>
kube-system coredns-644df797c-ldbq4 0/1 Pending 0 6m35s <none> <none> <none> <none>
kube-system etcd-master1 1/1 Running 0 6m50s 192.168.16.200 master1 <none> <none>
kube-system etcd-master2 1/1 Running 0 5m32s 192.168.16.201 master2 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 0 6m50s 192.168.16.200 master1 <none> <none>
kube-system kube-apiserver-master2 1/1 Running 0 5m33s 192.168.16.201 master2 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 1 (5m22s ago) 6m50s 192.168.16.200 master1 <none> <none>
kube-system kube-controller-manager-master2 1/1 Running 0 5m33s 192.168.16.201 master2 <none> <none>
kube-system kube-proxy-blvsc 1/1 Running 0 5m34s 192.168.16.201 master2 <none> <none>
kube-system kube-proxy-c9vnz 1/1 Running 0 6m35s 192.168.16.200 master1 <none> <none>
kube-system kube-proxy-tkzrl 1/1 Running 0 16s 192.168.16.202 node1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 1 (5m17s ago) 6m50s 192.168.16.200 master1 <none> <none>
kube-system kube-scheduler-master2 1/1 Running 0 5m34s 192.168.16.201 master2 <none> <none>
[root@master2 ~]#

Install the flannel component

kubectl apply -f  http://192.168.16.110:9080/other/kube-flannel.yml
[root@master1 ~]# kubectl apply -f  http://192.168.16.110:9080/other/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master1 ~]#

The nodes are now Ready, and the flannel pods have been added

[root@master2 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 9m22s v1.28.2
master2 Ready control-plane 8m4s v1.28.2
node1 Ready <none> 2m46s v1.28.2
[root@master2 ~]#
[root@master2 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-68r4c 1/1 Running 0 55s
kube-flannel kube-flannel-ds-7qwjb 1/1 Running 0 55s
kube-flannel kube-flannel-ds-xg44f 1/1 Running 0 55s
kube-system coredns-644df797c-bhkln 1/1 Running 0 9m3s
kube-system coredns-644df797c-ldbq4 1/1 Running 0 9m3s
kube-system etcd-master1 1/1 Running 0 9m18s
kube-system etcd-master2 1/1 Running 0 8m
kube-system kube-apiserver-master1 1/1 Running 0 9m18s
kube-system kube-apiserver-master2 1/1 Running 0 8m1s
kube-system kube-controller-manager-master1 1/1 Running 2 (90s ago) 9m18s
kube-system kube-controller-manager-master2 1/1 Running 0 8m1s
kube-system kube-proxy-blvsc 1/1 Running 0 8m2s
kube-system kube-proxy-c9vnz 1/1 Running 0 9m3s
kube-system kube-proxy-tkzrl 1/1 Running 0 2m44s
kube-system kube-scheduler-master1 1/1 Running 2 (86s ago) 9m18s
kube-system kube-scheduler-master2 1/1 Running 0 8m2s