Setting up an internal yum repository for k8s

On a machine with Internet access, add the docker-ce repository (using a domestic mirror)

Install the yum tools:

yum install -y yum-utils createrepo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
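Optionally verify the repo is usable before downloading anything (a quick check; the version list will vary by mirror):

# Confirm the repo is registered and list the versions available for download
yum repolist | grep docker-ce
yum list docker-ce --showduplicates | tail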

Alternatively, use the Aliyun repository directly:

[root@lqz-test-demo yum.repos.d]# cat docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
module_hotfixes=true
[root@lqz-test-demo yum.repos.d]#

Add the Docker package repository and enable the desired channel

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# The old "edge" channel has been retired; "test" is the current pre-release channel.
# The stable channel is enabled by default, so this step is optional:
yum-config-manager --enable docker-ce-test
# Replace with the Tsinghua mirror
sed -i 's+https://download.docker.com+https://mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo

The dependency relationship between k8s and Docker versions is as follows:

Kubernetes 1.24.6+ --> Dockershim removed from the kubelet. With the dockershim removal, core Kubernetes no longer tracks the latest validated version of Docker. After its deprecation in v1.20, the dockershim component has been removed; from v1.24 onwards you must either use one of the other supported runtimes (such as containerd or CRI-O) or use cri-dockerd if you rely on Docker Engine as your container runtime.
Kubernetes 1.23.12 --> The kubelet now supports the CRI v1 API, which is the project-wide default. If a container runtime does not support the v1 API, Kubernetes falls back to the v1alpha2 implementation. No action is required by end users, because v1 and v1alpha2 do not differ in their implementation; v1alpha2 is likely to be removed in a future Kubernetes release.
Kubernetes 1.22.15 --> Removed the automatic detection and matching of cgroup drivers for Docker. On new clusters, if the cgroup driver is not configured explicitly the kubelet may fail on a driver mismatch (kubeadm clusters should use the "systemd" driver). Added a unified map on CRI to support cgroup v2; see https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#unified.
Kubernetes 1.21.14 --> Updated the latest validated version of Docker to 20.10. Official support for building Kubernetes with docker-machine / remote Docker was removed; building Kubernetes with Docker locally is unaffected.
Kubernetes 1.20.15 --> Docker as an underlying runtime is deprecated. Docker-produced images continue to work in your cluster with all runtimes, as they always have.
Kubernetes 1.19.16 --> Updated the opencontainers/runtime-spec dependency to v1.0.2.
Kubernetes 1.18.20 --> 【?】
Kubernetes 1.17.17 --> Updated the latest validated version of Docker to 19.03.
Kubernetes 1.16.15 --> Docker 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.15.12 --> Docker 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.14.10 --> Docker 1.13.1, 17.03, 17.06, 17.09, 18.06, 18.09
Kubernetes 1.13.12 --> Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06
Kubernetes 1.12.10 --> Docker 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, 18.06
Kubernetes 1.11.10 --> Docker 1.11.2 to 1.13.1 and 17.03.x
Kubernetes 1.10.13 --> Docker 1.11.2 to 1.13.1 and 17.03.x
Kubernetes 1.9.1 --> Docker 1.11.2 to 1.13.1 and 17.03.x
Kubernetes 1.8.15 --> Docker 1.11.2, 1.12.6, 1.13.1, and 17.03.2 (has known issues)
Kubernetes 1.7.16 --> Docker 1.10.3, 1.11.2, 1.12.6 (has known issues)
Kubernetes 1.6.13 --> Docker 1.10.3, 1.11.2, 1.12.6; dropped support for Docker 1.9.x
Kubernetes 1.5.8 --> Docker 1.10.3 - 1.12.3

After k8s 1.24, cri-dockerd is required for the kubelet to communicate with Docker.
Given the dependency relationship above, if you use the latest version of Docker you also need to download cri-dockerd.

Download docker-ce with yum and build a local yum repository

# Latest version
yumdownloader --resolve docker-ce-23.0.3-1.el7 --destdir /mnt/docker-ce
# Or pin a historical version
# yumdownloader --resolve docker-ce-20.10.7-3.el7 --destdir /mnt/docker-ce
createrepo -d /mnt/docker-ce
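createrepo writes its metadata into a repodata/ directory; a quick sanity check (optional):

# repomd.xml plus the package databases should be present
ls /mnt/docker-ce/repodata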

On a machine with Internet access, add the k8s repository (using the Aliyun mirror)

Add the k8s package mirror provided by Aliyun:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
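A cache refresh confirms the repo definition parses and the mirror and GPG keys are reachable (optional check):

yum makecache
yum list kubeadm --showduplicates | tail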

Download the k8s packages with yum and build a local yum repository

yumdownloader --resolve kubeadm-1.28.2 --destdir /mnt/k8s
# Or pin a historical version
# yumdownloader --resolve kubeadm-1.21.14 --destdir /mnt/k8s
createrepo -d /mnt/k8s
After k8s 1.24, cri-dockerd is needed for Docker to communicate with the kubelet, so add its rpm to the local repository as well:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el7.x86_64.rpm
mv cri-dockerd-0.3.4-3.el7.x86_64.rpm /mnt/docker-ce
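Because this rpm was added after createrepo ran, regenerate the metadata so yum clients can see it too (otherwise it can only be fetched directly over HTTP, as is done below):

# Refresh the repo metadata to pick up the newly added rpm
createrepo --update /mnt/docker-ce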

Install nginx to serve the yum repositories over HTTP

(Installation steps omitted.)

nginx configuration file:

server {
    listen 9080;
    server_name 192.168.16.130;
    server_name yum.serve;
    server_name public.serve;
    server_name package.wcj.me;
    access_log /usr/local/nginx/logs/share.access.log;
    error_log /usr/local/nginx/logs/share.error.log;

    location / {
        root /mnt;
        autoindex on;
        autoindex_exact_size on;
        autoindex_localtime on;
    }
}
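After reloading nginx, the listing should be reachable from any client on the LAN (a quick check against the address configured above):

nginx -s reload
# Both repo directories should appear in the autoindex page
curl -s http://192.168.16.130:9080/
curl -sI http://192.168.16.130:9080/docker-ce/repodata/repomd.xml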

Configure the internal yum repositories on the client hosts

[root@lqz-test-demo yum.repos.d]# cat MyRepo.repo 
[k8s-repo]
name=k8s repo
baseurl=http://192.168.16.130:9080/k8s
enabled=1
gpgcheck=0
skip_if_unavailable=1

[docker-repo]
name=docker repo
baseurl=http://192.168.16.130:9080/docker-ce
enabled=1
gpgcheck=0
skip_if_unavailable=1

[root@lqz-test-demo yum.repos.d]#
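On the client, rebuild the yum cache and confirm that both internal repos answer (optional check):

yum clean all
yum makecache
yum repolist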

Install Docker from the internal yum repository

Install Docker:

[root@lqz-test-demo yum.repos.d]# yum install -y docker-ce
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
docker-repo | 2.9 kB 00:00:00
k8s-repo | 2.9 kB 00:00:00
(1/2): docker-repo/primary_db | 13 kB 00:00:00
(2/2): k8s-repo/primary_db | 7.7 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 3:26.1.4-1.el7 will be installed
--> Processing Dependency: containerd.io >= 1.6.24 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: container-selinux >= 2:2.74 for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: libcgroup for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-rootless-extras for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-ce-cli for package: 3:docker-ce-26.1.4-1.el7.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
--> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.119.2-1.911c772.el7_8.noarch
---> Package containerd.io.x86_64 0:1.6.33-3.1.el7 will be installed
---> Package docker-ce-cli.x86_64 1:26.1.4-1.el7 will be installed
--> Processing Dependency: docker-compose-plugin for package: 1:docker-ce-cli-26.1.4-1.el7.x86_64
--> Processing Dependency: docker-buildx-plugin for package: 1:docker-ce-cli-26.1.4-1.el7.x86_64
---> Package docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7 will be installed
--> Processing Dependency: slirp4netns >= 0.4 for package: docker-ce-rootless-extras-26.1.4-1.el7.x86_64
--> Processing Dependency: fuse-overlayfs >= 0.7 for package: docker-ce-rootless-extras-26.1.4-1.el7.x86_64
---> Package libcgroup.x86_64 0:0.41-21.el7 will be installed
--> Running transaction check
---> Package docker-buildx-plugin.x86_64 0:0.14.1-1.el7 will be installed
---> Package docker-compose-plugin.x86_64 0:2.27.1-1.el7 will be installed
---> Package fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 will be installed
--> Processing Dependency: libfuse3.so.3(FUSE_3.2)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3(FUSE_3.0)(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
--> Processing Dependency: libfuse3.so.3()(64bit) for package: fuse-overlayfs-0.7.2-6.el7_8.x86_64
---> Package policycoreutils-python.x86_64 0:2.5-34.el7 will be installed
--> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
--> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-34.el7.x86_64
---> Package slirp4netns.x86_64 0:0.4.3-4.el7_8 will be installed
--> Running transaction check
---> Package audit-libs-python.x86_64 0:2.8.5-4.el7 will be installed
---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
---> Package fuse3-libs.x86_64 0:3.6.1-4.el7 will be installed
---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
docker-ce x86_64 3:26.1.4-1.el7 docker-repo 27 M
Installing for dependencies:
audit-libs-python x86_64 2.8.5-4.el7 docker-repo 76 k
checkpolicy x86_64 2.5-8.el7 docker-repo 295 k
container-selinux noarch 2:2.119.2-1.911c772.el7_8 docker-repo 40 k
containerd.io x86_64 1.6.33-3.1.el7 docker-repo 35 M
docker-buildx-plugin x86_64 0.14.1-1.el7 docker-repo 14 M
docker-ce-cli x86_64 1:26.1.4-1.el7 docker-repo 15 M
docker-ce-rootless-extras x86_64 26.1.4-1.el7 docker-repo 9.4 M
docker-compose-plugin x86_64 2.27.1-1.el7 docker-repo 13 M
fuse-overlayfs x86_64 0.7.2-6.el7_8 docker-repo 54 k
fuse3-libs x86_64 3.6.1-4.el7 docker-repo 82 k
libcgroup x86_64 0.41-21.el7 docker-repo 66 k
libsemanage-python x86_64 2.5-14.el7 docker-repo 113 k
policycoreutils-python x86_64 2.5-34.el7 docker-repo 457 k
python-IPy noarch 0.75-6.el7 docker-repo 32 k
setools-libs x86_64 3.3.8-4.el7 docker-repo 620 k
slirp4netns x86_64 0.4.3-4.el7_8 docker-repo 81 k

Transaction Summary
================================================================================
Install 1 Package (+16 Dependent packages)

Total download size: 116 M
Installed size: 407 M
Downloading packages:
(1/17): audit-libs-python-2.8.5-4.el7.x86_64.rpm | 76 kB 00:00:00
(2/17): checkpolicy-2.5-8.el7.x86_64.rpm | 295 kB 00:00:00
(3/17): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm | 40 kB 00:00:00
(4/17): docker-buildx-plugin-0.14.1-1.el7.x86_64.rpm | 14 MB 00:00:00
(5/17): docker-ce-26.1.4-1.el7.x86_64.rpm | 27 MB 00:00:00
(6/17): containerd.io-1.6.33-3.1.el7.x86_64.rpm | 35 MB 00:00:01
(7/17): docker-ce-cli-26.1.4-1.el7.x86_64.rpm | 15 MB 00:00:00
(8/17): docker-ce-rootless-extras-26.1.4-1.el7.x86_64.rpm | 9.4 MB 00:00:00
(9/17): fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm | 54 kB 00:00:00
(10/17): docker-compose-plugin-2.27.1-1.el7.x86_64.rpm | 13 MB 00:00:00
(11/17): fuse3-libs-3.6.1-4.el7.x86_64.rpm | 82 kB 00:00:00
(12/17): libcgroup-0.41-21.el7.x86_64.rpm | 66 kB 00:00:00
(13/17): libsemanage-python-2.5-14.el7.x86_64.rpm | 113 kB 00:00:00
(14/17): python-IPy-0.75-6.el7.noarch.rpm | 32 kB 00:00:00
(15/17): policycoreutils-python-2.5-34.el7.x86_64.rpm | 457 kB 00:00:00
(16/17): slirp4netns-0.4.3-4.el7_8.x86_64.rpm | 81 kB 00:00:00
(17/17): setools-libs-3.3.8-4.el7.x86_64.rpm | 620 kB 00:00:00
--------------------------------------------------------------------------------
Total 52 MB/s | 116 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libcgroup-0.41-21.el7.x86_64 1/17
Installing : setools-libs-3.3.8-4.el7.x86_64 2/17
Installing : audit-libs-python-2.8.5-4.el7.x86_64 3/17
Installing : fuse3-libs-3.6.1-4.el7.x86_64 4/17
Installing : fuse-overlayfs-0.7.2-6.el7_8.x86_64 5/17
Installing : slirp4netns-0.4.3-4.el7_8.x86_64 6/17
Installing : libsemanage-python-2.5-14.el7.x86_64 7/17
Installing : python-IPy-0.75-6.el7.noarch 8/17
Installing : docker-buildx-plugin-0.14.1-1.el7.x86_64 9/17
Installing : checkpolicy-2.5-8.el7.x86_64 10/17
Installing : policycoreutils-python-2.5-34.el7.x86_64 11/17
Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 12/17
setsebool: SELinux is disabled.
Installing : containerd.io-1.6.33-3.1.el7.x86_64 13/17
Installing : docker-compose-plugin-2.27.1-1.el7.x86_64 14/17
Installing : 1:docker-ce-cli-26.1.4-1.el7.x86_64 15/17
Installing : docker-ce-rootless-extras-26.1.4-1.el7.x86_64 16/17
Installing : 3:docker-ce-26.1.4-1.el7.x86_64 17/17
Verifying : docker-compose-plugin-2.27.1-1.el7.x86_64 1/17
Verifying : checkpolicy-2.5-8.el7.x86_64 2/17
Verifying : docker-buildx-plugin-0.14.1-1.el7.x86_64 3/17
Verifying : python-IPy-0.75-6.el7.noarch 4/17
Verifying : fuse-overlayfs-0.7.2-6.el7_8.x86_64 5/17
Verifying : libsemanage-python-2.5-14.el7.x86_64 6/17
Verifying : slirp4netns-0.4.3-4.el7_8.x86_64 7/17
Verifying : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch 8/17
Verifying : containerd.io-1.6.33-3.1.el7.x86_64 9/17
Verifying : 3:docker-ce-26.1.4-1.el7.x86_64 10/17
Verifying : policycoreutils-python-2.5-34.el7.x86_64 11/17
Verifying : docker-ce-rootless-extras-26.1.4-1.el7.x86_64 12/17
Verifying : fuse3-libs-3.6.1-4.el7.x86_64 13/17
Verifying : audit-libs-python-2.8.5-4.el7.x86_64 14/17
Verifying : setools-libs-3.3.8-4.el7.x86_64 15/17
Verifying : 1:docker-ce-cli-26.1.4-1.el7.x86_64 16/17
Verifying : libcgroup-0.41-21.el7.x86_64 17/17

Installed:
docker-ce.x86_64 3:26.1.4-1.el7

Dependency Installed:
audit-libs-python.x86_64 0:2.8.5-4.el7 checkpolicy.x86_64 0:2.5-8.el7 container-selinux.noarch 2:2.119.2-1.911c772.el7_8 containerd.io.x86_64 0:1.6.33-3.1.el7 docker-buildx-plugin.x86_64 0:0.14.1-1.el7 docker-ce-cli.x86_64 1:26.1.4-1.el7 docker-ce-rootless-extras.x86_64 0:26.1.4-1.el7 docker-compose-plugin.x86_64 0:2.27.1-1.el7
fuse-overlayfs.x86_64 0:0.7.2-6.el7_8 fuse3-libs.x86_64 0:3.6.1-4.el7 libcgroup.x86_64 0:0.41-21.el7 libsemanage-python.x86_64 0:2.5-14.el7 policycoreutils-python.x86_64 0:2.5-34.el7 python-IPy.noarch 0:0.75-6.el7 setools-libs.x86_64 0:3.3.8-4.el7 slirp4netns.x86_64 0:0.4.3-4.el7_8

Complete!

Configure the Docker registry mirror address:

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://c12xt3od.mirror.aliyuncs.com"]
}
EOF
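dockerd reads daemon.json at startup, and Docker has not been started yet at this point, so no restart is needed. Once it is running, the mirror can be verified:

# The configured mirror should appear under "Registry Mirrors:"
docker info | grep -A 1 'Registry Mirrors'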

Install cri-dockerd

# On an intranet client, fetch the rpm from the internal server (it was copied to /mnt/docker-ce earlier) instead of GitHub
wget http://192.168.16.130:9080/docker-ce/cri-dockerd-0.3.4-3.el7.x86_64.rpm
rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm

Modify the cri-dockerd service file

vi /usr/lib/systemd/system/cri-docker.service

On line 10, change the ExecStart= line to:
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7
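Equivalently, the edit can be scripted (a sketch; double-check the resulting unit file before reloading):

sed -i 's|^ExecStart=.*|ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7|' /usr/lib/systemd/system/cri-docker.service
systemctl daemon-reload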

Start the Docker services

# Reload the systemd daemon
systemctl daemon-reload
# Start docker
systemctl start docker
# Start cri-dockerd
systemctl start cri-docker.socket cri-docker
# Check the status of the Docker components
systemctl status docker cri-docker.socket cri-docker
[root@lqz-test-demo pkg]# systemctl status docker cri-docker.socket cri-docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2024-11-09 18:26:02 CST; 18s ago
Docs: https://docs.docker.com
Main PID: 5604 (dockerd)
Tasks: 8
Memory: 30.7M
CGroup: /system.slice/docker.service
└─5604 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Nov 09 18:26:01 lqz-test-demo systemd[1]: Starting Docker Application Container Engine...
Nov 09 18:26:01 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:01.811857230+08:00" level=info msg="Starting up"
Nov 09 18:26:01 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:01.912240547+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Nov 09 18:26:01 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:01.912362073+08:00" level=info msg="Loading containers: start."
Nov 09 18:26:02 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:02.085326012+08:00" level=info msg="Default bridge (docker0) is assigned with an IP... address"
Nov 09 18:26:02 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:02.108110919+08:00" level=info msg="Loading containers: done."
Nov 09 18:26:02 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:02.153201346+08:00" level=info msg="Docker daemon" commit=de5c9cf containerd-snapsh...on=26.1.4
Nov 09 18:26:02 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:02.153251449+08:00" level=info msg="Daemon has completed initialization"
Nov 09 18:26:02 lqz-test-demo dockerd[5604]: time="2024-11-09T18:26:02.171517475+08:00" level=info msg="API listen on /run/docker.sock"
Nov 09 18:26:02 lqz-test-demo systemd[1]: Started Docker Application Container Engine.

● cri-docker.service - CRI Interface for Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2024-11-09 18:21:26 CST; 4min 53s ago
Docs: https://docs.mirantis.com
Main PID: 5274 (cri-dockerd)
Tasks: 7
Memory: 13.8M
CGroup: /system.slice/cri-docker.service
└─5274 /usr/bin/cri-dockerd --container-runtime-endpoint fd://

Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Hairpin mode is set to none"
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Loaded network plugin cni"
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Docker cri networking managed by network plugin cni"
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Docker Info: &{ID:fa96dcad-6ef9-462e-980d-7a3906d9de9...] [Native
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Setting cgroupDriver cgroupfs"
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{Net...idr:,},}"
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Nov 09 18:21:26 lqz-test-demo cri-dockerd[5274]: time="2024-11-09T18:21:26+08:00" level=info msg="Start cri-dockerd grpc backend"
Nov 09 18:21:26 lqz-test-demo systemd[1]: Started CRI Interface for Docker Application Container Engine.
Nov 09 18:21:51 lqz-test-demo systemd[1]: Current command vanished from the unit file, execution of the command list won't be resumed.
Hint: Some lines were ellipsized, use -l to show in full.

Enable the services at boot

sudo systemctl enable docker cri-docker.socket cri-docker

Install k8s from the internal yum repository

# Run on node1
sudo hostnamectl set-hostname node1
# Run on node2
sudo hostnamectl set-hostname node2
# Run on the master
sudo hostnamectl set-hostname master
cat >> /etc/hosts << EOF
192.168.16.200 master
192.168.16.201 node1
192.168.16.202 node2
EOF
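The transcripts below assume hosts already prepared for kubeadm. On a fresh CentOS 7 machine, the usual prerequisites (not covered in the original steps) look roughly like this; adjust to your environment and security policy:

# Disable swap (kubeadm's preflight checks fail with swap on)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl --system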

Install the k8s packages (run on the master and on every node)

yum install -y kubelet-1.28.2  kubeadm-1.28.2  kubectl-1.28.2

Enable kubelet at boot

systemctl enable kubelet.service

Initialize the cluster (master node only)

--image-repository can point to an internal registry address.

kubeadm init --kubernetes-version=v1.28.2 --node-name=master --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=192.168.16.200 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

# kubeadm init --kubernetes-version=v1.28.2 --node-name=centos7-container1 --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=172.0.1.4 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12

# If the cluster needs to be re-initialized, reset it first:
# kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
[root@lqz-test-demo manifests]# kubeadm init   --kubernetes-version=v1.28.2 --node-name=master --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=192.168.16.200 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] Using Kubernetes version: v1.28.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1109 18:57:27.300778 8553 checks.go:835] detected that the sandbox image "registry.aliyuncs.com/google_containers/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.16.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.16.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.16.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.001834 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 0iszyq.xf2g3oa5do0o3gry
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.16.200:6443 --token 0iszyq.xf2g3oa5do0o3gry \
--discovery-token-ca-cert-hash sha256:6a3e35ebab27d38d61a930657ec9cc8ffce11630c3d10f819a0c1aed2a6adf83

Configure kubectl access to the cluster (master node)

# Check that the following two entries exist:
# admin.conf manifests
ls /etc/kubernetes/

# Add admin.conf to the environment so it takes effect permanently
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

# Reload the profile
source ~/.bash_profile

Join the cluster (run on both node hosts)

Note: replace the token with the one printed in your own init log, and add the --cri-socket parameter.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.16.200:6443 --token 0iszyq.xf2g3oa5do0o3gry \
--discovery-token-ca-cert-hash sha256:6a3e35ebab27d38d61a930657ec9cc8ffce11630c3d10f819a0c1aed2a6adf83
kubeadm join 192.168.16.200:6443 --token 0iszyq.xf2g3oa5do0o3gry --discovery-token-ca-cert-hash sha256:6a3e35ebab27d38d61a930657ec9cc8ffce11630c3d10f819a0c1aed2a6adf83 --cri-socket unix:///var/run/cri-dockerd.sock
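If the bootstrap token has expired (tokens are valid for 24 hours by default), print a fresh join command on the master:

kubeadm token create --print-join-command
# Remember to append --cri-socket unix:///var/run/cri-dockerd.sock to the printed command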

Output like the following means the worker node joined successfully:

[root@node1 ~]# kubeadm join 192.168.16.200:6443 --token 0iszyq.xf2g3oa5do0o3gry --discovery-token-ca-cert-hash sha256:6a3e35ebab27d38d61a930657ec9cc8ffce11630c3d10f819a0c1aed2a6adf83 --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@node1 ~]#

A network plugin must still be installed before the nodes can communicate (master node)

kubectl get nodes
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   21m   v1.28.2
node1    NotReady   <none>          63s   v1.28.2
node2    NotReady   <none>          61s   v1.28.2
[root@master ~]#

Install the network plugin (master node)

Download the manifest:

wget https://github.com/flannel-io/flannel/releases/download/v0.22.3/kube-flannel.yml

The content of kube-flannel.yml is shown below (stage the flannel images in the internal Harbor beforehand).
If the download fails, copy the following; note that the indentation must match exactly.

apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.22.3
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.22.3
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

Apply the flannel manifest (on the master node)

kubectl apply -f kube-flannel.yml

Expected output:

[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the pods; the flannel images have not been pulled yet (Init:ImagePullBackOff):

[root@master ~]# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS                  RESTARTS   AGE
kube-flannel   kube-flannel-ds-cz2nl            0/1     Init:ImagePullBackOff   0          15m
kube-flannel   kube-flannel-ds-fvc6s            0/1     Init:ImagePullBackOff   0          15m
kube-flannel   kube-flannel-ds-t6nj4            0/1     Init:ImagePullBackOff   0          15m
kube-system    coredns-66f779496c-mgdkr         0/1     Pending                 0          38m
kube-system    coredns-66f779496c-rp7c8         0/1     Pending                 0          38m
kube-system    etcd-master                      1/1     Running                 0          39m
kube-system    kube-apiserver-master            1/1     Running                 0          39m
kube-system    kube-controller-manager-master   1/1     Running                 0          39m
kube-system    kube-proxy-47xsf                 1/1     Running                 0          19m
kube-system    kube-proxy-4rgzh                 1/1     Running                 0          19m
kube-system    kube-proxy-gf8hr                 1/1     Running                 0          38m
kube-system    kube-scheduler-master            1/1     Running                 0          39m
[root@master ~]#
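On a fully offline cluster the pulls from docker.io will never succeed on their own; the images staged in the internal Harbor (or exported tarballs) have to be made available on every node. A sketch, assuming the tarballs were produced earlier with docker save on an Internet-connected machine:

# On an Internet-connected machine:
docker pull docker.io/flannel/flannel:v0.22.3
docker pull docker.io/flannel/flannel-cni-plugin:v1.2.0
docker save -o flannel-v0.22.3.tar docker.io/flannel/flannel:v0.22.3
docker save -o flannel-cni-plugin-v1.2.0.tar docker.io/flannel/flannel-cni-plugin:v1.2.0

# On every cluster node:
docker load -i flannel-v0.22.3.tar
docker load -i flannel-cni-plugin-v1.2.0.tar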

Once the images are available, the pods come up (Running):

[root@master ~]# kubectl get pod -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-c22v9            1/1     Running   0          6s
kube-flannel   kube-flannel-ds-vv4c8            1/1     Running   0          6s
kube-flannel   kube-flannel-ds-zqllr            1/1     Running   0          6s
kube-system    coredns-66f779496c-mgdkr         0/1     Pending   0          41m
kube-system    coredns-66f779496c-rp7c8         0/1     Pending   0          41m
kube-system    etcd-master                      1/1     Running   0          41m
kube-system    kube-apiserver-master            1/1     Running   0          41m
kube-system    kube-controller-manager-master   1/1     Running   0          41m
kube-system    kube-proxy-47xsf                 1/1     Running   0          21m
kube-system    kube-proxy-4rgzh                 1/1     Running   0          21m
kube-system    kube-proxy-gf8hr                 1/1     Running   0          41m
kube-system    kube-scheduler-master            1/1     Running   0          41m
[root@master ~]#

Check the node status; all nodes are now Ready:

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   42m   v1.28.2
node1    Ready    <none>          22m   v1.28.2
node2    Ready    <none>          22m   v1.28.2
[root@master ~]#

The k8s cluster installation is complete.