Storage: ConfigMap. The ConfigMap feature was introduced in Kubernetes 1.2. Many applications read configuration from config files, command-line arguments, or environment variables. The ConfigMap API provides a mechanism for injecting configuration information into containers; a ConfigMap can hold a single property, an entire configuration file, or a JSON blob.
Creating a ConfigMap: from a directory. Create the configuration files; several config files are placed in the directory /data/app/k8s/configMap/configs.
[root@master configMap]# cat /data/app/k8s/configMap/configs/myConfig1.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
[root@master configMap]# cat /data/app/k8s/configMap/configs/myConfig2.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
Create the ConfigMap
kubectl create configmap my-configs --from-file=/data/app/k8s/configMap/configs
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 [root@master configMap]# kubectl create configmap my-configs --from-file=/data/app/k8s/configMap/configs configmap/my-configs created [root@master configMap]# kubectl get configmaps NAME DATA AGE kube-root-ca.crt 1 45d my-configs 2 66s [root@master configMap]# kubectl describe configmaps my-configs Name: my-configs Namespace: default Labels: <none> Annotations: <none> Data ==== myConfig1.properties: ---- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 myConfig2.properties: ---- color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice BinaryData ==== Events: <none> [root@master configMap]#
With --from-file pointing at a directory, every file in that directory becomes a key-value pair in the ConfigMap: the key is the file name and the value is the file content.
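If the file name is not a suitable key, --from-file also accepts an explicit key name in the form key=file. A minimal sketch, where the ConfigMap name my-keyed-config and the key game.properties are made up for illustration:
kubectl create configmap my-keyed-config \
  --from-file=game.properties=/data/app/k8s/configMap/configs/myConfig1.properties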
Creating from a single file. Point --from-file at a single file to create a ConfigMap from just that file.
kubectl create configmap my-fill-config --from-file=/data/app/k8s/configMap/configs/myConfig1.properties
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 [root@master configMap]# kubectl create configmap my-fill-config --from-file=/data/app/k8s/configMap/configs/myConfig1.properties configmap/my-fill-config created [root@master configMap]# kubectl get configmaps NAME DATA AGE kube-root-ca.crt 1 45d my-configs 2 3m38s my-fill-config 1 5s [root@master configMap]# kubectl describe configmaps my-fill-config Name: my-fill-config Namespace: default Labels: <none> Annotations: <none> Data ==== myConfig1.properties: ---- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 BinaryData ==== Events: <none> [root@master configMap]#
The --from-file flag can be passed multiple times; passing it twice with the two config files from the earlier example has the same effect as pointing it at the whole directory (see the sketch below).
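A sketch of the two-flag variant; the ConfigMap name my-two-files is made up for illustration:
kubectl create configmap my-two-files \
  --from-file=/data/app/k8s/configMap/configs/myConfig1.properties \
  --from-file=/data/app/k8s/configMap/configs/myConfig2.properties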
Creating from literal values. Configuration can be passed with the --from-literal flag, which may also be used multiple times, in the following format:
kubectl create configmap k-v-config --from-literal=special.how=very --from-literal=special.type=charm
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 [root@master configMap]# kubectl create configmap k-v-config --from-literal=special.how=very --from-literal=special.type=charm configmap/k-v-config created [root@master configMap]# kubectl get configmaps NAME DATA AGE k-v-config 2 3s kube-root-ca.crt 1 45d my-configs 2 5m49s my-fill-config 1 2m16s [root@master configMap]# kubectl describe configmaps k-v-config Name: k-v-config Namespace: default Labels: <none> Annotations: <none> Data ==== special.type: ---- charm special.how: ---- very BinaryData ==== Events: <none> [root@master configMap]#
Exporting a ConfigMap as YAML. The ConfigMap can be dumped in YAML format with the following command (a dry-run variant is sketched after the output):
kubectl get configmaps my-configs -o yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 [root@master configMap]# kubectl get configmaps my-configs -o yaml apiVersion: v1 data: myConfig1.properties: | enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 myConfig2.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: "2024-12-24T12:25:14Z" name: my-configs namespace: default resourceVersion: "189544" uid: b80b50f8-3b48-4982-a963-8b790fde20d5 [root@master configMap]#
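If you only want to generate a manifest without creating anything on the cluster, kubectl's client-side dry run can emit the same YAML; a small sketch (the output file name is arbitrary):
kubectl create configmap my-configs \
  --from-file=/data/app/k8s/configMap/configs \
  --dry-run=client -o yaml > my-configs.yaml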
Using a ConfigMap in a Pod: as environment variables. Create the manifest configMapEnv.yaml.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm --- apiVersion: v1 kind: ConfigMap metadata: name: env-config namespace: default data: log_level: INFO --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: 192.168 .16 .110 :20080/stady/myapp:v1 command: [ "/bin/sh" , "-c" , "env" ] env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type envFrom: - configMapRef: name: env-config restartPolicy: Never
special-config is imported key by key (via configMapKeyRef); env-config is imported in full (via envFrom).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 [root@master configMap]# kubectl apply -f configMapEnv.yaml configmap/special-config unchanged configmap/env-config unchanged pod/dapi-test-pod created [root@master configMap]# kubectl get pod NAME READY STATUS RESTARTS AGE dapi-test-pod 0/1 Completed 0 38s .... [root@master configMap]# kubectl logs dapi-test-pod MYAPP_SVC_PORT_80_TCP_ADDR=10.98.57.156 NGINX_SVC_SERVICE_HOST=10.104.90.73 KUBERNETES_PORT=tcp://10.96.0.1:443 MYAPP_SERVICE_PORT_HTTP=80 KUBERNETES_SERVICE_PORT=443 MYAPP_SVC_PORT_80_TCP_PORT=80 HOSTNAME=dapi-test-pod SHLVL=1 MYAPP_SVC_PORT_80_TCP_PROTO=tcp HOME=/root MYAPP_NODEPORT_PORT_80_TCP=tcp://10.99.67.70:80 NGINX_SVC_PORT=tcp://10.104.90.73:80 MYAPP_SERVICE_HOST=10.97.220.149 NGINX_SVC_SERVICE_PORT=80 SPECIAL_TYPE_KEY=charm MYAPP_SVC_PORT_80_TCP=tcp://10.98.57.156:80 MYAPP_NODEPORT_SERVICE_PORT_HTTP=80 NGINX_SVC_PORT_80_TCP_ADDR=10.104.90.73 MYAPP_SERVICE_PORT=80 MYAPP_PORT=tcp://10.97.220.149:80 NGINX_SVC_PORT_80_TCP_PORT=80 NGINX_SVC_PORT_80_TCP_PROTO=tcp NGINX_VERSION=1.12.2 KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1 MYAPP_NODEPORT_SERVICE_HOST=10.99.67.70 MYAPP_PORT_80_TCP_ADDR=10.97.220.149 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_PROTO=tcp MYAPP_PORT_80_TCP_PORT=80 MYAPP_PORT_80_TCP_PROTO=tcp NGINX_SVC_PORT_80_TCP=tcp://10.104.90.73:80 MYAPP_SVC_SERVICE_HOST=10.98.57.156 MYAPP_NODEPORT_SERVICE_PORT=80 MYAPP_NODEPORT_PORT=tcp://10.99.67.70:80 SPECIAL_LEVEL_KEY=very log_level=INFO KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443 PWD=/ MYAPP_NODEPORT_PORT_80_TCP_ADDR=10.99.67.70 MYAPP_PORT_80_TCP=tcp://10.97.220.149:80 KUBERNETES_SERVICE_HOST=10.96.0.1 MYAPP_SVC_SERVICE_PORT=80 MYAPP_SVC_PORT=tcp://10.98.57.156:80 MYAPP_NODEPORT_PORT_80_TCP_PORT=80 MYAPP_NODEPORT_PORT_80_TCP_PROTO=tcp [root@master configMap]#
Setting command-line arguments with a ConfigMap. Create the manifest configMapCmd.yaml.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: 192.168 .16 .110 :20080/stady/myapp:v1 command: [ "/bin/sh" , "-c" , "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ] env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never
1 2 3 4 5 6 7 8 9 10 11 12 13 [root@master configMap]# kubectl apply -f configMapCmd.yaml configmap/special-config created pod/dapi-test-pod created [root@master configMap]# [root@master configMap]# kubectl get pod NAME READY STATUS RESTARTS AGE dapi-test-pod 0/1 Completed 0 6s .... [root@master configMap]# [root@master configMap]# kubectl logs dapi-test-pod very charm [root@master configMap]#
Consuming a ConfigMap through a volume. Create the manifest configMapFile.yaml.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm --- apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: 192.168 .16 .110 :20080/stady/myapp:v1 command: [ "/bin/sh" , "-c" , "cat /etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config restartPolicy: Never
1 2 3 4 5 6 7 8 9 10 [root@master configMap]# kubectl apply -f configMapFile.yaml configmap/special-config created pod/dapi-test-pod created [root@master configMap]# kubectl get pod NAME READY STATUS RESTARTS AGE dapi-test-pod 0/1 Completed 0 6s ... [root@master configMap]# [root@master configMap]# kubectl logs dapi-test-pod very[root@master configMap]#
There are several options for consuming a ConfigMap inside a volume. The most basic one is to populate the volume with files, where each key becomes a file name and the corresponding value becomes the file content.
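If you only need some of the keys, or want to control the file names, the volume's items field maps keys to paths. A minimal sketch of the volumes section of the Pod above, reusing the special-config ConfigMap (the target path keys/how.conf is an arbitrary example):
volumes:
  - name: config-volume
    configMap:
      name: special-config
      items:
        - key: special.how
          path: keys/how.conf   # appears in the container as /etc/config/keys/how.conf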
Hot-reloading a ConfigMap. Build the manifest configMapHot.yaml.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 apiVersion: v1 kind: ConfigMap metadata: name: log-config namespace: default data: log_level: INFO --- apiVersion: apps/v1 kind: Deployment metadata: name: my-nginx spec: replicas: 1 selector: matchLabels: name: my-nginx template: metadata: labels: name: my-nginx spec: containers: - name: my-nginx image: 192.168 .16 .110 :20080/stady/myapp:v1 ports: - containerPort: 80 volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: log-config
[root@master configMap]# kubectl apply -f configMapHot.yaml
configmap/log-config unchanged
deployment.apps/my-nginx created
[root@master configMap]# kubectl get pods -l name=my-nginx
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-599788c876-sfxt8   1/1     Running   0          2m17s
[root@master configMap]# kubectl exec $(kubectl get pods -l name=my-nginx -o=name|cut -d "/" -f2) -- cat /etc/config/log_level
INFO[root@master configMap]#
Modify the ConfigMap
kubectl edit configmap log-config
Change the value of log_level to DEBUG, wait roughly 10 seconds, then check the mounted value again.
The edit works like opening the file in vi: change the content, then save and quit.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  log_level: DEBUG
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"log_level":"INFO"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"log-config","namespace":"default"}}
  creationTimestamp: "2024-12-24T13:16:25Z"
  name: log-config
  namespace: default
  resourceVersion: "195129"
  uid: 4f221f29-8dfc-40e4-960e-cbed32609ede
Wait about 10 seconds and query again.
[root@master configMap]# kubectl exec $(kubectl get pods -l name=my-nginx -o=name|cut -d "/" -f2) -- cat /etc/config/log_level
DEBUG[root@master configMap]#
[root@master configMap]#
Rolling out Pods after a ConfigMap update
Updating a ConfigMap does not currently trigger a rolling update of the Pods that use it; you can force one by patching an annotation in the Pod template.
kubectl patch deployment my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20241224" }}}}}'
The container has been restarted and the Pod template annotation now shows version/config: 20241224.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 [root@master configMap]# kubectl patch deployment my-nginx --patch '{"spec": {"template": {"metadata": {"annotations": {"version/config": "20241224" }}}}}' deployment.apps/my-nginx patched [root@master configMap]# kubectl get pod NAME READY STATUS RESTARTS AGE my-nginx-64987b9879-p2btx 1/1 Running 0 5s nginx-dm-56996c5fdc-tmjws 1/1 Running 1 (21h ago) 22h nginx-dm-56996c5fdc-wcs8g 1/1 Running 1 (21h ago) 22h [root@master configMap]# kubectl describe deployment my-nginx Name: my-nginx Namespace: default CreationTimestamp: Tue, 24 Dec 2024 21:16:25 +0800 Labels: <none> Annotations: deployment.kubernetes.io/revision: 2 Selector: name=my-nginx Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: name=my-nginx Annotations: version/config: 20241224 Containers: my-nginx: Image: 192.168.16.110:20080/stady/myapp:v1 Port: 80/TCP Host Port: 0/TCP Environment: <none> Mounts: /etc/config from config-volume (rw) Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: log-config Optional: false Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: my-nginx-599788c876 (0/0 replicas created) NewReplicaSet: my-nginx-64987b9879 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 9m22s deployment-controller Scaled up replica set my-nginx-599788c876 to 1 Normal ScalingReplicaSet 37s deployment-controller Scaled up replica set my-nginx-64987b9879 to 1 Normal ScalingReplicaSet 36s deployment-controller Scaled down replica set my-nginx-599788c876 to 0 from 1 [root@master configMap]#
After a ConfigMap is updated:
Environment variables sourced from the ConfigMap are not refreshed.
Data mounted from the ConfigMap as a Volume takes a while to refresh (roughly 10 seconds in this test); an alternative to the annotation patch is sketched below.
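If you prefer not to patch annotations by hand, newer kubectl versions can trigger the same rollout directly; a minimal alternative for the my-nginx Deployment above:
kubectl rollout restart deployment my-nginx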
Secret. Secrets address the problem of configuring sensitive data such as passwords, tokens, and keys without exposing it in the image or the Pod spec. A Secret can be consumed as a Volume or as environment variables.
There are three types of Secret:
Service Account: used to access the Kubernetes API; created automatically by Kubernetes and mounted automatically into Pods at /run/secrets/kubernetes.io/serviceaccount
Opaque: a base64-encoded Secret used to store passwords, keys, and similar data
kubernetes.io/dockerconfigjson: stores authentication information for a private docker registry
Service Account. A ServiceAccount is used to access the Kubernetes API and is created automatically by Kubernetes. Before Kubernetes v1.24 the ServiceAccount Secret was mounted by default at /run/secrets/kubernetes.io/serviceaccount; from v1.24 on this may have changed, and the default path is /var/run/secrets/kubernetes.io/serviceaccount.
Take the flannel Pod as an example; the corresponding directory can be found among its mounts.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 [root@master configMap]# kubectl get pod -A | grep flan kube-flannel kube-flannel-ds-c22v9 1/1 Running 14 (122m ago) 45d kube-flannel kube-flannel-ds-vv4c8 1/1 Running 18 (122m ago) 45d kube-flannel kube-flannel-ds-zqllr 1/1 Running 16 (122m ago) 45d [root@master configMap]# [root@master configMap]# kubectl describe pod kube-flannel-ds-c22v9 -n kube-flannel Name: kube-flannel-ds-c22v9 Namespace: kube-flannel Priority: 2000001000 Priority Class Name: system-node-critical Service Account: flannel Node: master/192.168.16.200 Start Time: Sat, 09 Nov 2024 19:39:26 +0800 Labels: app=flannel controller-revision-hash=b54d875dc k8s-app=flannel pod-template-generation=1 tier=node Annotations: <none> Status: Running IP: 192.168.16.200 IPs: IP: 192.168.16.200 Controlled By: DaemonSet/kube-flannel-ds Init Containers: install-cni-plugin: Container ID: docker://bc78acea68d5c1b899240ef141065108a1b02f9a9140029257f9a7351fc9b240 Image: docker.io/flannel/flannel-cni-plugin:v1.2.0 Image ID: docker-pullable://192.168.16.110:20080/k8s/flannel-cni-plugin@sha256:2180bb74f60bea56da2e9be2004271baa6dccc0960b7aeaf43a97fc4de9b1ae0 Port: <none> Host Port: <none> Command: cp Args: -f /flannel /opt/cni/bin/flannel State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 24 Dec 2024 19:46:18 +0800 Finished: Tue, 24 Dec 2024 19:46:19 +0800 Ready: True Restart Count: 1 Environment: <none> Mounts: /opt/cni/bin from cni-plugin (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rp2q (ro) install-cni: Container ID: docker://6249eaae0226fa95c9bd3ce9087a6f1833f110bd41a95fcbd4c0aa4d527d1bcf Image: docker.io/flannel/flannel:v0.22.3 Image ID: docker-pullable://192.168.16.110:20080/k8s/flannel@sha256:b2bba065c46f3a54db41cd5181b87baa0fca64eda8b511838cdc147dfc59e76d Port: <none> Host Port: <none> Command: cp Args: -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conflist State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 24 Dec 2024 19:46:20 +0800 Finished: Tue, 24 Dec 2024 19:46:20 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /etc/cni/net.d from cni (rw) /etc/kube-flannel/ from flannel-cfg (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rp2q (ro) Containers: kube-flannel: Container ID: docker://802dccfa129d6f489c133ccf817a925213928b366e121f09c999682851981480 Image: docker.io/flannel/flannel:v0.22.3 Image ID: docker-pullable://192.168.16.110:20080/k8s/flannel@sha256:b2bba065c46f3a54db41cd5181b87baa0fca64eda8b511838cdc147dfc59e76d Port: <none> Host Port: <none> Command: /opt/bin/flanneld Args: --ip-masq --kube-subnet-mgr State: Running Started: Tue, 24 Dec 2024 19:46:56 +0800 Last State: Terminated Reason: Error Exit Code: 1 Started: Tue, 24 Dec 2024 19:46:21 +0800 Finished: Tue, 24 Dec 2024 19:46:43 +0800 Ready: True Restart Count: 14 Requests: cpu: 100m memory: 50Mi Environment: POD_NAME: kube-flannel-ds-c22v9 (v1:metadata.name) POD_NAMESPACE: kube-flannel (v1:metadata.namespace) EVENT_QUEUE_DEPTH: 5000 Mounts: /etc/kube-flannel/ 
from flannel-cfg (rw) /run/flannel from run (rw) /run/xtables.lock from xtables-lock (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rp2q (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: run: Type: HostPath (bare host directory volume) Path: /run/flannel HostPathType: cni-plugin: Type: HostPath (bare host directory volume) Path: /opt/cni/bin HostPathType: cni: Type: HostPath (bare host directory volume) Path: /etc/cni/net.d HostPathType: flannel-cfg: Type: ConfigMap (a volume populated by a ConfigMap) Name: kube-flannel-cfg Optional: false xtables-lock: Type: HostPath (bare host directory volume) Path: /run/xtables.lock HostPathType: FileOrCreate kube-api-access-5rp2q: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: :NoSchedule op=Exists node.kubernetes.io/disk-pressure:NoSchedule op=Exists node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/network-unavailable:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists node.kubernetes.io/pid-pressure:NoSchedule op=Exists node.kubernetes.io/unreachable:NoExecute op=Exists node.kubernetes.io/unschedulable:NoSchedule op=Exists Events: <none> [root@master configMap]# [root@master configMap]# [root@master configMap]# kubectl exec kube-flannel-ds-c22v9 -n kube-flannel -- ls /var/run/secrets/kubernetes.io/serviceaccount Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init) ca.crt namespace token [root@master configMap]#
Opaque Secret: creation. Opaque data is a map whose values must be base64 encoded:
[root@master configMap]# echo -n "admin" | base64
YWRtaW4=
[root@master configMap]# echo -n "123456" | base64
MTIzNDU2
[root@master configMap]#
Build the manifest secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: MTIzNDU2
  username: YWRtaW4=
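The same Secret can also be created without hand-encoding base64 by letting kubectl do the encoding; a sketch equivalent to the manifest above:
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=123456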
Check the result
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 [root@master secrets]# kubectl apply -f opaqueSecret.yaml secret/mysecret created [root@master secrets]# [root@master secrets]# kubectl get secrets NAME TYPE DATA AGE mysecret Opaque 2 32s [root@master secrets]# kubectl get secret mysecret NAME TYPE DATA AGE mysecret Opaque 2 48s [root@master secrets]# kubectl get secret mysecret -o yaml apiVersion: v1 data: password: MTIzNDU2 username: YWRtaW4= kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","data":{"password":"MTIzNDU2","username":"YWRtaW4="},"kind":"Secret","metadata":{"annotations":{},"name":"mysecret","namespace":"default"},"type":"Opaque"} creationTimestamp: "2024-12-24T13:58:37Z" name: mysecret namespace: default resourceVersion: "198953" uid: 9d744d72-34a2-448f-84d1-5955df366b24 type: Opaque [root@master secrets]# [root@master secrets]# kubectl get secret mysecret -o jsonpath="{.data.password}" | base64 --decode 123456[root@master secrets]# kubectl get secret mysecret -o jsonpath="{.data.username}" | base64 --decode admin[root@master secrets]#
Usage: mounting a Secret into a Volume. Build the manifest secretVolume.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: secret-volume-test
  name: secret-volume-test
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: mysecret
  containers:
  - image: 192.168.16.110:20080/stady/myapp:v1
    name: db-secret-volume
    volumeMounts:
    - name: secret-volume
      mountPath: '/etc/secrets'
      readOnly: true
Verify
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 [root@master secrets]# kubectl apply -f secretVolume.yaml pod/secret-volume-test created [root@master secrets]# [root@master secrets]# kubectl get pod NAME READY STATUS RESTARTS AGE my-nginx-64987b9879-p2btx 1/1 Running 0 46m nginx-dm-56996c5fdc-tmjws 1/1 Running 1 (22h ago) 23h nginx-dm-56996c5fdc-wcs8g 1/1 Running 1 (22h ago) 23h secret-volume-test 1/1 Running 0 9s [root@master secrets]# kubectl exec secret-volume-test -- ls /etc/secrets password username [root@master secrets]# kubectl exec secret-volume-test -- cat /etc/secrets/password 123456[root@master secrets]# kubectl exec secret-volume-test -- cat /etc/secrets/username admin[root@master secrets]#
Exposing a Secret as environment variables. Build the manifest secretEnv.yaml.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 apiVersion: apps/v1 kind: Deployment metadata: name: secret-env-pod spec: replicas: 2 selector: matchLabels: name: secret-env template: metadata: labels: name: secret-env spec: containers: - name: secret-env-pod image: 192.168 .16 .110 :20080/stady/myapp:v1 ports: - containerPort: 80 env: - name: TEST_USER valueFrom: secretKeyRef: name: mysecret key: username - name: TEST_PASSWORD valueFrom: secretKeyRef: name: mysecret key: password
Check the Pod's environment variables
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@master secrets]# kubectl apply -f secretEnv.yaml deployment.apps/secret-env-pod created [root@master secrets]# kubectl get pod NAME READY STATUS RESTARTS AGE my-nginx-64987b9879-p2btx 1/1 Running 0 76m nginx-dm-56996c5fdc-tmjws 1/1 Running 1 (22h ago) 23h nginx-dm-56996c5fdc-wcs8g 1/1 Running 1 (22h ago) 23h secret-env-pod-6469d4d977-9bbfh 1/1 Running 0 3s secret-env-pod-6469d4d977-hj58v 1/1 Running 0 3s secret-volume-test 1/1 Running 0 30m [root@master secrets]# kubectl exec secret-env-pod-6469d4d977-9bbfh -- env PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=secret-env-pod-6469d4d977-9bbfh TEST_USER=admin TEST_PASSWORD=123456 .... [root@master secrets]#
kubernetes.io/dockerconfigjson. There is a private project named secret-registry in Harbor.
If you have logged in to the registry before, run the logout command on both the master and the node machines first.
docker logout 192.168.16.110:20080
Create the manifest secretRegistry1.yml
apiVersion: v1
kind: Pod
metadata:
  name: myregistrykey-pod1
spec:
  containers:
  - name: myregistrykey-con
    image: 192.168.16.110:20080/secret-registry/myapp:v1
The Pod fails to start; the reason is that the image pull fails.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 [root@master secrets]# kubectl apply -f secretRegistry1.yaml pod/myregistrykey-pod1 created [root@master secrets]# kubectl get pod NAME READY STATUS RESTARTS AGE myregistrykey-pod1 0/1 ImagePullBackOff 0 4s ... [root@master secrets]# kubectl logs myregistrykey-pod1 Error from server (BadRequest): container "myregistrykey-con" in pod "myregistrykey-pod1" is waiting to start: trying and failing to pull image [root@master secrets]# kubectl describe pod myregistrykey-pod1 Name: myregistrykey-pod1 Namespace: default Priority: 0 Service Account: default Node: node1/192.168.16.201 Start Time: Tue, 24 Dec 2024 23:20:47 +0800 Labels: <none> Annotations: <none> Status: Pending IP: 10.244.1.75 IPs: IP: 10.244.1.75 Containers: myregistrykey-con: Container ID: Image: 192.168.16.110:20080/secret-registry/myapp:v1 Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b52vw (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-b52vw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 25s default-scheduler Successfully assigned default/myregistrykey-pod1 to node1 Normal BackOff 22s (x2 over 23s) kubelet Back-off pulling image "192.168.16.110:20080/secret-registry/myapp:v1" Warning Failed 22s (x2 over 23s) kubelet Error: ImagePullBackOff Normal Pulling 11s (x2 over 24s) kubelet Pulling image "192.168.16.110:20080/secret-registry/myapp:v1" Warning Failed 11s (x2 over 24s) kubelet Failed to pull image "192.168.16.110:20080/secret-registry/myapp:v1": Error response from daemon: pull access denied for 192.168.16.110:20080/secret-registry/myapp, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Warning Failed 11s (x2 over 24s) kubelet Error: ErrImagePull [root@master secrets]#
Create a docker-registry authentication Secret with kubectl
kubectl create secret docker-registry myregistrykey --docker-server=192.168.16.110:20080 --docker-username=admin --docker-password=Harbor123456 --docker-email=test@test.com
Verify
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@master secrets]# kubectl create secret docker-registry myregistrykey --docker-server=192.168.16.110:20080 --docker-username=admin --docker-password=Harbor123456 --docker-email=test@test.com secret/myregistrykey created [root@master secrets]# kubectl get secret myregistrykey -o yaml apiVersion: v1 data: .dockerconfigjson: eyJhdXRocyI6eyIxOTIuMTY4LjE2LjExMDoyMDA4MCI6eyJ1c2VybmFtZSI6ImFkbWluIiwicGFzc3dvcmQiOiJIYXJib3IxMjM0NTYiLCJlbWFpbCI6InRlc3RAdGVzdC5jb20iLCJhdXRoIjoiWVdSdGFXNDZTR0Z5WW05eU1USXpORFUyIn19fQ== kind: Secret metadata: creationTimestamp: "2024-12-24T15:24:15Z" name: myregistrykey namespace: default resourceVersion: "207451" uid: 403655ee-5087-426e-a853-b13b51b4331f type: kubernetes.io/dockerconfigjson [root@master secrets]# kubectl get secret myregistrykey -o jsonpath="{.data['\.dockerconfigjson']}" | base64 --decode {"auths":{"192.168.16.110:20080":{"username":"admin","password":"Harbor123456","email":"test@test.com","auth":"YWRtaW46SGFyYm9yMTIzNDU2"}}}[root@master secrets]#
Create the manifest secretRegistry2.yml
Reference the Secret through imagePullSecrets when creating the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: myregistrykey-pod2
spec:
  containers:
  - name: myregistrykey-con
    image: 192.168.16.110:20080/secret-registry/myapp:v1
  imagePullSecrets:
  - name: myregistrykey
Verify
The image is pulled successfully and the Pod starts (an alternative via the ServiceAccount is sketched after the output).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 [root@master secrets]# kubectl apply -f secretRegistry2.yaml pod/myregistrykey-pod2 created [root@master secrets]# kubectl get pod NAME READY STATUS RESTARTS AGE myregistrykey-pod1 0/1 ImagePullBackOff 0 4m55s myregistrykey-pod2 1/1 Running 0 5s ... [root@master secrets]# kubectl describe pod myregistrykey-pod2 Name: myregistrykey-pod2 Namespace: default Priority: 0 Service Account: default Node: node2/192.168.16.202 Start Time: Tue, 24 Dec 2024 23:25:37 +0800 Labels: <none> Annotations: <none> Status: Running IP: 10.244.2.63 IPs: IP: 10.244.2.63 Containers: myregistrykey-con: Container ID: docker://008d9ba05b100f3a38ed3a0dac6d5445c25de75dea712012c72ecaa8644090bd Image: 192.168.16.110:20080/secret-registry/myapp:v1 Image ID: docker-pullable://192.168.16.110:20080/secret-registry/myapp@sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e Port: <none> Host Port: <none> State: Running Started: Tue, 24 Dec 2024 23:25:38 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lnxd (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-4lnxd: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 18s default-scheduler Successfully assigned default/myregistrykey-pod2 to node2 Normal Pulling 17s kubelet Pulling image "192.168.16.110:20080/secret-registry/myapp:v1" Normal Pulled 17s kubelet Successfully pulled image "192.168.16.110:20080/secret-registry/myapp:v1" in 93ms (93ms including waiting) Normal Created 17s kubelet Created container myregistrykey-con Normal Started 17s kubelet Started container myregistrykey-con [root@master secrets]#
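Instead of adding imagePullSecrets to every Pod, the Secret can also be attached to the ServiceAccount the Pods run under, so it is used automatically; a sketch for the default ServiceAccount in the default namespace:
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'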
Volume. Files on a container's disk are ephemeral, which creates problems for important applications. First, when a container crashes, kubelet restarts it, but the files are lost: the container restarts in a clean state, i.e. the original image state. Second, when several containers run in one Pod, they often need to share files. The Volume abstraction in Kubernetes solves both problems.
A volume in Kubernetes has a well-defined lifetime, the same as the Pod that encloses it. A volume therefore outlives any individual container in the Pod, and data survives container restarts; of course, when the Pod ceases to exist, the volume does too. Perhaps more importantly, Kubernetes supports many types of volumes, and a Pod can use any number of them at the same time.
Volume types. Kubernetes supports the following volume types:
awsElasticBlockStore azureDisk azureFile cephfs csi downwardAPI emptyDir
fc flocker gcePersistentDisk gitRepo glusterfs hostPath iscsi local nfs
persistentVolumeClaim projected portworxVolume quobyte rbd scaleIO secret
storageos vsphereVolume
emptyDir. An emptyDir volume is created when a Pod is assigned to a node and exists for as long as the Pod runs on that node. As the name says, it is initially empty. The containers in the Pod can all read and write the same files in the emptyDir volume, even though it may be mounted at the same or different paths in each container. When the Pod is removed from the node for any reason, the data in the emptyDir is deleted permanently.
Typical uses of emptyDir:
Scratch space, for example for a disk-based merge sort
Checkpointing a long computation so it can recover from a crash
Holding files that a content-manager container fetches while a web-server container serves them
Build a manifest emptyDir.yaml; a two-container sharing sketch follows it.
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-pd
spec:
  containers:
  - image: 192.168.16.110:20080/stady/myapp:v1
    name: empty-dir-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
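The manifest above mounts the emptyDir into a single container; the file-sharing use case from the list becomes clearer with two containers mounting the same volume. A minimal sketch, assuming a generic busybox image is reachable from this cluster (the Pod and container names are made up):
apiVersion: v1
kind: Pod
metadata:
  name: empty-dir-share
spec:
  containers:
  - name: writer
    image: busybox            # assumed to be pullable in this cluster
    command: ["/bin/sh", "-c", "echo hello > /cache/data.txt && sleep 3600"]
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "sleep 5; cat /cache/data.txt && sleep 3600"]
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}              # both containers see the same files here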
hostPath. A hostPath volume mounts a file or directory from the host node's filesystem into the Pod.
Uses of hostPath include:
Running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
Running cAdvisor in a container; use a hostPath of /dev/cgroups
Letting a Pod specify whether a given hostPath should exist before the Pod runs, whether it should be created, and what it should exist as
In addition to the required path property, a user can specify a type for a hostPath volume.
Values of the hostPath type field and their behavior:
(empty string, the default): for backward compatibility; no checks are performed before mounting the hostPath volume
DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there, with permissions 0755 and the kubelet's user and group
Directory: a directory must already exist at the given path
FileOrCreate: if nothing exists at the given path, an empty file is created, with permissions 0644 and the kubelet's user and group
File: a file must already exist at the given path
Socket: a UNIX socket must already exist at the given path
CharDevice: a character device must already exist at the given path
BlockDevice: a block device must already exist at the given path
Watch out for the following when using this volume type:
Because the files on each node are different, Pods with identical configuration (for example created from a podTemplate) may behave differently on different nodes
When Kubernetes adds resource-aware scheduling as planned, it will not be able to account for the resources a hostPath uses
Files or directories created on the underlying host are writable only by root; you either need to run your process as root in a privileged container, or modify the file permissions on the host so the hostPath volume can be written
Build the manifest hostPath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-path-pd
spec:
  containers:
  - image: 192.168.16.110:20080/stady/myapp:v1
    name: host-path-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: Directory
You can see that the node's /data directory is mounted at /test-pd inside the Pod's container (a DirectoryOrCreate variant is sketched after the output).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 [root@master volume]# kubectl apply -f hostPath.yaml pod/host-path-pd created [root@master volume]# kubectl get pod NAME READY STATUS RESTARTS AGE empty-dir-pd 1/1 Running 0 13m host-path-pd 1/1 Running 0 8s nginx-dm-56996c5fdc-tmjws 1/1 Running 1 (23h ago) 25h nginx-dm-56996c5fdc-wcs8g 1/1 Running 1 (23h ago) 25h [root@master volume]# kubectl exec host-path-pd -- ls /test-pd/ DS Miniconda3-py39_23.9.0-0-Linux-x86_64.sh app bak docker k8s log pkg tmp [root@master volume]#
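If the directory might not exist on every node, DirectoryOrCreate avoids failures by creating it on demand; a sketch of just the volume definition (the path /data/hostpath-demo is an arbitrary example):
volumes:
- name: test-volume
  hostPath:
    path: /data/hostpath-demo
    type: DirectoryOrCreate   # created with 0755 if missing, see the table above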
PersistentVolume: concepts. A PersistentVolume (PV) is a piece of storage provisioned by an administrator and is part of the cluster. Just as a node is a cluster resource, a PV is a cluster resource too. PVs are volume plugins like Volumes, but they have a lifecycle independent of any Pod that uses them. This API object captures the details of the storage implementation, e.g. NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a user's request for storage. It is similar to a Pod: Pods consume node resources, PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, mounted read/write once or read-only many times).
Static PVs. The cluster administrator creates a number of PVs. They carry the details of the real storage available to cluster users, exist in the Kubernetes API, and are ready to be consumed.
Dynamic. When none of the administrator's static PVs match a user's PersistentVolumeClaim, the cluster may try to provision a volume dynamically for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to happen. Requesting the class "" effectively disables dynamic provisioning for that claim.
To enable dynamic provisioning based on storage class, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server, for example by making sure DefaultStorageClass appears in the comma-separated, ordered list passed to the API server's --admission-control flag.
Binding. A control loop on the master watches for new PVCs, finds a matching PV where possible, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to the PVC. Otherwise the user always gets at least what was requested, although the volume may exceed the requested capacity. Once bound, a PersistentVolumeClaim binding is exclusive, regardless of how it was bound: PVC-to-PV binding is a one-to-one mapping.
Protection of persistent volume claims. The purpose of PVC protection is to ensure that a PVC still in use by a Pod is not removed from the system, which could lead to data loss. When the PVC protection alpha feature is enabled and a user deletes a PVC that a Pod is still using, the PVC is not removed immediately; its deletion is postponed until no Pod uses it any more.
Persistent volume types. PersistentVolume types are implemented as plugins. Kubernetes currently supports the following:
GCEPersistentDisk AWSElasticBlockStore AzureFile AzureDisk FC (Fibre Channel)
FlexVolume Flocker NFS iSCSI RBD (Ceph Block Device) CephFS
Cinder (OpenStack block storage) Glusterfs VsphereVolume Quobyte Volumes
HostPath VMware Photon Portworx Volumes ScaleIO Volumes StorageOS
Example PersistentVolume manifest
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
PV access modes. A PersistentVolume can be mounted on a host in any way the resource provider supports. Providers have different capabilities, and each PV's access modes are set to the specific modes that volume supports. For example, NFS can support multiple read/write clients, but a particular NFS PV might be exported read-only. Each PV gets its own set of access modes describing its capabilities.
ReadWriteOnce: the volume can be mounted read/write by a single node
ReadOnlyMany: the volume can be mounted read-only by many nodes
ReadWriteMany: the volume can be mounted read/write by many nodes
In the command line, the access modes are abbreviated as:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Reclaim policies
Retain: manual reclamation
Recycle: basic scrub (rm -rf /thevolume/*)
Delete: the associated storage asset (such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted
Currently only NFS and HostPath support recycling; AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
Phases. A volume can be in one of the following phases:
Available: a free resource that is not yet bound to any claim
Bound: the volume is bound to a claim
Released: the claim has been deleted, but the resource has not yet been reclaimed by the cluster
Failed: the volume's automatic reclamation failed
The command line shows the name of the PVC bound to each PV.
Persistence demo with NFS. Install the NFS packages; they are needed on all nodes.
On the master host, which acts as the NFS server:
yum install -y nfs-common nfs-utils rpcbind
On the node machines, which act as NFS clients:
yum install -y nfs-utils rpcbind
Configure the exported NFS directories
[root@master ~]# mkdir /nfsdata
[root@master ~]# chmod 666 /nfsdata
[root@master ~]# chown nfsnobody /nfsdata
[root@master ~]# cat /etc/exports
[root@master ~]# cat > /etc/exports <<EOF
> /nfsdata/nfspv1 *(rw,no_root_squash,no_all_squash,sync)
> /nfsdata/nfspv2 *(rw,no_root_squash,no_all_squash,sync)
> /nfsdata/nfspv3 *(rw,no_root_squash,no_all_squash,sync)
> /nfsdata/nfspv4 *(rw,no_root_squash,no_all_squash,sync)
> /nfsdata/nfspv5 *(rw,no_root_squash,no_all_squash,sync)
> /nfsdata/nfspv6 *(rw,no_root_squash,no_all_squash,sync)
> EOF
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs
[root@master ~]#
Check the directories the NFS server exports
[root@master ~]# showmount -e 192.168.16.200
Export list for 192.168.16.200:
/nfsdata *
[root@master ~]#
Deploy a PV. Build the manifest nfspv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/nfspv1
    server: 192.168.16.200
[root@master pvc]# kubectl apply -f nfspv1.yaml
persistentvolume/nfspv1 created
[root@master pvc]#
[root@master pvc]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfspv1   10Gi       RWO            Retain           Available           nfs                     6m2s
[root@master pvc]#
Create a Service and use PVCs
The test uses 3 replicas, so at least three PVs are needed; create several more PVs of different sizes.
mkdir /nfsdata/nfspv{2..6}
chmod 777 /nfsdata/nfspv{2..6}
Build a manifest with multiple PVs, morePv.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 apiVersion: v1 kind: PersistentVolume metadata: name: nfspv2 spec: capacity: storage: 2Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: nfs nfs: path: /nfsdata/nfspv2 server: 192.168 .16 .200 --- apiVersion: v1 kind: PersistentVolume metadata: name: nfspv3 spec: capacity: storage: 4Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: nfs nfs: path: /nfsdata/nfspv3 server: 192.168 .16 .200 --- apiVersion: v1 kind: PersistentVolume metadata: name: nfspv4 spec: capacity: storage: 8Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: nfs nfs: path: /nfsdata/nfspv4 server: 192.168 .16 .200 --- apiVersion: v1 kind: PersistentVolume metadata: name: nfspv5 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle storageClassName: nfs nfs: path: /nfsdata/nfspv5 server: 192.168 .16 .200 --- apiVersion: v1 kind: PersistentVolume metadata: name: nfspv6 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: nfs nfs: path: /nfsdata/nfspv6 server: 192.168 .16 .200
1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@master pvc]# kubectl apply -f morePv.yaml persistentvolume/nfspv2 created persistentvolume/nfspv3 created persistentvolume/nfspv4 created persistentvolume/nfspv5 created persistentvolume/nfspv6 created [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 12m nfspv2 2Gi RWO Retain Available nfs 8s nfspv3 4Gi RWO Retain Available nfs 8s nfspv4 8Gi RWO Retain Available nfs 8s nfspv5 1Gi RWO Recycle Available nfs 8s nfspv6 1Gi RWO Delete Available nfs 8s
Build the PVC test manifest pvc.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 apiVersion: v1 kind: Service metadata: name: pvc-svc labels: app: nginx spec: ports: - port: 80 name: pvc-web clusterIP: None selector: app: pvc-nginx --- apiVersion: apps/v1 kind: StatefulSet metadata: name: stateful-set-web spec: selector: matchLabels: app: pvc-nginx serviceName: pvc-svc replicas: 3 template: metadata: labels: app: pvc-nginx spec: containers: - name: nginx-c image: 192.168 .16 .110 :20080/stady/myapp:v1 ports: - containerPort: 80 name: pvc-web volumeMounts: - name: pvc-vm-www mountPath: /usr/share/nginx/html volumeClaimTemplates: - metadata: name: pvc-vm-www spec: accessModes: [ "ReadWriteOnce" ] storageClassName: nfs resources: requests: storage: 1Gi
Querying shows that the Pods were created successfully and three PVs are now bound.
For each claim, the PV chosen is the one that wastes the least storage.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 [root@master pvc]# kubectl apply -f pvc.yaml service/pvc-svc created statefulset.apps/stateful-set-web created [root@master pvc]# kubectl get pod NAME READY STATUS RESTARTS AGE empty-dir-pd 1/1 Running 1 (21h ago) 21h host-path-pd 1/1 Running 1 (21h ago) 21h nginx-dm-56996c5fdc-tmjws 1/1 Running 2 (21h ago) 46h nginx-dm-56996c5fdc-wcs8g 1/1 Running 2 (21h ago) 46h stateful-set-web-0 1/1 Running 0 7s stateful-set-web-1 1/1 Running 0 5s stateful-set-web-2 1/1 Running 0 2s [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 13m nfspv2 2Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-2 nfs 86s nfspv3 4Gi RWO Retain Available nfs 86s nfspv4 8Gi RWO Retain Available nfs 86s nfspv5 1Gi RWO Recycle Bound default/pvc-vm-www-stateful-set-web-0 nfs 86s nfspv6 1Gi RWO Delete Bound default/pvc-vm-www-stateful-set-web-1 nfs 86s [root@master pvc]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc-vm-www-stateful-set-web-0 Bound nfspv5 1Gi RWO nfs 18s pvc-vm-www-stateful-set-web-1 Bound nfspv6 1Gi RWO nfs 16s pvc-vm-www-stateful-set-web-2 Bound nfspv2 2Gi RWO nfs 13s [root@master pvc]#
Write an index.html into web-0's storage; the content can then be fetched over HTTP, which confirms the storage is working.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 82m nfspv2 2Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-2 nfs 28s nfspv3 4Gi RWO Retain Available nfs 28s nfspv4 8Gi RWO Retain Available nfs 28s nfspv5 1Gi RWO Recycle Bound default/pvc-vm-www-stateful-set-web-0 nfs 28s nfspv6 1Gi RWO Delete Bound default/pvc-vm-www-stateful-set-web-1 nfs 28s [root@master pvc]# cat > /nfsdata/nfspv5/index.html <<EOF > aaaaa > EOF [root@master pvc]# [root@master pvc]# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES empty-dir-pd 1/1 Running 1 (22h ago) 23h 10.244.1.77 node1 <none> <none> host-path-pd 1/1 Running 1 (22h ago) 23h 10.244.2.67 node2 <none> <none> nginx-dm-56996c5fdc-tmjws 1/1 Running 2 (22h ago) 2d 10.244.1.79 node1 <none> <none> nginx-dm-56996c5fdc-wcs8g 1/1 Running 2 (22h ago) 2d 10.244.2.68 node2 <none> <none> stateful-set-web-0 1/1 Running 0 98s 10.244.1.100 node1 <none> <none> stateful-set-web-1 1/1 Running 0 95s 10.244.2.76 node2 <none> <none> stateful-set-web-2 1/1 Running 0 92s 10.244.1.101 node1 <none> <none> [root@master pvc]# curl 10.244.1.100 aaaaa [root@master pvc]#
The bound PVs use the reclaim policies Delete, Recycle, and Retain. When the Pods are deleted, the PVCs remain and the PVs stay Bound.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@master pvc]# kubectl delete -f pvc.yaml service "pvc-svc" deleted statefulset.apps "stateful-set-web" deleted [root@master pvc]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc-vm-www-stateful-set-web-0 Bound nfspv5 1Gi RWO nfs 3m7s pvc-vm-www-stateful-set-web-1 Bound nfspv6 1Gi RWO nfs 3m5s pvc-vm-www-stateful-set-web-2 Bound nfspv2 2Gi RWO nfs 3m2s [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 5m23s nfspv2 2Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-2 nfs 5m16s nfspv3 4Gi RWO Retain Available nfs 5m16s nfspv4 8Gi RWO Retain Available nfs 5m16s nfspv5 1Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-0 nfs 5m16s nfspv6 1Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-1 nfs 5m16s
After the PVCs are deleted, the PV phases differ: they do not return to Available but become Released or Failed.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 [root@master pvc]# kubectl delete pvc pvc-vm-www-stateful-set-web-2 persistentvolumeclaim "pvc-vm-www-stateful-set-web-2" deleted [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 6m23s nfspv2 2Gi RWO Retain Released default/pvc-vm-www-stateful-set-web-2 nfs 6m16s nfspv3 4Gi RWO Retain Available nfs 6m16s nfspv4 8Gi RWO Retain Available nfs 6m16s nfspv5 1Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-0 nfs 6m16s nfspv6 1Gi RWO Retain Bound default/pvc-vm-www-stateful-set-web-1 nfs 6m16s [root@master pvc]# [root@master pvc]# kubectl delete pvc pvc-vm-www-stateful-set-web-0 pvc-vm-www-stateful-set-web-1 pvc-vm-www-stateful-set-web-2 persistentvolumeclaim "pvc-vm-www-stateful-set-web-0" deleted persistentvolumeclaim "pvc-vm-www-stateful-set-web-1" deleted persistentvolumeclaim "pvc-vm-www-stateful-set-web-2" deleted [root@master pvc]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE nfspv1 10Gi RWO Retain Available nfs 64s nfspv2 2Gi RWO Retain Released default/pvc-vm-www-stateful-set-web-2 nfs 59s nfspv3 4Gi RWO Retain Available nfs 59s nfspv4 8Gi RWO Retain Available nfs 59s nfspv5 1Gi RWO Recycle Failed default/pvc-vm-www-stateful-set-web-0 nfs 59s nfspv6 1Gi RWO Delete Failed default/pvc-vm-www-stateful-set-web-1 nfs 59s
For NFS PersistentVolumes, the reclaimPolicy values behave as follows:
Retain: the default reclaim policy. When the PVC is deleted, the PV is not reclaimed or deleted automatically; an administrator has to handle the data on the PV manually.
Recycle: this policy tries to delete everything on the PV so it can be reused; however, not every volume plugin supports Recycle, and in this cluster the NFS PV with this policy ended up Failed.
Delete: this policy deletes the PV together with its associated storage asset. It is usually supported for dynamically provisioned PVs, but not for statically configured PVs such as NFS, because the lifecycle of an NFS PV is normally not managed by Kubernetes. For NFS PVs you therefore usually stick with Retain and clean up the data manually, or automate the cleanup with another mechanism.
[root@master pvc]# kubectl describe pv nfspv6
Name:            nfspv6
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    nfs
Status:          Failed
Claim:           default/pvc-vm-www-stateful-set-web-1
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         error getting deleter volume plugin for volume "nfspv6": no deletable volume plugin matched
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.16.200
    Path:      /nfsdata/nfspv6
    ReadOnly:  false
Events:
  Type     Reason              Age   From                         Message
  ----     ------              ----  ----                         -------
  Warning  VolumeFailedDelete  26s   persistentvolume-controller  error getting deleter volume plugin for volume "nfspv6": no deletable volume plugin matched
A simple way to handle this: once the directory contents have been dealt with (backed up or deleted as needed), manually delete the corresponding PV and recreate it.
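If the data has been dealt with and you want to keep the PV object itself, another commonly used approach is to clear its claimRef so it returns to Available; a sketch for nfspv2 (make sure the exported directory is clean first):
kubectl patch pv nfspv2 --type json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'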
About StatefulSet
The pattern for Pod names (their network identities) is $(statefulset name)-$(ordinal), for example web-0, web-1, web-2.
StatefulSet creates a DNS name for every Pod replica in the format $(podname).(headless service name), which means services talk to each other through Pod domain names rather than Pod IPs: when the node a Pod runs on fails, the Pod may be rescheduled to another node and its IP changes, but its domain name does not.
StatefulSet uses a Headless Service to control the Pods' domain names; the FQDN is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain (a lookup example follows this list).
Based on volumeClaimTemplates, a PVC is created for each Pod, named $(volumeClaimTemplates.name)-$(pod_name); for example, with volumeMounts.name=www and Pod names web-[0-2], the PVCs created are www-web-0, www-web-1 and www-web-2.
Deleting a Pod does not delete its PVC; manually deleting the PVC automatically releases the PV.
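A quick way to check the per-Pod DNS names mentioned above; a sketch that assumes a busybox image with a working nslookup is reachable from this cluster:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup stateful-set-web-0.pvc-svc.default.svc.cluster.local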
Start and stop ordering of a StatefulSet:
Ordered deployment: when a StatefulSet with several replicas is deployed, the Pods are created in order (from 0 to N-1), and every earlier Pod must be Running and Ready before the next one starts.
Ordered deletion: when the Pods are deleted, they are terminated in reverse order, from N-1 to 0.
Ordered scaling: when scaling the Pods, as with deployment, all Pods before the new one must be Running and Ready.
StatefulSet use cases:
Stable persistent storage: a Pod can reach the same persisted data after being rescheduled, implemented with PVCs.
Stable network identity: a Pod keeps its PodName and HostName after being rescheduled.
Ordered deployment and ordered scaling, implemented with init containers.
Ordered scale-down.
Cluster scheduler: overview. The Scheduler is the Kubernetes component that assigns defined Pods to nodes in the cluster. That sounds simple, but there is a lot to take into account:
Fairness: how to make sure every node gets resources assigned
Efficient resource use: all cluster resources should be used as fully as possible
Performance: scheduling should be fast and able to place large batches of Pods quickly
Flexibility: users must be able to control the scheduling logic according to their own needs. The scheduler runs as a separate program; once started it keeps watching the API server for Pods whose PodSpec.NodeName is empty and creates a binding for each of them, stating which node the Pod should be placed on.
Scheduling process. Scheduling happens in stages: first, nodes that do not satisfy the Pod's requirements are filtered out (the predicate stage); the remaining nodes are then ranked (the priority stage); finally the node with the highest priority is chosen. If any step returns an error, scheduling fails immediately. The predicate stage has a series of algorithms it can use:
PodFitsResources: whether the resources left on the node are larger than what the Pod requests
PodFitsHost: if the Pod specifies a NodeName, whether the node's name matches it
PodFitsHostPorts: whether the ports already in use on the node conflict with the ports the Pod requests
PodSelectorMatches: filters out nodes that do not match the labels the Pod specifies
NoDiskConflict: the volumes already mounted must not conflict with the volumes the Pod specifies, unless both are read-only
If no node is suitable after the predicate stage, the Pod stays Pending and scheduling is retried until some node satisfies the conditions. If several nodes pass, the priorities stage follows and the nodes are ranked by priority.
A priority consists of key-value pairs, where the key is the name of the priority function and the value is its weight (its importance). The priority functions include:
LeastRequestedPriority: the weight is derived from CPU and memory utilisation; the lower the utilisation, the higher the weight. In other words, this priority favours nodes with a lower proportion of resources in use
BalancedResourceAllocation: the closer the CPU and memory utilisation on a node are to each other, the higher the weight. It should be used together with the previous one, not on its own
ImageLocalityPriority: favours nodes that already hold the images the Pod needs; the larger the total size of the images already present, the higher the weight
All priority functions and weights are combined to compute the final score for each node.
Custom schedulers. Besides the scheduler that ships with Kubernetes, you can also write your own. By setting spec.schedulerName you choose which scheduler schedules a given Pod. For example, the Pod below is scheduled by my-scheduler instead of the default default-scheduler:
apiVersion: v1
kind: Pod
metadata:
  name: annotation-second-scheduler
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: gcr.io/google_containers/pause:2.0
Scheduling affinity. Node affinity: pod.spec.nodeAffinity
requiredDuringSchedulingIgnoredDuringExecution: hard requirement
preferredDuringSchedulingIgnoredDuringExecution: soft preference
requiredDuringSchedulingIgnoredDuringExecution. The labels on the nodes can be listed with the following command.
1 2 3 4 5 6 [root@master ~]# kubectl get node --show-labels NAME STATUS ROLES AGE VERSION LABELS master Ready control-plane 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= node1 Ready <none> 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux node2 Ready <none> 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux [root@master ~]#
Build a hard affinity rule that keeps the Pod off node2, manifest affiPod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: 192.168.16.110:20080/stady/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - node2
Create the Pod several times and watch which node it lands on; it is never node2.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]# [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]# [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]#
Change operator: NotIn to operator: In in the manifest and try again; the Pod always lands on node2.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]# [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]# [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]# [root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]#
Change node2 to a node name that does not exist, node3; the Pod then stays Pending forever because no node satisfies the requirement.
[root@master affi]# kubectl delete -f affiPod1.yaml && kubectl apply -f affiPod1.yaml && kubectl get pod -o wide
pod "affinity" deleted
pod/affinity created
NAME       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
affinity   0/1     Pending   0          0s    <none>   <none>   <none>           <none>
[root@master affi]#
preferredDuringSchedulingIgnoredDuringExecution. Build a soft affinity rule that prefers node3 where possible, manifest affiPod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: 192.168.16.110:20080/stady/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - node3
Although there is no node3, the Pod is still created successfully; the best matching node turns out to be node1.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node1 <none> <none> [root@master affi]#
Change node3 to node2 in the manifest; if node2 has enough resources, the Pod is preferentially started on node2.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]# kubectl delete -f affiPod2.yaml && kubectl apply -f affiPod2.yaml && kubectl get pod -o wide pod "affinity" deleted pod/affinity created NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity 0/1 ContainerCreating 0 0s <none> node2 <none> <none> [root@master affi]#
Combining both policies:
apiVersion: v1
kind: Pod
metadata:
  name: affinity
  labels:
    app: node-affinity-pod
spec:
  containers:
  - name: with-node-affinity
    image: 192.168.16.110:20080/stady/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values:
            - node02
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: source
            operator: In
            values:
            - ttt
You can label a node with the following command and reference that label in the node affinity rules (changing or removing a label is sketched after the output).
kubectl label nodes <node-name> <key>=<value>
1 2 3 4 5 6 7 8 [root@master affi]# kubectl label nodes node1 source=ttt node/node1 labeled [root@master affi]# kubectl get node --show-labels NAME STATUS ROLES AGE VERSION LABELS master Ready control-plane 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers= node1 Ready <none> 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux,source=ttt node2 Ready <none> 46d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux [root@master affi]#
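A label set this way can later be changed or removed with the same command; a short sketch (the trailing dash removes the key, and the value uuu is an arbitrary example):
kubectl label --overwrite nodes node1 source=uuu   # change the value
kubectl label nodes node1 source-                  # remove the label again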
matchExpressions supports the following label operators (a short sketch using them follows this list):
In: the label's value is in the given list
NotIn: the label's value is not in the given list
Gt: the label's value is greater than the given value
Lt: the label's value is less than the given value
Exists: the label exists
DoesNotExist: the label does not exist
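A minimal sketch of how these operators appear in a nodeAffinity rule; the label keys gpu and disk-count and their values are hypothetical and only illustrate Exists and Gt:

```yaml
# hypothetical example: require that a "gpu" label exists,
# and prefer nodes whose "disk-count" label is greater than 2
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: gpu                # hypothetical label key
          operator: Exists        # matches any value; the values list must be omitted
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: disk-count         # hypothetical label key
          operator: Gt            # Gt/Lt compare against a single integer string
          values: ["2"]
```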
Pod 亲和性 pod.spec.affinity.podAffinity/podAntiAffinity
preferredDuringSchedulingIgnoredDuringExecution:软策略
requiredDuringSchedulingIgnoredDuringExecution:硬策略
构造配置文件 affiPod3.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 apiVersion: v1 kind: Pod metadata: name: affinity-pod-1 labels: app: pod-1 spec: containers: - name: with-node-affinity image: 192.168 .16 .110 :20080/stady/myapp:v1 --- apiVersion: v1 kind: Pod metadata: name: affinity-pod-2 labels: app: pod-2 spec: containers: - name: with-node-affinity image: 192.168 .16 .110 :20080/stady/myapp:v1 --- apiVersion: v1 kind: Pod metadata: name: affinity-pod-3 labels: app: pod-3 spec: containers: - name: pod-3 image: 192.168 .16 .110 :20080/stady/myapp:v1 affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - pod-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchExpressions: - key: app operator: In values: - pod-2 topologyKey: kubernetes.io/hostname
1 2 3 4 5 6 7 8 9 10 [root@master affi ] pod/affinity-pod-1 created pod/affinity-pod-2 created pod/affinity-pod-3 created [root@master affi ] NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES affinity-pod-1 1 /1 Running 0 7s 10.244 .1 .126 node1 <none> <none> affinity-pod-2 1 /1 Running 0 7s 10.244 .2 .94 node2 <none> <none> affinity-pod-3 1 /1 Running 0 7s 10.244 .1 .125 node1 <none> <none> [root@master affi ]
Swap the positions of affinity-pod-1 and affinity-pod-2 in the manifest. This time pod-1 is created on node2 by default.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 apiVersion: v1 kind: Pod metadata: name: affinity-pod-2 labels: app: pod-2 spec: containers: - name: with-node-affinity image: 192.168 .16 .110 :20080/stady/myapp:v1 --- apiVersion: v1 kind: Pod metadata: name: affinity-pod-1 labels: app: pod-1 spec: containers: - name: with-node-affinity image: 192.168 .16 .110 :20080/stady/myapp:v1 --- apiVersion: v1 kind: Pod metadata: name: affinity-pod-3 labels: app: pod-3 spec: containers: - name: pod-3 image: 192.168 .16 .110 :20080/stady/myapp:v1 affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - pod-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 podAffinityTerm: labelSelector: matchExpressions: - key: app operator: In values: - pod-2 topologyKey: kubernetes.io/hostname
pod3 因为存在亲和性策略也会创建到node2节点
1 2 3 4 5 6 7 8 9 10 11 12 [root@master affi ] pod/affinity-pod-2 created pod/affinity-pod-1 created pod/affinity-pod-3 created [root@master affi ] NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1 /1 Running 0 3s 10.244 .2 .96 node2 <none> <none> app=pod-1 affinity-pod-2 1 /1 Running 0 3s 10.244 .1 .129 node1 <none> <none> app=pod-2 affinity-pod-3 1 /1 Running 0 3s 10.244 .2 .97 node2 <none> <none> app=pod-3 nginx-dm-56996c5fdc-n88vt 1 /1 Running 0 3m35s 10.244 .2 .93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-z825r 1 /1 Running 0 3m35s 10.244 .1 .124 node1 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi ]
The affinity / anti-affinity scheduling policies compare as follows:

| Scheduling policy | Matches labels on | Operators | Topology-domain support |
| --- | --- | --- | --- |
| nodeAffinity | Node (host) | In, NotIn, Exists, DoesNotExist, Gt, Lt | No |
| podAffinity | Pod | In, NotIn, Exists, DoesNotExist | Yes |
| podAntiAffinity | Pod | In, NotIn, Exists, DoesNotExist | Yes |
污点与容忍 节点亲和性,是 pod 的一种属性(偏好或硬性要求),它使 pod 被吸引到一类特定的节点。Taint 则相反,它使节点能够排斥一类特定的pod
Taint 和 toleration 相互配合,可以用来避免 pod 被分配到不合适的节点上。每个节点上都可以应用一个或多个 taint ,这表示对于那些不能容忍这些 taint 的 pod,是不会被该节点接受的。如果将 toleration 应用于 pod 上,则表示这些 pod 可以(但不要求)被调度到具有匹配 taint 的节点上
污点(Taint) 污点 ( Taint ) 的组成 使用 kubectl taint 命令可以给某个 Node 节点设置污点,Node 被设置上污点之后就和 Pod 之间存在了一种相 斥的关系,可以让 Node 拒绝 Pod 的调度执行,甚至将 Node 已经存在的 Pod 驱逐出去
每个污点的组成如下:
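Written out, a taint conventionally takes the form:

```text
key=value:effect
```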
每个污点有一个 key 和 value 作为污点的标签,其中 value 可以为空,effect 描述污点的作用。当前 taint effect 支持如下三个选项:
NoSchedule :表示 k8s 将不会将 Pod 调度到具有该污点的 Node 上
PreferNoSchedule :表示 k8s 将尽量避免将 Pod 调度到具有该污点的 Node 上
NoExecute :表示 k8s 将不会将 Pod 调度到具有该污点的 Node 上,同时会将 Node 上已经存在的 Pod 驱逐出去
Setting, viewing and removing taints

```bash
# set a taint
kubectl taint nodes node1 key1=value1:NoSchedule
# look for the Taints field in the node description
kubectl describe node <node-name>
# remove the taint
kubectl taint nodes node1 key1:NoSchedule-
```
Looking at the master node, it carries the taint Taints: node-role.kubernetes.io/control-plane:NoSchedule. This taint is exactly why ordinary Pods are normally never scheduled onto the master node.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 [root@master affi]# kubectl describe node master Name: master Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=master kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"a2:80:27:18:b8:43"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 192.168.16.200 kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 09 Nov 2024 18:57:43 +0800 Taints: node-role.kubernetes.io/control-plane:NoSchedule Unschedulable: false Lease: HolderIdentity: master AcquireTime: <unset> RenewTime: Thu, 26 Dec 2024 22:08:50 +0800 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- NetworkUnavailable False Thu, 26 Dec 2024 19:45:59 +0800 Thu, 26 Dec 2024 19:45:59 +0800 FlannelIsUp Flannel is running on this node MemoryPressure False Thu, 26 Dec 2024 22:08:45 +0800 Sat, 09 Nov 2024 18:57:42 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Thu, 26 Dec 2024 22:08:45 +0800 Sat, 09 Nov 2024 18:57:42 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Thu, 26 Dec 2024 22:08:45 +0800 Sat, 09 Nov 2024 18:57:42 +0800 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Thu, 26 Dec 2024 22:08:45 +0800 Sat, 09 Nov 2024 19:39:36 +0800 KubeletReady kubelet is posting ready status Addresses:
List the Pods that currently exist, then put a NoExecute taint on node1.
1 kubectl taint nodes node1 check=MyCheck:NoExecute
affinity-pod-2, which had been running on node1, was evicted and deleted.
nginx-dm-56996c5fdc-z825r, which had also been running on node1, was evicted as well. Because that Pod is managed by a Deployment that maintains 2 replicas, a replacement Pod is created automatically, but it is no longer scheduled onto node1.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 [root@master affi]# kubectl taint nodes node1 check=MyCheck:NoExecute node/node1 tainted [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1/1 Running 0 22m 10.244.2.96 node2 <none> <none> app=pod-1 affinity-pod-3 1/1 Running 0 22m 10.244.2.97 node2 <none> <none> app=pod-3 nginx-dm-56996c5fdc-ddk2t 0/1 ContainerCreating 0 2s <none> node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 26m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1/1 Running 0 23m 10.244.2.96 node2 <none> <none> app=pod-1 affinity-pod-3 1/1 Running 0 23m 10.244.2.97 node2 <none> <none> app=pod-3 nginx-dm-56996c5fdc-ddk2t 1/1 Running 0 10s 10.244.2.98 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 26m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi]# [root@master affi]# kubectl describe node node1 | grep Taints Taints: check=MyCheck:NoExecute [root@master affi]#
Remove the taint, then delete one of the Deployment-managed Pods running on node2; its replacement is scheduled back onto node1.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [root@master affi]# kubectl taint nodes node1 check=MyCheck:NoExecute- node/node1 untainted [root@master affi]# kubectl describe node node1 | grep Taints Taints: <none> [root@master affi]# [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1/1 Running 0 32m 10.244.2.96 node2 <none> <none> app=pod-1 affinity-pod-3 1/1 Running 0 32m 10.244.2.97 node2 <none> <none> app=pod-3 nginx-dm-56996c5fdc-ddk2t 1/1 Running 0 9m25s 10.244.2.98 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 35m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi]# kubectl delete pod nginx-dm-56996c5fdc-ddk2t pod "nginx-dm-56996c5fdc-ddk2t" deleted [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1/1 Running 0 32m 10.244.2.96 node2 <none> <none> app=pod-1 affinity-pod-3 1/1 Running 0 32m 10.244.2.97 node2 <none> <none> app=pod-3 nginx-dm-56996c5fdc-n88vt 1/1 Running 0 36m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-nqthb 1/1 Running 0 2s 10.244.1.130 node1 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi]#
容忍(Tolerations) 设置了污点的 Node 将根据 taint 的 effect:NoSchedule、PreferNoSchedule、NoExecute 和 Pod 之间产生 互斥的关系,Pod 将在一定程度上不会被调度到 Node 上。 但我们可以在 Pod 上设置容忍 ( Toleration ) ,意思 是设置了容忍的 Pod 将可以容忍污点的存在,可以被调度到存在污点的 Node 上
pod.spec.tolerations
1 2 3 4 5 6 7 8 9 10 11 12 13 tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" tolerationSeconds: 3600 - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" - key: "key2" operator: "Exists" effect: "NoSchedule"
The key, value and effect must match the taint set on the Node.
When operator is Exists, the value field is ignored.
tolerationSeconds describes how long the Pod is allowed to keep running on the Node before it is evicted.
When the key is not specified, the toleration matches every taint key:

```yaml
tolerations:
- operator: "Exists"
```
When the effect is not specified, the toleration matches every taint effect:

```yaml
tolerations:
- key: "key"
  operator: "Exists"
```
验证 在node1节点打污点
1 kubectl taint nodes node1 check=MyCheck:NoExecute
pod启动设置污点的容忍策略 构造配置文件 affiPod4.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 apiVersion: v1 kind: Pod metadata: name: affinity-pod-1 labels: app: pod-1 spec: containers: - name: with-node-affinity image: 192.168 .16 .110 :20080/stady/myapp:v1 tolerations: - key: "check" operator: "Equal" value: "MyCheck" effect: "NoExecute" tolerationSeconds: 600
可以启动在node1节点
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 [root@master affi]# kubectl taint nodes node1 check=MyCheck:NoExecute node/node1 tainted [root@master affi]# kubectl describe node node1 | grep Taints Taints: check=MyCheck:NoExecute [root@master affi]# [root@master affi]# kubectl apply -f affiPod4.yaml pod/affinity-pod-1 created [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS affinity-pod-1 1/1 Running 0 3s 10.244.1.132 node1 <none> <none> app=pod-1 nginx-dm-56996c5fdc-kdmjj 1/1 Running 0 39s 10.244.2.102 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 55m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc [root@master affi]#
When multiple master nodes exist, to avoid wasting their resources you can set:

```bash
kubectl taint nodes Node-Name node-role.kubernetes.io/master=:PreferNoSchedule
```
This marks the Node-Name node so that the scheduler tries to avoid placing Pods on it, but Pods can still be scheduled there when necessary.
固定节点 指定node名称 Pod.spec.nodeName 将 Pod 直接调度到指定的 Node 节点上,会跳过 Scheduler 的调度策略,该匹配规则是强制匹配
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 apiVersion: apps/v1 kind: Deployment metadata: name: skip-scheduler spec: replicas: 5 selector: matchLabels: app: specify-node1 template: metadata: labels: app: specify-node1 spec: nodeName: node1 containers: - name: myapp-web image: 192.168 .16 .110 :20080/stady/myapp:v1 ports: - containerPort: 80
All 5 replicas run on node1.
1 2 3 4 5 6 7 8 9 10 11 12 [root@master affi]# kubectl apply -f affiPod5.yaml deployment.apps/skip-scheduler created [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS nginx-dm-56996c5fdc-kdmjj 1/1 Running 0 15m 10.244.2.102 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 71m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc skip-scheduler-849bbf848f-4z9fj 1/1 Running 0 3s 10.244.1.136 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-frcfz 1/1 Running 0 3s 10.244.1.137 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-jb7d5 1/1 Running 0 3s 10.244.1.134 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-vrj5m 1/1 Running 0 3s 10.244.1.133 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-xtgk8 1/1 Running 0 3s 10.244.1.135 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f [root@master affi]#
指定标签 Pod.spec.nodeSelector:通过 kubernetes 的 label-selector 机制选择节点,由调度器调度策略匹配 label,而后调度 Pod 到目标节点,该匹配规则属于强制约束
对node2节点打标签
1 kubectl label nodes node2 type=backEndNode2
构造配置文件 affiPod6.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 apiVersion: apps/v1 kind: Deployment metadata: name: node-selector-pod spec: replicas: 5 selector: matchLabels: app: specify-label template: metadata: labels: app: specify-label spec: nodeSelector: type: backEndNode2 containers: - name: myapp-web image: 192.168 .16 .110 :20080/stady/myapp:v1 ports: - containerPort: 80
5个pod副本只会在打了标签的节点上
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [root@master affi]# kubectl get node node2 --show-labels NAME STATUS ROLES AGE VERSION LABELS node2 Ready <none> 47d v1.28.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux,type=backEndNode2 [root@master affi]# [root@master affi]# kubectl apply -f affiPod6.yaml deployment.apps/node-selector-pod created [root@master affi]# kubectl get pod -o wide --show-labels NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS nginx-dm-56996c5fdc-kdmjj 1/1 Running 0 22m 10.244.2.102 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc nginx-dm-56996c5fdc-n88vt 1/1 Running 0 77m 10.244.2.93 node2 <none> <none> name=nginx,pod-template-hash=56996c5fdc node-selector-pod-75c6ff5d4b-bwhnd 1/1 Running 0 22s 10.244.2.106 node2 <none> <none> app=specify-label,pod-template-hash=75c6ff5d4b node-selector-pod-75c6ff5d4b-ttzrm 1/1 Running 0 22s 10.244.2.103 node2 <none> <none> app=specify-label,pod-template-hash=75c6ff5d4b node-selector-pod-75c6ff5d4b-vrxs4 1/1 Running 0 22s 10.244.2.104 node2 <none> <none> app=specify-label,pod-template-hash=75c6ff5d4b node-selector-pod-75c6ff5d4b-w2gf4 1/1 Running 0 22s 10.244.2.105 node2 <none> <none> app=specify-label,pod-template-hash=75c6ff5d4b node-selector-pod-75c6ff5d4b-xf5fn 1/1 Running 0 22s 10.244.2.107 node2 <none> <none> app=specify-label,pod-template-hash=75c6ff5d4b skip-scheduler-849bbf848f-4z9fj 1/1 Running 0 6m56s 10.244.1.136 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-frcfz 1/1 Running 0 6m56s 10.244.1.137 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-jb7d5 1/1 Running 0 6m56s 10.244.1.134 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-vrj5m 1/1 Running 0 6m56s 10.244.1.133 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f skip-scheduler-849bbf848f-xtgk8 1/1 Running 0 6m56s 10.244.1.135 node1 <none> <none> app=specify-node1,pod-template-hash=849bbf848f [root@master affi]#
集群安全 Kubernetes 作为一个分布式集群的管理工具,保证集群的安全性是其一个重要的任务。API Server 是集群内部各个组件通信的中介,也是外部控制的入口。所以 Kubernetes 的安全机制基本就是围绕保护 API Server 来设计的。Kubernetes 使用了认证(Authentication)、鉴权(Authorization)、准入控制(Admission Control)三步来保证API Server的安全
认证 Authentication
HTTP Token 认证:通过一个 Token 来识别合法用户
HTTP Token 的认证是用一个很长的特殊编码方式的并且难以被模仿的字符串 - Token 来表达客户的一种方式。Token 是一个很长的很复杂的字符串,每一个 Token 对应一个用户名存储在 API Server 能访问的文件中。当客户端发起 API 调用请求时,需要在 HTTP Header 里放入 Token
HTTP Base 认证:通过 用户名+密码 的方式认证
The string username:password is BASE64-encoded and placed in the Authorization header of the HTTP request sent to the server; the server decodes it to recover the username and password.
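A small sketch of what such a header looks like; the admin/password credentials are made up purely for illustration:

```bash
# BASE64-encode "username:password" (hypothetical credentials)
echo -n 'admin:password' | base64        # -> YWRtaW46cGFzc3dvcmQ=
# the client then sends it in the HTTP request header:
#   Authorization: Basic YWRtaW46cGFzc3dvcmQ=
```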
最严格的 HTTPS 证书认证:基于 CA 根证书签名的客户端身份认证方式
HTTPS 证书认证:
需要认证的节点:
两种类型
Kubernetes components accessing the API Server: kubectl, Controller Manager, Scheduler, kubelet, kube-proxy
Pods managed by Kubernetes accessing the API Server: Pods (the dashboard also runs as a Pod)
安全性说明
Controller Manager and Scheduler run on the same machine as the API Server, so they use its insecure port directly, --insecure-bind-address=127.0.0.1 (the green internal area in the referenced diagram)
kubectl, kubelet and kube-proxy all need certificates for two-way HTTPS authentication with the API Server (the blue lines in the referenced diagram)
证书颁发
Manual issuance: HTTPS certificates are signed with the cluster's root CA
Automatic issuance: the first time kubelet accesses the API Server it authenticates with a token; once that succeeds, the Controller Manager issues a certificate for the kubelet, and all later access uses that certificate
kubeconfig A kubeconfig file contains cluster parameters (CA certificate, API Server address), client parameters (the certificate and private key generated above), and cluster context information (cluster name, user name). Kubernetes components can switch between clusters by pointing to a different kubeconfig file at startup.
ServiceAccount Containers inside a Pod also need to access the API Server. Because Pods are created and destroyed dynamically, issuing certificates for them manually is not feasible, so Kubernetes uses Service Accounts to solve Pod authentication against the API Server (the black line from the blue application area to the API Server in the referenced diagram).
举例:查看 flannel pod内的 SA 信息
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@master ~]# kubectl get pod -A NAMESPACE NAME READY STATUS RESTARTS AGE ... ingress-nginx ingress-nginx-admission-patch-d9ghq 0/1 Completed 0 3d23h ingress-nginx ingress-nginx-controller-749f794b9-hd862 1/1 Running 0 24h kube-flannel kube-flannel-ds-b55zx 1/1 Running 1 (22h ago) 23h kube-flannel kube-flannel-ds-c22v9 1/1 Running 18 (22h ago) 48d kube-flannel kube-flannel-ds-vv4c8 1/1 Running 22 (22h ago) 48d ... [root@master ~]# kubectl exec -it kube-flannel-ds-b55zx -n kube-flannel -- ls -l /run/secrets/kubernetes.io/serviceaccount Defaulted container "kube-flannel" out of: kube-flannel, install-cni-plugin (init), install-cni (init) total 0 lrwxrwxrwx 1 root root 13 Dec 27 12:36 ca.crt -> ..data/ca.crt lrwxrwxrwx 1 root root 16 Dec 27 12:36 namespace -> ..data/namespace lrwxrwxrwx 1 root root 12 Dec 27 12:36 token -> ..data/token [root@master ~]#
Relationship between Secret and SA Kubernetes defines a resource object called Secret, which comes in two types: the service-account token used by ServiceAccounts, and Opaque secrets that hold user-defined confidential data. A ServiceAccount uses three pieces of data (token, ca.crt and namespace), described in the list below; a short usage sketch follows the list.
token: a JWT signed with the API Server's private key, used by the server to authenticate the caller when the API Server is accessed
ca.crt: the root certificate, used by the client to verify the certificate presented by the API Server
namespace: the namespace this service-account token is scoped to
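As a rough sketch of how these three files are used together from inside any Pod (whether the request is actually allowed still depends on the RBAC rules bound to that ServiceAccount):

```bash
# run inside a Pod; the ServiceAccount files are mounted at a fixed path
SA_DIR=/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat ${SA_DIR}/token)        # JWT presented to the API Server
NS=$(cat ${SA_DIR}/namespace)       # namespace the token is scoped to
# ca.crt verifies the API Server's certificate; the token authenticates the caller
curl --cacert ${SA_DIR}/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api/v1/namespaces/${NS}/pods
```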
1 2 kubectl get secret --all-namespaces kubectl describe secret default-token-5gm9r --namespace=kube-system
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 [root@master ~]# kubectl describe secret ingress-nginx-admission --namespace=ingress-nginx Name: ingress-nginx-admission Namespace: ingress-nginx Labels: <none> Annotations: <none> Type: Opaque Data ==== key: 227 bytes ca: 570 bytes cert: 660 bytes [root@master ~]# kubectl get secret ingress-nginx-admission --namespace=ingress-nginx -o jsonpath="{.data['key']}" | base64 --decode -----BEGIN EC PRIVATE KEY----- MHcCAQEEIIRmloegSsnHKnVG8EviOuLSJCFU+kW/ksKSk4XrpTLUoAoGCCqGSM49 AwEHoUQDQgAE9HtEfcJG1poY3ugmrtb2SQseYZ3eU98JwTK3w4VV8VJCBa86p+NQ Nxf/FcXZiCx0VlMU+x1fvx6WNmq8ZNx95g== -----END EC PRIVATE KEY----- [root@master ~]# kubectl get secret ingress-nginx-admission --namespace=ingress-nginx -o jsonpath="{.data['ca']}" | base64 --decode -----BEGIN CERTIFICATE----- MIIBdzCCARygAwIBAgIRAN5zGMUtB1SQ7U8nNI+T+i4wCgYIKoZIzj0EAwIwDzEN MAsGA1UEChMEbmlsMTAgFw0yNDEyMjMxNDM0MzVaGA8yMTI0MTEyOTE0MzQzNVow DzENMAsGA1UEChMEbmlsMTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABIdqaStn IxpYfxl08/Rtw/UBkfHJtod8yCGNAV2yTyLAUryGNPGtnqqG1sSw0lrtvNUl4gA+ ghDw7fP9UtqSPiCjVzBVMA4GA1UdDwEB/wQEAwICBDATBgNVHSUEDDAKBggrBgEF BQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBT/KmBFIkWJGImJfHW9gmlL DBjSYTAKBggqhkjOPQQDAgNJADBGAiEA/ceuvC+sfaKaJyo03VJ3MA1Esqu0UOs0 Bn3U/Axf6WMCIQDaqfgWhPtP4uJzJUXlbdtxp/rx56Ap9RBSfVuc+AtuRg== -----END CERTIFICATE----- [root@master ~]# kubectl get secret ingress-nginx-admission --namespace=ingress-nginx -o jsonpath="{.data['cert']}" | base64 --decode -----BEGIN CERTIFICATE----- MIIBujCCAWCgAwIBAgIQXqMj0ZvHF6ryLHVDvjVbqDAKBggqhkjOPQQDAjAPMQ0w CwYDVQQKEwRuaWwxMCAXDTI0MTIyMzE0MzQzNVoYDzIxMjQxMTI5MTQzNDM1WjAP MQ0wCwYDVQQKEwRuaWwyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE9HtEfcJG 1poY3ugmrtb2SQseYZ3eU98JwTK3w4VV8VJCBa86p+NQNxf/FcXZiCx0VlMU+x1f vx6WNmq8ZNx95qOBmzCBmDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYB BQUHAwEwDAYDVR0TAQH/BAIwADBjBgNVHREEXDBagiJpbmdyZXNzLW5naW54LWNv bnRyb2xsZXItYWRtaXNzaW9ugjRpbmdyZXNzLW5naW54LWNvbnRyb2xsZXItYWRt aXNzaW9uLmluZ3Jlc3Mtbmdpbnguc3ZjMAoGCCqGSM49BAMCA0gAMEUCIEaqMkzx Y3/X60/GB/D26qZMEEZZOfQifbS5kzYz+hs8AiEAgyItgq5Is5sCtZykp7Akjba6 xkQjRMcMSY67OhZU/Lo= -----END CERTIFICATE----- [root@master ~]#
默认情况下,每个 namespace 都会有一个 ServiceAccount,如果 Pod 在创建时没有指定 ServiceAccount, 就会使用 Pod 所属的 namespace 的 ServiceAccount
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 [root@master ~]# kubectl get ServiceAccount -A NAMESPACE NAME SECRETS AGE default default 0 48d ingress-nginx default 0 3d23h ingress-nginx ingress-nginx 0 3d23h ingress-nginx ingress-nginx-admission 0 3d23h kube-flannel default 0 48d kube-flannel flannel 0 48d kube-node-lease default 0 48d kube-public default 0 48d kube-system attachdetach-controller 0 48d kube-system bootstrap-signer 0 48d kube-system certificate-controller 0 48d kube-system clusterrole-aggregation-controller 0 48d kube-system coredns 0 48d kube-system cronjob-controller 0 48d kube-system daemon-set-controller 0 48d kube-system default 0 48d kube-system deployment-controller 0 48d kube-system disruption-controller 0 48d kube-system endpoint-controller 0 48d kube-system endpointslice-controller 0 48d kube-system endpointslicemirroring-controller 0 48d kube-system ephemeral-volume-controller 0 48d kube-system expand-controller 0 48d kube-system generic-garbage-collector 0 48d kube-system horizontal-pod-autoscaler 0 48d kube-system job-controller 0 48d kube-system kube-proxy 0 48d kube-system namespace-controller 0 48d kube-system node-controller 0 48d kube-system persistent-volume-binder 0 48d kube-system pod-garbage-collector 0 48d kube-system pv-protection-controller 0 48d kube-system pvc-protection-controller 0 48d kube-system replicaset-controller 0 48d kube-system replication-controller 0 48d kube-system resourcequota-controller 0 48d kube-system root-ca-cert-publisher 0 48d kube-system service-account-controller 0 48d kube-system service-controller 0 48d kube-system statefulset-controller 0 48d kube-system token-cleaner 0 48d kube-system ttl-after-finished-controller 0 48d kube-system ttl-controller 0 48d
总结
鉴权 Authorization
The authentication step above only establishes that both parties trust each other and may communicate; authorization then decides which resources the requester is allowed to act on. The API Server currently supports the following authorization modes (set via the API Server startup flag --authorization-mode):
AlwaysDeny:表示拒绝所有的请求,一般用于测试
AlwaysAllow:允许接收所有请求,如果集群不需要授权流程,则可以采用该策略
ABAC(Attribute-Based Access Control):基于属性的访问控制,表示使用用户本地文件配置策略的授权规则对用户请求进行匹配和控制.ABAC 模式在 Kubernetes v1.22 中被标记为废弃,并在 v1.25 中被移除。
Webhook: authorizes users by calling an external REST service
RBAC(Role-Based Access Control):基于角色的访问控制,现行默认规则
Node:节点授权是一种特殊用途的授权模式,专门对 kubelet 发出的 API 请求执行授权。
RBAC 授权模式 RBAC(Role-Based Access Control)基于角色的访问控制,在 Kubernetes 1.5 中引入,现行版本成为默认标准。相对其它访问控制方式,拥有以下优势:
对集群中的资源和非资源均拥有完整的覆盖
整个 RBAC 完全由几个 API 对象完成,同其它 API 对象一样,可以用 kubectl 或 API 进行操作
可以在运行时进行调整,无需重启 API Server
RBAC 的 API 资源对象说明 RBAC 引入了 4 个新的顶级资源对象:Role、ClusterRole、RoleBinding、ClusterRoleBinding,4 种对象类型均可以通过 kubectl 与 API 操作
Note that Kubernetes does not provide user management itself, so where do the users named by User, Group and ServiceAccount come from?
Kubernetes components (kubectl, kube-proxy) and other custom users submit a certificate signing request when applying to the CA for a certificate; the API Server then takes the CN field of the client certificate as the User and the names.O field as the Group.
证书内容示例
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 { "CN" : "admin" , "hosts" : [ ] , "key" : { "algo" : "rsa" , "size" : 2048 } , "names" : [ { "C" : "CN" , "ST" : "HangZhou" , "L" : "XS" , "O" : "system:masters" , "OU" : "System" } ] }
When kubelet uses TLS Bootstrapping, the API Server can validate its token against Bootstrap Tokens or a token authentication file; either way, Kubernetes binds a default User and Group to the token.
Bootstrap Token Secret的示例
1 2 3 4 5 6 7 8 9 10 11 12 13 14 apiVersion: v1 kind: Secret metadata: name: bootstrap-token-07401b namespace: kube-system type: bootstrap.kubernetes.io/token stringData: description: "The default bootstrap token generated by 'kubeadm init'." token-id: 07401b token-secret: f395accd246ae52d expiration: 2017-03-10T03:22:11Z usage-bootstrap-authentication: "true" usage-bootstrap-signing: "true" auth-extra-groups: system:bootstrappers:worker,system:bootstrappers:ingress
在这个样例中,token-id 是 07401b,token-secret 是 f395accd246ae52d,并且这个 Token 被分配到了 system:bootstrappers:worker 和 system:bootstrappers:ingress 组
Pod使用 ServiceAccount 认证时,service-account-token 中的 JWT 会保存 User 信息
有了用户信息,再创建一对角色/角色绑定(集群角色/集群角色绑定)资源对象,就可以完成权限绑定了
Role and ClusterRole 在 RBAC API 中,Role 表示一组规则权限,权限只会增加(累加权限),不存在一个资源一开始就有很多权限而通过 RBAC 对其进行减少的操作; Role 可以定义在一个 namespace 中,如果想要跨 namespace 则可以创建 ClusterRole.
Role 构造配置文件 3aRole.yaml
1 2 3 4 5 6 7 8 9 kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: pod-reader rules: - apiGroups: ["" ] resources: ["pods" ] verbs: ["get" , "watch" , "list" ]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@master 3a]# kubectl apply -f 3aRole.yaml role.rbac.authorization.k8s.io/pod-reader created [root@master 3a]# kubectl get role NAME CREATED AT pod-reader 2024-12-27T16:13:35Z [root@master 3a]# kubectl describe role pod-reader Name: pod-reader Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- pods [] [] [get watch list] [root@master 3a]#
The available apiGroups and resources can be listed with kubectl api-resources (example invocations follow the column descriptions below); the columns in its output are:
NAME:资源的名称,例如 pods、services、deployments 等。
SHORTNAMES:资源的缩写名称,用于在 kubectl 命令中作为资源名称的简写形式。例如,pods 的简写可以是 po,services 的简写可以是 svc。
APIVERSION:资源所属的 API 版本和组,例如 v1、apps/v1、batch/v1 等。这个字段显示了资源的 API 路径和版本信息,其中包含了组信息,如 apps/v1 中的 apps 就是 API 组名
NAMESPACED:指示资源是否属于命名空间。值为 true 表示资源是命名空间级别的,即它们存在于特定的命名空间中;值为 false 表示资源是集群级别的,即它们不属于任何特定的命名空间,而是在整个 Kubernetes 集群中全局存在的。
KIND:资源的类别,对应于 Go 语言中定义的资源类型。例如,Pod 的 KIND 就是 Pod,Service 的 KIND 就是 Service。
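Example invocations, using standard kubectl flags:

```bash
# list every resource kind with its group/version, scope and short names
kubectl api-resources -o wide
# narrow the listing to a single API group, e.g. apps
kubectl api-resources --api-group=apps
# list all available group/version pairs
kubectl api-versions
```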
apiGroups 的信息可以通过 kubectl api-versions 命令查看所有可用的 API 组和版本 这些 API 组分为核心组(Core Group)和命名组(Named Groups):
核心组(Core Group):也称为 legacy 组,通过 /api/v1 访问,包括一些基本资源,如 Pods、Services 和 Namespaces
命名组(Named Groups):覆盖核心组之外的特定领域,如 apps 用于应用程序相关的资源,batch 用于批处理任务,extensions 用于额外的特性
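The two kinds of groups map to different URL prefixes on the API Server, for example:

```text
# core ("" group) resources live under /api/v1
GET /api/v1/namespaces/default/pods
# named-group resources live under /apis/<group>/<version>
GET /apis/apps/v1/namespaces/default/deployments
```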
在 Kubernetes 中,verbs 字段定义了可以对资源执行的操作。以下是 verbs 字段可选值的列表:
1 2 3 4 5 6 7 8 9 10 create:创建资源。 delete:删除资源。 deletecollection:删除资源集合。 get:读取单个资源。 list:列出资源。 patch:部分更新资源。 update:更新资源。 watch:监控资源变化。 proxy:为资源创建代理。 connect:连接到资源(用于 WebSocket 等)。
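A sketch that combines a named apiGroup with several of these verbs; the role name deployment-editor is hypothetical:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: deployment-editor        # hypothetical name
rules:
- apiGroups: ["apps"]            # Deployments live in the apps group, not the core group
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
```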
ClusterRole ClusterRole 具有与 Role 相同的权限角色控制能力,不同的是 ClusterRole 是集群级别的,ClusterRole 可以用 于:
集群级别的资源控制( 例如 node 访问权限 )
非资源型 endpoints( 例如 /healthz 访问 )
所有命名空间资源控制(例如 pods )
构造配置文件 3aClusterRole.yaml
1 2 3 4 5 6 7 8 9 kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: secret-reader rules: - apiGroups: ["" ] resources: ["secrets" ] verbs: ["get" , "watch" , "list" ]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [root@master 3a]# kubectl apply -f 3aClusterRole.yaml clusterrole.rbac.authorization.k8s.io/secret-reader created [root@master 3a]# kubectl get clusterroles NAME CREATED AT secret-reader 2024-12-27T16:16:25Z ... ... [root@master 3a]# kubectl describe clusterroles secret-reader Name: secret-reader Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- secrets [] [] [get watch list] [root@master 3a]#
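The ClusterRole above still targets namespaced secrets; the non-resource endpoints mentioned earlier (such as /healthz) are instead granted through nonResourceURLs, which can only appear in a ClusterRole. A sketch with a hypothetical role name:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: healthz-reader                          # hypothetical name
rules:
- nonResourceURLs: ["/healthz", "/healthz/*"]   # non-resource endpoints, no apiGroup/resources
  verbs: ["get"]
```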
RoleBinding and ClusterRoleBinding RoleBinding A RoleBinding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts) plus a reference to the Role being bound. A RoleBinding grants permissions within a single namespace, while a ClusterRoleBinding grants them across the whole cluster.
将 default 命名空间的 pod-reader Role 授予 jane 用户,此后 jane 用户在 default 命名空间中将具有 pod reader 的权限
构造配置文件 3aRoleBinding.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-pods namespace: default subjects: - kind: User name: jane apiGroup: rbac.authorization.k8s.io roleRef: kind: Role name: pod-reader apiGroup: rbac.authorization.k8s.io
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@master 3a]# kubectl apply -f 3aRoleBinding1.yaml rolebinding.rbac.authorization.k8s.io/read-pods created [root@master 3a]# kubectl get rolebindings NAME ROLE AGE read-pods Role/pod-reader 10s [root@master 3a]# kubectl describe rolebindings read-pods Name: read-pods Labels: <none> Annotations: <none> Role: Kind: Role Name: pod-reader Subjects: Kind Name Namespace ---- ---- --------- User jane [root@master 3a]#
RoleBinding 同样可以引用 ClusterRole 来对当前 namespace 内用户、用户组或 ServiceAccount 进行授权, 这种操作允许集群管理员在整个集群内定义一些通用的 ClusterRole,然后在不同的 namespace 中使用 RoleBinding 来引用
For example, the following RoleBinding references a ClusterRole that can access secrets across the whole cluster, but because the RoleBinding is defined in the test-ns namespace, the bound user dave can only access secrets in test-ns.
提前准备好测试命名空间.
1 2 [root@master 3a]# kubectl create ns test-ns namespace/test-ns created
构造配置文件 3aRoleBinding2.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-secrets namespace: test-ns subjects: - kind: User name: dave apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: secret-reader apiGroup: rbac.authorization.k8s.io
1 2 3 4 5 [root@master 3a]# kubectl apply -f 3aRoleBinding2.yaml rolebinding.rbac.authorization.k8s.io/read-secrets created [root@master 3a]# kubectl get rolebindings -n test-ns NAME ROLE AGE read-secrets ClusterRole/secret-reader 2m6s
ClusterRoleBinding 使用 ClusterRoleBinding 可以对整个集群中的所有命名空间资源权限进行授权;以下 ClusterRoleBinding 样例 展示了授权 manager 组内所有用户在全部命名空间中对 secrets 进行访问
构造配置文件 3aClusterRoleBinding.yaml
1 2 3 4 5 6 7 8 9 10 11 12 13 kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: read-secrets-global subjects: - kind: Group name: manager apiGroup: rbac.authorization.k8s.io roleRef: kind: ClusterRole name: secret-reader apiGroup: rbac.authorization.k8s.io
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 [root@master 3a]# kubectl apply -f 3aClusterRoleBinding.yaml clusterrolebinding.rbac.authorization.k8s.io/read-secrets-global created [root@master 3a]# [root@master 3a]# kubectl get clusterrolebindings read-secrets-global NAME ROLE AGE read-secrets-global ClusterRole/secret-reader 63s [root@master 3a]# kubectl describe clusterrolebindings read-secrets-global Name: read-secrets-global Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: secret-reader Subjects: Kind Name Namespace ---- ---- --------- Group manager [root@master 3a]#
Resources Kubernetes 集群内一些资源一般以其名称字符串来表示,这些字符串一般会在 API 的 URL 地址中出现;同时某些 资源也会包含子资源,例如 logs 资源就属于 pods 的子资源,API 中 URL 样例如下
1 GET /api/v1/namespaces/{namespace}/pods/{name}/log
To control access to such sub-resources in the RBAC authorization model, use the / separator. Below is a sample Role granting access to the logs sub-resource of pods.
构造配置文件 3aResources.yaml
1 2 3 4 5 6 7 8 9 kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: default name: pod-and-pod-logs-reader rules: - apiGroups: ["" ] resources: ["pods/log" ] verbs: ["get" , "list" ]
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@master 3a]# kubectl apply -f 3aResources.yaml role.rbac.authorization.k8s.io/pod-and-pod-logs-reader created [root@master 3a]# [root@master 3a]# kubectl get Role NAME CREATED AT pod-and-pod-logs-reader 2024-12-27T16:32:09Z pod-reader 2024-12-27T16:13:35Z [root@master 3a]# kubectl describe Role pod-reader Name: pod-reader Labels: <none> Annotations: <none> PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- pods [] [] [get watch list] [root@master 3a]#
Subjects RoleBinding and ClusterRoleBinding bind a Role to subjects; a subject can be a group, a user, or a service account.
Subjects 中 Users 使用字符串表示,它可以是一个普通的名字字符串,如 “alice”;也可以是 email 格式的邮箱 地址,如 “wangyanglinux@163.com ”;甚至是一组字符串形式的数字 ID 。但是 Users 的前缀 system: 是系统 保留的,集群管理员应该确保普通用户不会使用这个前缀格式
Groups 书写格式与 Users 相同,都为一个字符串,并且没有特定的格式要求;同样 system: 前缀为系统保留
Hands-on: create a user that can only manage the test-ns namespace. First download the certificate generation tools (from the official site):
1 2 3 4 5 wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
在内网服务器下载
1 2 3 4 5 wget http://192.168.16.110:9080/other/cfssl_linux-amd64 wget http://192.168.16.110:9080/other/cfssljson_linux-amd64 wget http://192.168.16.110:9080/other/cfssl-certinfo_linux-amd64
增加执行权限并放置在PATH相关的目录下
1 2 3 4 chmod +x cfssl* mv cfssl_linux-amd64 /usr/local/bin/cfssl mv cfssljson_linux-amd64 /usr/local/bin/cfssljson mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
创建证书所需要的json 创建文件 devuser-csr.json
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 { "CN" : "devuser" , "hosts" : [ ] , "key" : { "algo" : "rsa" , "size" : 2048 } , "names" : [ { "C" : "CN" , "ST" : "BeiJing" , "L" : "BeiJing" , "O" : "k8s" , "OU" : "System" } ] }
根据配置文件创建证书 1 cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -profile=kubernetes ./devuser-csr.json | cfssljson -bare devuser
生成三个文件 分别是 devuser.csr devuser-key.pem devuser.pem
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 [root@master 3a]# cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -profile=kubernetes ./devuser-csr.json | cfssljson -bare devuser 2024/12/28 21:58:37 [INFO] generate received request 2024/12/28 21:58:37 [INFO] received CSR 2024/12/28 21:58:37 [INFO] generating key: rsa-2048 2024/12/28 21:58:38 [INFO] encoded CSR 2024/12/28 21:58:38 [INFO] signed certificate with serial number 42447384161711716719600493441407005234353879856 2024/12/28 21:58:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). [root@master 3a]# ls -l 总用量 40 -rw-r--r-- 1 root root 267 12月 28 00:26 3aClusterRoleBinding.yaml -rw-r--r-- 1 root root 237 12月 28 00:16 3aClusterRole.yaml -rw-r--r-- 1 root root 191 12月 28 00:32 3aResources.yaml -rw-r--r-- 1 root root 261 12月 28 00:19 3aRoleBinding1.yaml -rw-r--r-- 1 root root 341 12月 28 00:22 3aRoleBinding2.yaml -rw-r--r-- 1 root root 217 12月 28 00:13 3aRole.yaml -rw-r--r-- 1 root root 997 12月 28 21:58 devuser.csr -rw-r--r-- 1 root root 220 12月 28 21:51 devuser-csr.json -rw------- 1 root root 1679 12月 28 21:58 devuser-key.pem -rw-r--r-- 1 root root 1281 12月 28 21:58 devuser.pem [root@master 3a]#
设置集群参数 1 2 3 4 5 6 7 8 export KUBE_APISERVER="https://192.168.16.200:6443" kubectl config set-cluster kubernetes \ --certificate-authority=/etc/kubernetes/pki/ca.crt \ --embed-certs=true \ --server=${KUBE_APISERVER} \ --kubeconfig=devuser.kubeconfig
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 [root@master 3a]# export KUBE_APISERVER="https://192.168.16.200:6443" [root@master 3a]# kubectl config set-cluster kubernetes \ > --certificate-authority=/etc/kubernetes/pki/ca.crt \ > --embed-certs=true \ > --server=${KUBE_APISERVER} \ > --kubeconfig=devuser.kubeconfig Cluster "kubernetes" set. [root@master 3a]# ls -l 总用量 44 -rw-r--r-- 1 root root 267 12月 28 00:26 3aClusterRoleBinding.yaml -rw-r--r-- 1 root root 237 12月 28 00:16 3aClusterRole.yaml -rw-r--r-- 1 root root 191 12月 28 00:32 3aResources.yaml -rw-r--r-- 1 root root 261 12月 28 00:19 3aRoleBinding1.yaml -rw-r--r-- 1 root root 341 12月 28 00:22 3aRoleBinding2.yaml -rw-r--r-- 1 root root 217 12月 28 00:13 3aRole.yaml -rw-r--r-- 1 root root 997 12月 28 21:58 devuser.csr -rw-r--r-- 1 root root 220 12月 28 21:51 devuser-csr.json -rw------- 1 root root 1679 12月 28 21:58 devuser-key.pem -rw------- 1 root root 1680 12月 28 22:10 devuser.kubeconfig -rw-r--r-- 1 root root 1281 12月 28 21:58 devuser.pem [root@master 3a]# cat devuser.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQmJjYlRONDRGMGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1Ea3hNRFV5TWpkYUZ3MHpOREV4TURjeE1EVTNNamRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNhMUE2aklIK2o1cE5jbFFDck9BbGJwbmpxV3ZSUWxUQlBocmE1VEdQNEdSUkcwYkFmOXRKQmV1ZW8KbnhscG9GM2NOcjJaQUJteVlNNTJaQnlHM3dZckNIdzdrcitmNC90aVJPeEZhdC9vd2syK01PYjdDaStIWVJ4RgoyVlRYM0IrcU1uaTU1bzM3RlIwcVVYb2xjYVEyMlBrTml4SGZtdlp4RVVmcHVtTXFtbTFrdVVFdTJxS0tjbFpwCmNSNHRMQmdYR1NqQkNzdFNhTXd0THhhOXNUM0JMcnJCK1JFWWpWWHQ2eWcwQzZvdCtEZVp2dElTZHQyMXVPazkKOVNJMXJHa0NMd3hrZGI3UnlqV20xS3liaERFdW05aFRUT1pUbEdKY090ZFhYT3pTU3k5TXpEbkVRWEN3MngveApWazUzTk5Cb2lrSUlPZ2h3amlUV2RocVZvWURQQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUbXJGdWIyS3dKTnIyRXBBOVhQajQ2bWhnejZUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUdQVzRlMUVFUAoxY2N6YlA2OWFwL1VrMGxLZmx0dXA3NEVyRGdhV1J1QXFxb1RBOTdNdnc0Qm53QWVYUkM0M1ZHSG1VWHM1TTdECi9DWUd1SWxXRVlhRGhtMmxzZzVnYVl3dDEyeTJjVDh6YkgrRno4ODBQQlNrbENEUmVuZUlReXNrc3hkdFFhNjUKcG9FVWFZalJOVzFWN1lxcUEweEpKQUtrWXZXVDhZM09ua2QwQkx1WWc2RjcyaXlMQ2xERmFINWFmSnZnOXh3cApBTDVYdHJuOERUajhwU0NFc2tFalh6UE14YnhISkpheTVwbUNLZTF5TDJNalRjZDFmcGYzUmN2Vks0RXd6UzlqCkliTXE4bktaV1pTYWlNd3pmdlVLa2RWSjY4SWY0aUJuQlVseEhJLzUzQ29SN2hGTFJYYzVJeEwyWG0vckJNOXAKaTdoL1BTdHBFelp2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.16.200:6443 name: kubernetes contexts: null current-context: "" kind: Config preferences: {} users: null [root@master 3a]#
设置客户端认证参数 将刚才创建的证书/私钥 设置为 客户端的证书/私钥
1 2 3 4 5 kubectl config set-credentials devuser \ --client-certificate=./devuser.pem \ --client-key=./devuser-key.pem \ --embed-certs=true \ --kubeconfig=devuser.kubeconfig
可以看到刚才空置的参数users 中增加了用户 devuser 的 证书/私钥 信息
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 [root@master 3a]# kubectl config set-credentials devuser \ > --client-certificate=./devuser.pem \ > --client-key=./devuser-key.pem \ > --embed-certs=true \ > --kubeconfig=devuser.kubeconfig User "devuser" set. [root@master 3a]# cat devuser.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQmJjYlRONDRGMGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1Ea3hNRFV5TWpkYUZ3MHpOREV4TURjeE1EVTNNamRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNhMUE2aklIK2o1cE5jbFFDck9BbGJwbmpxV3ZSUWxUQlBocmE1VEdQNEdSUkcwYkFmOXRKQmV1ZW8KbnhscG9GM2NOcjJaQUJteVlNNTJaQnlHM3dZckNIdzdrcitmNC90aVJPeEZhdC9vd2syK01PYjdDaStIWVJ4RgoyVlRYM0IrcU1uaTU1bzM3RlIwcVVYb2xjYVEyMlBrTml4SGZtdlp4RVVmcHVtTXFtbTFrdVVFdTJxS0tjbFpwCmNSNHRMQmdYR1NqQkNzdFNhTXd0THhhOXNUM0JMcnJCK1JFWWpWWHQ2eWcwQzZvdCtEZVp2dElTZHQyMXVPazkKOVNJMXJHa0NMd3hrZGI3UnlqV20xS3liaERFdW05aFRUT1pUbEdKY090ZFhYT3pTU3k5TXpEbkVRWEN3MngveApWazUzTk5Cb2lrSUlPZ2h3amlUV2RocVZvWURQQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUbXJGdWIyS3dKTnIyRXBBOVhQajQ2bWhnejZUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUdQVzRlMUVFUAoxY2N6YlA2OWFwL1VrMGxLZmx0dXA3NEVyRGdhV1J1QXFxb1RBOTdNdnc0Qm53QWVYUkM0M1ZHSG1VWHM1TTdECi9DWUd1SWxXRVlhRGhtMmxzZzVnYVl3dDEyeTJjVDh6YkgrRno4ODBQQlNrbENEUmVuZUlReXNrc3hkdFFhNjUKcG9FVWFZalJOVzFWN1lxcUEweEpKQUtrWXZXVDhZM09ua2QwQkx1WWc2RjcyaXlMQ2xERmFINWFmSnZnOXh3cApBTDVYdHJuOERUajhwU0NFc2tFalh6UE14YnhISkpheTVwbUNLZTF5TDJNalRjZDFmcGYzUmN2Vks0RXd6UzlqCkliTXE4bktaV1pTYWlNd3pmdlVLa2RWSjY4SWY0aUJuQlVseEhJLzUzQ29SN2hGTFJYYzVJeEwyWG0vckJNOXAKaTdoL1BTdHBFelp2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.16.200:6443 name: kubernetes contexts: null current-context: "" kind: Config preferences: {} users: - name: devuser user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURoRENDQW15Z0F3SUJBZ0lVQjI5b0Vya2k2SU5ZaHFpY0FpNDdwYlRIT3pBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU5ERXlNamd4TXpVME1EQmFGdzB5TlRFeQpNamd4TXpVME1EQmFNR0l4Q3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xLYVc1bk1SQXdEZ1lEClZRUUhFd2RDWldsS2FXNW5NUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFUU1BNEcKQTFVRUF4TUhaR1YyZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUw2TQpUcWVibUNjaDM5bXlLbFFNL0drdVQ2ZjRqSk5WUTVNdkNpazI4aTJZaWgxaG5NaHBHeXpxUjFVNEQrRDNpenJLCnBaWXhFbm8wdnRrdzlrZDV3bWtXZ0FXSk5IMzJiZjUxYm5Dd2dqaG0rU0FwWnBNQmNkVU5Wc0hjSGIxdHBWMVgKS3JRUFc0QmovaVZteWttQytIbXVNTjZmaXArRTlkdWNyeFFnck9WdUI1UDBCc21XSE1tNlUxak4wS0YyMTd1RAoxcHZFcmZhZHAwK285czdNV2p4Y0dXdTMrZjMxendwY2d0TDJsMWdBS0VDdHF6b3RYdDB0VzN0ZzFRQ3k5MkpJCk5HampJdGRLdDhzRU1sR0pyd2Z4WExEbndTa3E2VDlKc3JZVkZLMFRpWXVVdUZFdXBaa2c1RmRidXFwdi9vTVAKRGhhVjAwZ1Brb3NFajRkYzlNVUNBd0VBQWFOL01IMHdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRVwpNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjBHQTFVZERnUVdCQlI5ClM5NjF0cWxoQ2EvWTVhZHpnK3FmcEhiRlV6QWZCZ05WSFNNRUdEQVdnQlRtckZ1YjJLd0pOcjJFcEE5WFBqNDYKbWhnejZUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHb01RdURjT0szMGVRWmdPTStCSWMreEp6Tm1BSUVuVwpsVi9MSXBUcW5oWW1RcTFVS3pQTUJ0RGtoeklUaE83ODJvK2grT0Y3c3NHTEtRa0tBNUJuSCt2NUdOVmhwWTl0CnFmS0M3eGFtcXg3dHNHQ1ZCSDU2cjE3eHVHbmJSWDZnS1Rab2RKenJBTWZxbEkxRTNha05OMTVDdnN2R1pZWVgKVXFvaVkzb1c1TFMwNHdueWNzbUhDalRleW13UnhhVm1UazVoWGtxMVZsUEgyNVg5a3dmV2tuNEZtNjhBYXV3aApZRzZXa3NweFNrU2RBcGRyM2N0ektSSDFkQlZxb1lLSFN4amViOHV0MFBhaFh1bEZzNEpqeHh3QVk2MjQrdlhrCmM3UVcxY3YxeGZGVTZzaXU5SDdCOFRUM3NaeXNKbkorRERZN3JoRDJONks4RWhTYkY3TTR6Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdm94T3A1dVlKeUhmMmJJcVZBejhhUzVQcC9pTWsxVkRreThLS1RieUxaaUtIV0djCnlHa2JMT3BIVlRnUDRQZUxPc3FsbGpFU2VqUysyVEQyUjNuQ2FSYUFCWWswZmZadC9uVnVjTENDT0diNUlDbG0Ka3dGeDFRMVd3ZHdkdlcybFhWY3F0QTliZ0dQK0pXYktTWUw0ZWE0dzNwK0tuNFQxMjV5dkZDQ3M1VzRIay9RRwp5WlljeWJwVFdNM1FvWGJYdTRQV204U3Q5cDJuVDZqMnpzeGFQRndaYTdmNS9mWFBDbHlDMHZhWFdBQW9RSzJyCk9pMWUzUzFiZTJEVkFMTDNZa2cwYU9NaTEwcTN5d1F5VVltdkIvRmNzT2ZCS1NycFAwbXl0aFVVclJPSmk1UzQKVVM2bG1TRGtWMXU2cW0vK2d3OE9GcFhUU0ErU2l3U1BoMXoweFFJREFRQUJBb0lCQVFDMmMySmMybjgxK1JsKwpPVHFPZ0dDdjFjZ3Y3YTJzNVZkdTl2dWp1eGpvejhadm02ZWp2Z0JuWVd3c0RTSW5KdUFKeTBBQ0w3cWhpUiswCmwwMDU0enhqbzBleUJVNWR6amhFRGUxUnViRDJrS0s2U09vT21MT0diTjlGZ0o1NVl5T1QzSUxuSmsxWEFtZTMKS0ZWSlRqN2RSQTFISFR4K3diRW9OejdzNXR5bVVLMTA4MzVTRFdzT2RiNm03YXY2SjQ4MmVBb0ZjRWFuM3VrYwo1SEZPaHNZMFRVSmQySVF1c0NNQ1pZMVB1OXdXVDE5NGxueGJpM0JMNmRFQjNobjlOMGhiK3cyRW8yc3FkeEY2CkNYczVocUtIOVdDdzlRUXc4a3lLOC9EMHhsZ0orNitwV0hSSVZpZktxeGdhMktST2hxUHlUN0R5ZFduZUJabDEKbEdOMlRqZ0JBb0dCQU9UeTIrU3ZWN0JlVi8wS3NMQ1NlQ1FkMGdwMDljOUhCUVM1QmR1M3dma3FnUStvZHo3dApoU1JTdmNrRDZjcG1xS0ZvdUtrMUI2eW4yL080aWpvWU5YdjV1d21UK0phVTEreVgxNUk1RkpSSk9CSEhsTkhPCk9ESXZheTRHSnRuSFdZWkhNT0dZN1QyNkpHanpOdFNBcjFlSmIvNmtudnhhQy81L2lpcHlWYUhWQW9HQkFOVVAKN0hQTkxJenhNSzBoclkvbmZZM0VDNXhoMW9YUS9rTEkrUEpwdFU4MGdGalQrOHRhWUdneHBUQmpUL3U4Q0ZQRQppL2E0T1Z3Mnh5dUlLV05RbUZvS2FnTUVQRFI1NjlOOEszNklkVUJJV2hidHJJaVhiZlhkbTI4OS8rNlhsSEVKCkZ0T1ZXM0wyekk5NGtreStJMlppUFNOSGdzeXEreUNwZ3l0M3g0OHhBb0dCQUlkTnk0eUQzNVBZdmJGS3Z3OHIKRUp0dmtERWozQjFxZ0ZuQkt1Z2wyaG54OTZJVVVweTY3R09DRHEwY2hlOWE2aSt4M3VnSThnY2trTVdoZXZkSQpWVnQyUkFZdUQ4eVdIR0d6ZnUvb2tmUHNyWms4VlFRRkZvcjZJU0pxK2t6Y0ZsbFgrMWhuODFUMmpBd0dLSkkvCmx1QnAxZWtzeXRTaU50SnA5M0tNYlhVZEFvR0FWRDVrbHZFa1VXSTRoZXhRRFJ0UitKRHdxbGZCRTg0c0Nzb2UKTFBPQkhoMDdObVF6SmhmSkVNbTRjQ2FFaEp1M2l5K215OW5SekZWW
WNTejRlRzF3b0FHSUkwTTBidWRhU0pmTApOcy9MMUt3Ryt4UGs2V2src0QxOGJRTE54RkFwQUh6QWlzNStoemx3YnJZVTJzVS9pQWNGOTRJYUJNVUNZTXJGCnM1VTcwYkVDZ1lCSXBLODJvMGtrbUg4RGhOMDRnU3BPR0p4R0ZKQWtxbWFtZDRWKzNkU3hLb09TUXJDa1huWDEKcXNDYnZtdXlLNm8zb1FJNno0QnA0RG9hbDJNRWtKajRnb3RYY21GTlZBMTBQOEhCUnpjdnNPSWk2cUF3WlNXawp3MmR3TkRsQklFaDlQakFhRTBLQWNQK01LbVozWUNKbTArZ0ZCQUZ4eCtJbVNwR0QzQlNRa0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= [root@master 3a]#
设置指定的上下文参数 增加 context 上下文信息中 包括集群名称, 用户名 , 命名空间
1 2 3 4 5 kubectl config set-context kubernetes \ --cluster=kubernetes \ --user=devuser \ --namespace=test-ns \ --kubeconfig=devuser.kubeconfig
查看配置文件中增加了 对应的context信息
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 [root@master 3a]# kubectl config set-context kubernetes \ > --cluster=kubernetes \ > --user=devuser \ > --namespace=test-ns \ > --kubeconfig=devuser.kubeconfig Context "kubernetes" created. [root@master 3a]# cat devuser.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQmJjYlRONDRGMGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1Ea3hNRFV5TWpkYUZ3MHpOREV4TURjeE1EVTNNamRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNhMUE2aklIK2o1cE5jbFFDck9BbGJwbmpxV3ZSUWxUQlBocmE1VEdQNEdSUkcwYkFmOXRKQmV1ZW8KbnhscG9GM2NOcjJaQUJteVlNNTJaQnlHM3dZckNIdzdrcitmNC90aVJPeEZhdC9vd2syK01PYjdDaStIWVJ4RgoyVlRYM0IrcU1uaTU1bzM3RlIwcVVYb2xjYVEyMlBrTml4SGZtdlp4RVVmcHVtTXFtbTFrdVVFdTJxS0tjbFpwCmNSNHRMQmdYR1NqQkNzdFNhTXd0THhhOXNUM0JMcnJCK1JFWWpWWHQ2eWcwQzZvdCtEZVp2dElTZHQyMXVPazkKOVNJMXJHa0NMd3hrZGI3UnlqV20xS3liaERFdW05aFRUT1pUbEdKY090ZFhYT3pTU3k5TXpEbkVRWEN3MngveApWazUzTk5Cb2lrSUlPZ2h3amlUV2RocVZvWURQQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUbXJGdWIyS3dKTnIyRXBBOVhQajQ2bWhnejZUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUdQVzRlMUVFUAoxY2N6YlA2OWFwL1VrMGxLZmx0dXA3NEVyRGdhV1J1QXFxb1RBOTdNdnc0Qm53QWVYUkM0M1ZHSG1VWHM1TTdECi9DWUd1SWxXRVlhRGhtMmxzZzVnYVl3dDEyeTJjVDh6YkgrRno4ODBQQlNrbENEUmVuZUlReXNrc3hkdFFhNjUKcG9FVWFZalJOVzFWN1lxcUEweEpKQUtrWXZXVDhZM09ua2QwQkx1WWc2RjcyaXlMQ2xERmFINWFmSnZnOXh3cApBTDVYdHJuOERUajhwU0NFc2tFalh6UE14YnhISkpheTVwbUNLZTF5TDJNalRjZDFmcGYzUmN2Vks0RXd6UzlqCkliTXE4bktaV1pTYWlNd3pmdlVLa2RWSjY4SWY0aUJuQlVseEhJLzUzQ29SN2hGTFJYYzVJeEwyWG0vckJNOXAKaTdoL1BTdHBFelp2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.16.200:6443 name: kubernetes contexts: - context: cluster: kubernetes namespace: test-ns user: devuser name: kubernetes current-context: "" kind: Config preferences: {} users: - name: devuser user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURoRENDQW15Z0F3SUJBZ0lVQjI5b0Vya2k2SU5ZaHFpY0FpNDdwYlRIT3pBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU5ERXlNamd4TXpVME1EQmFGdzB5TlRFeQpNamd4TXpVME1EQmFNR0l4Q3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xLYVc1bk1SQXdEZ1lEClZRUUhFd2RDWldsS2FXNW5NUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFUU1BNEcKQTFVRUF4TUhaR1YyZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUw2TQpUcWVibUNjaDM5bXlLbFFNL0drdVQ2ZjRqSk5WUTVNdkNpazI4aTJZaWgxaG5NaHBHeXpxUjFVNEQrRDNpenJLCnBaWXhFbm8wdnRrdzlrZDV3bWtXZ0FXSk5IMzJiZjUxYm5Dd2dqaG0rU0FwWnBNQmNkVU5Wc0hjSGIxdHBWMVgKS3JRUFc0QmovaVZteWttQytIbXVNTjZmaXArRTlkdWNyeFFnck9WdUI1UDBCc21XSE1tNlUxak4wS0YyMTd1RAoxcHZFcmZhZHAwK285czdNV2p4Y0dXdTMrZjMxendwY2d0TDJsMWdBS0VDdHF6b3RYdDB0VzN0ZzFRQ3k5MkpJCk5HampJdGRLdDhzRU1sR0pyd2Z4WExEbndTa3E2VDlKc3JZVkZLMFRpWXVVdUZFdXBaa2c1RmRidXFwdi9vTVAKRGhhVjAwZ1Brb3NFajRkYzlNVUNBd0VBQWFOL01IMHdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRVwpNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjBHQTFVZERnUVdCQlI5ClM5NjF0cWxoQ2EvWTVhZHpnK3FmcEhiRlV6QWZCZ05WSFNNRUdEQVdnQlRtckZ1YjJLd0pOcjJFcEE5WFBqNDYKbWhnejZUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHb01RdURjT0szMGVRWmdPTStCSWMreEp6Tm1BSUVuVwpsVi9MSXBUcW5oWW1RcTFVS3pQTUJ0RGtoeklUaE83ODJvK2grT0Y3c3NHTEtRa0tBNUJuSCt2NUdOVmhwWTl0CnFmS0M3eGFtcXg3dHNHQ1ZCSDU2cjE3eHVHbmJSWDZnS1Rab2RKenJBTWZxbEkxRTNha05OMTVDdnN2R1pZWVgKVXFvaVkzb1c1TFMwNHdueWNzbUhDalRleW13UnhhVm1UazVoWGtxMVZsUEgyNVg5a3dmV2tuNEZtNjhBYXV3aApZRzZXa3NweFNrU2RBcGRyM2N0ektSSDFkQlZxb1lLSFN4amViOHV0MFBhaFh1bEZzNEpqeHh3QVk2MjQrdlhrCmM3UVcxY3YxeGZGVTZzaXU5SDdCOFRUM3NaeXNKbkorRERZN3JoRDJONks4RWhTYkY3TTR6Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdm94T3A1dVlKeUhmMmJJcVZBejhhUzVQcC9pTWsxVkRreThLS1RieUxaaUtIV0djCnlHa2JMT3BIVlRnUDRQZUxPc3FsbGpFU2VqUysyVEQyUjNuQ2FSYUFCWWswZmZadC9uVnVjTENDT0diNUlDbG0Ka3dGeDFRMVd3ZHdkdlcybFhWY3F0QTliZ0dQK0pXYktTWUw0ZWE0dzNwK0tuNFQxMjV5dkZDQ3M1VzRIay9RRwp5WlljeWJwVFdNM1FvWGJYdTRQV204U3Q5cDJuVDZqMnpzeGFQRndaYTdmNS9mWFBDbHlDMHZhWFdBQW9RSzJyCk9pMWUzUzFiZTJEVkFMTDNZa2cwYU9NaTEwcTN5d1F5VVltdkIvRmNzT2ZCS1NycFAwbXl0aFVVclJPSmk1UzQKVVM2bG1TRGtWMXU2cW0vK2d3OE9GcFhUU0ErU2l3U1BoMXoweFFJREFRQUJBb0lCQVFDMmMySmMybjgxK1JsKwpPVHFPZ0dDdjFjZ3Y3YTJzNVZkdTl2dWp1eGpvejhadm02ZWp2Z0JuWVd3c0RTSW5KdUFKeTBBQ0w3cWhpUiswCmwwMDU0enhqbzBleUJVNWR6amhFRGUxUnViRDJrS0s2U09vT21MT0diTjlGZ0o1NVl5T1QzSUxuSmsxWEFtZTMKS0ZWSlRqN2RSQTFISFR4K3diRW9OejdzNXR5bVVLMTA4MzVTRFdzT2RiNm03YXY2SjQ4MmVBb0ZjRWFuM3VrYwo1SEZPaHNZMFRVSmQySVF1c0NNQ1pZMVB1OXdXVDE5NGxueGJpM0JMNmRFQjNobjlOMGhiK3cyRW8yc3FkeEY2CkNYczVocUtIOVdDdzlRUXc4a3lLOC9EMHhsZ0orNitwV0hSSVZpZktxeGdhMktST2hxUHlUN0R5ZFduZUJabDEKbEdOMlRqZ0JBb0dCQU9UeTIrU3ZWN0JlVi8wS3NMQ1NlQ1FkMGdwMDljOUhCUVM1QmR1M3dma3FnUStvZHo3dApoU1JTdmNrRDZjcG1xS0ZvdUtrMUI2eW4yL080aWpvWU5YdjV1d21UK0phVTEreVgxNUk1RkpSSk9CSEhsTkhPCk9ESXZheTRHSnRuSFdZWkhNT0dZN1QyNkpHanpOdFNBcjFlSmIvNmtudnhhQy81L2lpcHlWYUhWQW9HQkFOVVAKN0hQTkxJenhNSzBoclkvbmZZM0VDNXhoMW9YUS9rTEkrUEpwdFU4MGdGalQrOHRhWUdneHBUQmpUL3U4Q0ZQRQppL2E0T1Z3Mnh5dUlLV05RbUZvS2FnTUVQRFI1NjlOOEszNklkVUJJV2hidHJJaVhiZlhkbTI4OS8rNlhsSEVKCkZ0T1ZXM0wyekk5NGtreStJMlppUFNOSGdzeXEreUNwZ3l0M3g0OHhBb0dCQUlkTnk0eUQzNVBZdmJGS3Z3OHIKRUp0dmtERWozQjFxZ0ZuQkt1Z2wyaG54OTZJVVVweTY3R09DRHEwY2hlOWE2aSt4M3VnSThnY2trTVdoZXZkSQpWVnQyUkFZdUQ4eVdIR0d6ZnUvb2tmUHNyWms4VlFRRkZvcjZJU0pxK2t6Y0ZsbFgrMWhuODFUMmpBd0dLSkkvCmx1QnAxZWtzeXRTaU50SnA5M0tNYlhVZEFvR0FWRDVrbHZFa1VXSTRoZXhRRFJ0UitKRHdxbGZCRTg0c0Nzb2UKTFBPQkhoMDdObVF6SmhmSkVNbTRjQ2FFaEp1M2l5K215OW5SekZWW
WNTejRlRzF3b0FHSUkwTTBidWRhU0pmTApOcy9MMUt3Ryt4UGs2V2src0QxOGJRTE54RkFwQUh6QWlzNStoemx3YnJZVTJzVS9pQWNGOTRJYUJNVUNZTXJGCnM1VTcwYkVDZ1lCSXBLODJvMGtrbUg4RGhOMDRnU3BPR0p4R0ZKQWtxbWFtZDRWKzNkU3hLb09TUXJDa1huWDEKcXNDYnZtdXlLNm8zb1FJNno0QnA0RG9hbDJNRWtKajRnb3RYY21GTlZBMTBQOEhCUnpjdnNPSWk2cUF3WlNXawp3MmR3TkRsQklFaDlQakFhRTBLQWNQK01LbVozWUNKbTArZ0ZCQUZ4eCtJbVNwR0QzQlNRa0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= [root@master 3a]#
Create a RoleBinding scoped to the test-ns namespace. The admin ClusterRole is the cluster administrator role and can perform any operation in the cluster; by handing it down through a namespaced RoleBinding, we obtain a binding that allows every operation, but only inside test-ns.
1 kubectl create rolebinding devuser-admin-binding --clusterrole=admin --user=devuser --namespace=test-ns
这里用到知识点: clusterrole 可以被 rolebinding 进行绑定权限,并指定命名空间,将其权限限制在某个命名空间内进行”传递”
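One quick way to confirm what the binding grants is kubectl's built-in impersonation, run with cluster-admin credentials (which are allowed to impersonate other users):

```bash
# expected to print "yes": devuser may manage resources inside test-ns
kubectl auth can-i create deployments --as=devuser -n test-ns
# expected to print "no": the RoleBinding does not reach the default namespace
kubectl auth can-i list pods --as=devuser -n default
```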
Create the devuser Linux account and copy the devuser.kubeconfig generated above into that user's .kube directory as config.
This does not have to be the master node; any host with kubectl installed will do.
1 2 3 4 5 6 useradd devuser mkdir -p /home/devuser/.kube cp devuser.kubeconfig /home/devuser/.kube/config chown devuser:devuser -R /home/devuser/.kube su - devuser
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 [root@master 3a]# useradd devuser [root@master 3a]# mkdir -p /home/devuser/.kube [root@master 3a]# cp devuser.kubeconfig /home/devuser/.kube/config [root@master 3a]# chown devuser:devuser -R /home/devuser/.kube [root@master 3a]# su - devuser [devuser@master ~]$ ls -al 总用量 16 drwx------ 3 devuser devuser 96 12月 28 22:35 . drwxr-xr-x. 4 root root 36 12月 28 22:02 .. -rw------- 1 devuser devuser 175 12月 28 22:37 .bash_history -rw-r--r-- 1 devuser devuser 18 4月 1 2020 .bash_logout -rw-r--r-- 1 devuser devuser 193 4月 1 2020 .bash_profile -rw-r--r-- 1 devuser devuser 231 4月 1 2020 .bashrc drwxrwxr-x 2 devuser devuser 20 12月 28 22:39 .kube [devuser@master ~]$ ls -al .kube/* -rw------- 1 devuser devuser 5789 12月 28 22:37 .kube/config [devuser@master ~]$ cat .kube/config apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQmJjYlRONDRGMGd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRFeE1Ea3hNRFV5TWpkYUZ3MHpOREV4TURjeE1EVTNNamRhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNhMUE2aklIK2o1cE5jbFFDck9BbGJwbmpxV3ZSUWxUQlBocmE1VEdQNEdSUkcwYkFmOXRKQmV1ZW8KbnhscG9GM2NOcjJaQUJteVlNNTJaQnlHM3dZckNIdzdrcitmNC90aVJPeEZhdC9vd2syK01PYjdDaStIWVJ4RgoyVlRYM0IrcU1uaTU1bzM3RlIwcVVYb2xjYVEyMlBrTml4SGZtdlp4RVVmcHVtTXFtbTFrdVVFdTJxS0tjbFpwCmNSNHRMQmdYR1NqQkNzdFNhTXd0THhhOXNUM0JMcnJCK1JFWWpWWHQ2eWcwQzZvdCtEZVp2dElTZHQyMXVPazkKOVNJMXJHa0NMd3hrZGI3UnlqV20xS3liaERFdW05aFRUT1pUbEdKY090ZFhYT3pTU3k5TXpEbkVRWEN3MngveApWazUzTk5Cb2lrSUlPZ2h3amlUV2RocVZvWURQQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUbXJGdWIyS3dKTnIyRXBBOVhQajQ2bWhnejZUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQUdQVzRlMUVFUAoxY2N6YlA2OWFwL1VrMGxLZmx0dXA3NEVyRGdhV1J1QXFxb1RBOTdNdnc0Qm53QWVYUkM0M1ZHSG1VWHM1TTdECi9DWUd1SWxXRVlhRGhtMmxzZzVnYVl3dDEyeTJjVDh6YkgrRno4ODBQQlNrbENEUmVuZUlReXNrc3hkdFFhNjUKcG9FVWFZalJOVzFWN1lxcUEweEpKQUtrWXZXVDhZM09ua2QwQkx1WWc2RjcyaXlMQ2xERmFINWFmSnZnOXh3cApBTDVYdHJuOERUajhwU0NFc2tFalh6UE14YnhISkpheTVwbUNLZTF5TDJNalRjZDFmcGYzUmN2Vks0RXd6UzlqCkliTXE4bktaV1pTYWlNd3pmdlVLa2RWSjY4SWY0aUJuQlVseEhJLzUzQ29SN2hGTFJYYzVJeEwyWG0vckJNOXAKaTdoL1BTdHBFelp2Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.16.200:6443 name: kubernetes contexts: - context: cluster: kubernetes namespace: test-ns user: devuser name: kubernetes current-context: "" kind: Config preferences: {} users: - name: devuser user: client-certificate-data: 
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURoRENDQW15Z0F3SUJBZ0lVQjI5b0Vya2k2SU5ZaHFpY0FpNDdwYlRIT3pBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZURVRNQkVHQTFVRUF4TUthM1ZpWlhKdVpYUmxjekFlRncweU5ERXlNamd4TXpVME1EQmFGdzB5TlRFeQpNamd4TXpVME1EQmFNR0l4Q3pBSkJnTlZCQVlUQWtOT01SQXdEZ1lEVlFRSUV3ZENaV2xLYVc1bk1SQXdEZ1lEClZRUUhFd2RDWldsS2FXNW5NUXd3Q2dZRFZRUUtFd05yT0hNeER6QU5CZ05WQkFzVEJsTjVjM1JsYlRFUU1BNEcKQTFVRUF4TUhaR1YyZFhObGNqQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUw2TQpUcWVibUNjaDM5bXlLbFFNL0drdVQ2ZjRqSk5WUTVNdkNpazI4aTJZaWgxaG5NaHBHeXpxUjFVNEQrRDNpenJLCnBaWXhFbm8wdnRrdzlrZDV3bWtXZ0FXSk5IMzJiZjUxYm5Dd2dqaG0rU0FwWnBNQmNkVU5Wc0hjSGIxdHBWMVgKS3JRUFc0QmovaVZteWttQytIbXVNTjZmaXArRTlkdWNyeFFnck9WdUI1UDBCc21XSE1tNlUxak4wS0YyMTd1RAoxcHZFcmZhZHAwK285czdNV2p4Y0dXdTMrZjMxendwY2d0TDJsMWdBS0VDdHF6b3RYdDB0VzN0ZzFRQ3k5MkpJCk5HampJdGRLdDhzRU1sR0pyd2Z4WExEbndTa3E2VDlKc3JZVkZLMFRpWXVVdUZFdXBaa2c1RmRidXFwdi9vTVAKRGhhVjAwZ1Brb3NFajRkYzlNVUNBd0VBQWFOL01IMHdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CMEdBMVVkSlFRVwpNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjBHQTFVZERnUVdCQlI5ClM5NjF0cWxoQ2EvWTVhZHpnK3FmcEhiRlV6QWZCZ05WSFNNRUdEQVdnQlRtckZ1YjJLd0pOcjJFcEE5WFBqNDYKbWhnejZUQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFHb01RdURjT0szMGVRWmdPTStCSWMreEp6Tm1BSUVuVwpsVi9MSXBUcW5oWW1RcTFVS3pQTUJ0RGtoeklUaE83ODJvK2grT0Y3c3NHTEtRa0tBNUJuSCt2NUdOVmhwWTl0CnFmS0M3eGFtcXg3dHNHQ1ZCSDU2cjE3eHVHbmJSWDZnS1Rab2RKenJBTWZxbEkxRTNha05OMTVDdnN2R1pZWVgKVXFvaVkzb1c1TFMwNHdueWNzbUhDalRleW13UnhhVm1UazVoWGtxMVZsUEgyNVg5a3dmV2tuNEZtNjhBYXV3aApZRzZXa3NweFNrU2RBcGRyM2N0ektSSDFkQlZxb1lLSFN4amViOHV0MFBhaFh1bEZzNEpqeHh3QVk2MjQrdlhrCmM3UVcxY3YxeGZGVTZzaXU5SDdCOFRUM3NaeXNKbkorRERZN3JoRDJONks4RWhTYkY3TTR6Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBdm94T3A1dVlKeUhmMmJJcVZBejhhUzVQcC9pTWsxVkRreThLS1RieUxaaUtIV0djCnlHa2JMT3BIVlRnUDRQZUxPc3FsbGpFU2VqUysyVEQyUjNuQ2FSYUFCWWswZmZadC9uVnVjTENDT0diNUlDbG0Ka3dGeDFRMVd3ZHdkdlcybFhWY3F0QTliZ0dQK0pXYktTWUw0ZWE0dzNwK0tuNFQxMjV5dkZDQ3M1VzRIay9RRwp5WlljeWJwVFdNM1FvWGJYdTRQV204U3Q5cDJuVDZqMnpzeGFQRndaYTdmNS9mWFBDbHlDMHZhWFdBQW9RSzJyCk9pMWUzUzFiZTJEVkFMTDNZa2cwYU9NaTEwcTN5d1F5VVltdkIvRmNzT2ZCS1NycFAwbXl0aFVVclJPSmk1UzQKVVM2bG1TRGtWMXU2cW0vK2d3OE9GcFhUU0ErU2l3U1BoMXoweFFJREFRQUJBb0lCQVFDMmMySmMybjgxK1JsKwpPVHFPZ0dDdjFjZ3Y3YTJzNVZkdTl2dWp1eGpvejhadm02ZWp2Z0JuWVd3c0RTSW5KdUFKeTBBQ0w3cWhpUiswCmwwMDU0enhqbzBleUJVNWR6amhFRGUxUnViRDJrS0s2U09vT21MT0diTjlGZ0o1NVl5T1QzSUxuSmsxWEFtZTMKS0ZWSlRqN2RSQTFISFR4K3diRW9OejdzNXR5bVVLMTA4MzVTRFdzT2RiNm03YXY2SjQ4MmVBb0ZjRWFuM3VrYwo1SEZPaHNZMFRVSmQySVF1c0NNQ1pZMVB1OXdXVDE5NGxueGJpM0JMNmRFQjNobjlOMGhiK3cyRW8yc3FkeEY2CkNYczVocUtIOVdDdzlRUXc4a3lLOC9EMHhsZ0orNitwV0hSSVZpZktxeGdhMktST2hxUHlUN0R5ZFduZUJabDEKbEdOMlRqZ0JBb0dCQU9UeTIrU3ZWN0JlVi8wS3NMQ1NlQ1FkMGdwMDljOUhCUVM1QmR1M3dma3FnUStvZHo3dApoU1JTdmNrRDZjcG1xS0ZvdUtrMUI2eW4yL080aWpvWU5YdjV1d21UK0phVTEreVgxNUk1RkpSSk9CSEhsTkhPCk9ESXZheTRHSnRuSFdZWkhNT0dZN1QyNkpHanpOdFNBcjFlSmIvNmtudnhhQy81L2lpcHlWYUhWQW9HQkFOVVAKN0hQTkxJenhNSzBoclkvbmZZM0VDNXhoMW9YUS9rTEkrUEpwdFU4MGdGalQrOHRhWUdneHBUQmpUL3U4Q0ZQRQppL2E0T1Z3Mnh5dUlLV05RbUZvS2FnTUVQRFI1NjlOOEszNklkVUJJV2hidHJJaVhiZlhkbTI4OS8rNlhsSEVKCkZ0T1ZXM0wyekk5NGtreStJMlppUFNOSGdzeXEreUNwZ3l0M3g0OHhBb0dCQUlkTnk0eUQzNVBZdmJGS3Z3OHIKRUp0dmtERWozQjFxZ0ZuQkt1Z2wyaG54OTZJVVVweTY3R09DRHEwY2hlOWE2aSt4M3VnSThnY2trTVdoZXZkSQpWVnQyUkFZdUQ4eVdIR0d6ZnUvb2tmUHNyWms4VlFRRkZvcjZJU0pxK2t6Y0ZsbFgrMWhuODFUMmpBd0dLSkkvCmx1QnAxZWtzeXRTaU50SnA5M0tNYlhVZEFvR0FWRDVrbHZFa1VXSTRoZXhRRFJ0UitKRHdxbGZCRTg0c0Nzb2UKTFBPQkhoMDdObVF6SmhmSkVNbTRjQ2FFaEp1M2l5K215OW5SekZWW
WNTejRlRzF3b0FHSUkwTTBidWRhU0pmTApOcy9MMUt3Ryt4UGs2V2src0QxOGJRTE54RkFwQUh6QWlzNStoemx3YnJZVTJzVS9pQWNGOTRJYUJNVUNZTXJGCnM1VTcwYkVDZ1lCSXBLODJvMGtrbUg4RGhOMDRnU3BPR0p4R0ZKQWtxbWFtZDRWKzNkU3hLb09TUXJDa1huWDEKcXNDYnZtdXlLNm8zb1FJNno0QnA0RG9hbDJNRWtKajRnb3RYY21GTlZBMTBQOEhCUnpjdnNPSWk2cUF3WlNXawp3MmR3TkRsQklFaDlQakFhRTBLQWNQK01LbVozWUNKbTArZ0ZCQUZ4eCtJbVNwR0QzQlNRa0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= [devuser@master ~]$
As the devuser user, set the default context: switch the current context to the one defined in the kubeconfig file created above.
kubectl config use-context kubernetes --kubeconfig=.kube/config
After switching, devuser can view resources in the test-ns namespace.
[devuser@master ~]$ kubectl config use-context kubernetes --kubeconfig=.kube/config
Switched to context "kubernetes".
[devuser@master ~]$ kubectl get pod
No resources found in test-ns namespace.
[devuser@master ~]$ kubectl get rolebindings
NAME                    ROLE                        AGE
devuser-admin-binding   ClusterRole/admin           15m
read-secrets            ClusterRole/secret-reader   22h
[devuser@master ~]$
Note the message "No resources found in test-ns namespace.": the default namespace is no longer default but test-ns, the namespace specified in the kubeconfig file.
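For reference, the default namespace comes from the namespace field on the context entry inside the kubeconfig. A minimal sketch of setting and inspecting it, assuming the context name kubernetes used above:

kubectl config set-context kubernetes --namespace=test-ns --kubeconfig=.kube/config
kubectl config view --kubeconfig=.kube/config --minify | grep namespace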
Admission Control
An admission controller is a piece of code that intercepts requests to the API server after the request has been authenticated and authorized, but before the object is persisted.
Admission controllers can perform validating and/or mutating operations. Mutating controllers may modify the objects related to the requests they admit; validating controllers may not.
Admission controllers limit requests that create, delete, or modify objects. They can also block custom verbs, such as a request to connect to a Pod through the API server proxy. Admission controllers do not (and cannot) block requests that read (get, watch, or list) objects.
Admission control phases
The admission control process runs in two phases: in the first phase, mutating admission controllers run; in the second phase, validating admission controllers run. Note again that some controllers are both mutating and validating.
If any controller in either phase rejects a request, the whole request is rejected immediately and an error is returned to the end user.
Finally, besides mutating the object under review, admission controllers can have other side effects: they may modify related resources as part of processing the request. Incrementing quota usage is the canonical example of why this is necessary. Any such side effect needs a matching reclamation or reconciliation process, because a given admission controller cannot know for sure whether a request will pass all of the other admission controllers.
How to enable an admission controller
The Kubernetes API server's enable-admission-plugins flag takes a comma-separated list of admission control plugins to invoke before objects are modified in the cluster.
Because kube-apiserver runs as a container, you can inspect the flag with a command like the following:
[root@master 3a]# kubectl exec kube-apiserver-master -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
      --admission-control strings              Admission is divided into two phases. In the first phase, only mutating admission plugins run. In the second phase, only validating admission plugins run. The names in the below list may represent a validating plugin, a mutating plugin, or both. The order of plugins in which they are passed to this flag does not matter. Comma-delimited list of: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. (DEPRECATED: Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.)
      --enable-admission-plugins strings       admission plugins that should be enabled in addition to default enabled ones (NamespaceLifecycle, LimitRanger, ServiceAccount, TaintNodesByCondition, PodSecurity, Priority, DefaultTolerationSeconds, DefaultStorageClass, StorageObjectInUseProtection, PersistentVolumeClaimResize, RuntimeClass, CertificateApproval, CertificateSigning, ClusterTrustBundleAttest, CertificateSubjectRestriction, DefaultIngressClass, MutatingAdmissionWebhook, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook, ResourceQuota). Comma-delimited list of admission plugins: AlwaysAdmit, AlwaysDeny, AlwaysPullImages, CertificateApproval, CertificateSigning, CertificateSubjectRestriction, ClusterTrustBundleAttest, DefaultIngressClass, DefaultStorageClass, DefaultTolerationSeconds, DenyServiceExternalIPs, EventRateLimit, ExtendedResourceToleration, ImagePolicyWebhook, LimitPodHardAntiAffinityTopology, LimitRanger, MutatingAdmissionWebhook, NamespaceAutoProvision, NamespaceExists, NamespaceLifecycle, NodeRestriction, OwnerReferencesPermissionEnforcement, PersistentVolumeClaimResize, PersistentVolumeLabel, PodNodeSelector, PodSecurity, PodTolerationRestriction, Priority, ResourceQuota, RuntimeClass, SecurityContextDeny, ServiceAccount, StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook. The order of plugins in this flag does not matter.
[root@master 3a]#
To check which plugins are currently enabled:
ps -ef | grep kube-apiserver | grep enable-admission-plugins
The plugins enabled by default in v1.28 are listed below; an example of enabling an additional plugin follows the list.
CertificateApproval, CertificateSigning, CertificateSubjectRestriction, DefaultIngressClass,
DefaultStorageClass, DefaultTolerationSeconds, LimitRanger, MutatingAdmissionWebhook, NamespaceLifecycle,
PersistentVolumeClaimResize, PodSecurity, Priority, ResourceQuota, RuntimeClass, ServiceAccount,
StorageObjectInUseProtection, TaintNodesByCondition, ValidatingAdmissionPolicy, ValidatingAdmissionWebhook
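To enable a plugin that is not in this default set, append it to kube-apiserver's --enable-admission-plugins flag. A minimal sketch for a kubeadm-style cluster; the manifest path and the NamespaceAutoProvision plugin are illustrative assumptions, not something configured elsewhere in this document:

# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, assumed kubeadm static pod layout)
spec:
  containers:
  - command:
    - kube-apiserver
    # keep whatever plugins were already listed here and append the new one
    - --enable-admission-plugins=NodeRestriction,NamespaceAutoProvision

The kubelet watches the static pod manifest directory, so saving the file restarts kube-apiserver with the new flag; --disable-admission-plugins works the same way for turning a default plugin off.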
A brief description of a few of these plugins (a ResourceQuota sketch follows the list):
NamespaceLifecycle: prevents objects from being created in namespaces that do not exist, prevents deletion of the system-reserved namespaces, and deletes all objects in a namespace when that namespace is deleted.
LimitRanger: ensures that requests do not exceed the LimitRange constraints defined in the resource's namespace.
ServiceAccount: implements the automation around ServiceAccounts.
ResourceQuota: ensures that requests do not exceed the namespace's ResourceQuota limits.
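As a concrete illustration of the last plugin, a minimal ResourceQuota sketch; the test-ns namespace is reused from the RBAC example above and the numbers are arbitrary:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: test-ns
spec:
  hard:
    pods: "10"              # at most 10 Pods in the namespace
    requests.cpu: "2"       # total CPU requests
    requests.memory: 4Gi    # total memory requests
    limits.cpu: "4"         # total CPU limits
    limits.memory: 8Gi      # total memory limits

With this object in place, the ResourceQuota admission controller rejects any request that would push the namespace's totals past these limits.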
The Kubernetes version covered here is v1.28.2. Admission control differs between versions, so consult the official documentation for the exact behaviour of your version.
Official reference documentation
If you do not fully understand the admission control mechanism, it is recommended to stay with the rules that are enabled by default.
Although Kubernetes supports many security mechanisms, the combination most commonly used in enterprises today is mutual TLS authentication + RBAC + the default admission control plugins (with extra customization added on top); it is mature and stable.
Helm
What is Helm
Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, and other objects one by one, which is tedious. As more projects adopt microservices, deploying and managing complex applications in containers becomes increasingly involved. Helm packages these resources together and supports versioned, controlled releases, which greatly simplifies deploying and managing Kubernetes applications.
In essence, Helm makes Kubernetes application management (Deployments, Services, and so on) configurable and dynamically generated: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then applies them to the cluster.
Helm is the officially provided package manager, comparable to YUM; it encapsulates the deployment workflow. Helm has two important concepts: chart and release.
A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is a self-contained logical unit of application deployment; think of it as a software package in apt or yum.
A release is a running instance of a chart, representing a running application. Installing a chart into a Kubernetes cluster creates a release. The same chart can be installed into the same cluster many times, and each installation is a separate release.
Helm 2 consists of two components, the Helm client and the Tiller server, as shown in the figure below.
The Helm client creates and manages charts and releases and talks to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API server.
Helm 3 has removed Tiller. The Helm version must be compatible with the Kubernetes version; we use Helm 3 here.
Deploying Helm
Helm Huawei mirror
Download from the internal network:
wget http://192.168.16.110:9080/other/helm-v3.13.3-linux-amd64.tar.gz
[root@master pkg]# wget http://192.168.16.110:9080/other/helm-v3.13.3-linux-amd64.tar.gz
--2024-12-29 00:52:53-- http://192.168.16.110:9080/other/helm-v3.13.3-linux-amd64.tar.gz
正在连接 192.168.16.110:9080... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:16188560 (15M) [application/octet-stream]
正在保存至: “helm-v3.13.3-linux-amd64.tar.gz”

100%[==================================================================================>] 16,188,560 34.7MB/s 用时 0.4s

2024-12-29 00:52:53 (34.7 MB/s) - 已保存 “helm-v3.13.3-linux-amd64.tar.gz” [16188560/16188560])

[root@master pkg]#
Extract and install:
tar -xvzf helm-v3.13.3-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
chmod +x /usr/local/bin/helm
[root@master pkg]# tar -xvzf helm-v3.13.3-linux-amd64.tar.gz
linux-amd64/
linux-amd64/LICENSE
linux-amd64/README.md
linux-amd64/helm
[root@master pkg]# cp linux-amd64/helm /usr/local/bin/
[root@master pkg]# chmod +x /usr/local/bin/helm
[root@master pkg]#
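It is worth confirming the client works before continuing; the exact build information will depend on the package you downloaded:

helm version
helm env    # shows the repository cache and configuration paths Helm will use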
Helm custom templates
Create a chart template:
helm create mychart
# delete the generated template files; we will write our own below
rm -rf mychart/templates/*
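For orientation, helm create scaffolds a layout roughly like the following before we empty templates/ (the exact files vary slightly by Helm version):

mychart/
├── Chart.yaml      # chart metadata: name, version, appVersion, ...
├── values.yaml     # default configuration values
├── charts/         # dependent subcharts
└── templates/      # manifest templates rendered against the values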
Edit the contents of mychart/Chart.yaml:
vi mychart/Chart.yaml
Change the following fields:
name: hello-world
version: 1.0.0
Save and exit.
Create the template files used to generate the Kubernetes resource manifests:
cat <<'EOF' > mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: 192.168.16.110:20080/stady/myapp:v1
        ports:
        - containerPort: 80
          protocol: TCP
EOF

cat <<'EOF' > mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
EOF
Install with the chart:
helm install hello-world ./mychart
You can see that the corresponding Pod has been created:
[root@master helm]# helm install hello-world ./mychart
NAME: hello-world
LAST DEPLOYED: Sun Dec 29 17:42:01 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@master helm]#
[root@master helm]# helm list
NAME          NAMESPACE   REVISION   UPDATED                                   STATUS     CHART               APP VERSION
hello-world   default     1          2024-12-29 17:42:01.44232187 +0800 CST   deployed   hello-world-1.0.0   1.16.0
[root@master helm]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
chart-hello-world-7645c4c97-d949z   1/1     Running   0          69s
[root@master helm]# kubectl get svc
NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-world   NodePort   10.106.220.11   <none>        80:31148/TCP   85s
[root@master helm]#
Upload the chart package to the internal chart repository:
[root@master helm]# helm repo add myrepo http://192.168.16.110:38080
"myrepo" has been added to your repositories
[root@master helm]# helm repo list
NAME     URL
myrepo   http://192.168.16.110:38080
[root@master helm]# helm package mychart
Successfully packaged chart and saved it to: /data/app/k8s/helm/hello-world-1.0.0.tgz
[root@master helm]# curl -F "chart=@./hello-world-1.0.0.tgz" http://192.168.16.110:38080/api/charts
{"saved":true}[root@master helm]#
[root@master helm]# helm repo update myrepo
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myrepo" chart repository
Update Complete. ⎈Happy Helming!⎈
[root@master helm]#
[root@master helm]# helm search repo myrepo
NAME                 CHART VERSION   APP VERSION   DESCRIPTION
myrepo/hello-world   1.0.0           1.16.0        A Helm chart for Kubernetes
[root@master helm]#
Accessing variables in template files through the .Values object
First uninstall the chart we just installed:
[root@master helm]# helm uninstall hello-world
release "hello-world" uninstalled
[root@master helm]#
Modify the files in the chart:
cat <<'EOF' > mychart/values.yaml
image:
  repository: 192.168.16.110:20080/stady/myapp
  tag: 'v1'
EOF
cat <<'EOF' > mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        ports:
        - containerPort: 80
          protocol: TCP
EOF
Testing
Use the --dry-run and --debug options to print the rendered manifests without actually deploying:
helm install hello-world ./mychart --debug --dry-run
[root@master helm]# helm install hello-world ./mychart --debug --dry-run
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /data/app/k8s/helm/mychart

NAME: hello-world
LAST DEPLOYED: Sun Dec 29 18:02:05 2024
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
{}

COMPUTED VALUES:
image:
  repository: 192.168.16.110:20080/stady/myapp
  tag: v1

HOOKS:
MANIFEST:
---
# Source: hello-world/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
---
# Source: hello-world/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: 192.168.16.110:20080/stady/myapp:v1
        ports:
        - containerPort: 80
          protocol: TCP

[root@master helm]#
Helm resolves the variables first and fills them into the resource manifests as they are rendered.
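Relatedly, if you only want to render the templates locally, helm template prints the manifests without creating a release at all; this is shown as a convenience, the steps above do not depend on it:

helm template hello-world ./mychart
helm template hello-world ./mychart --set image.tag='v2'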
Reading template variables from the command line
Parameters given on the command line fill the variables. Although values.yaml already defines them, command-line values take precedence over the configuration file.
helm install hello-world ./mychart --set image.tag='v2' --debug --dry-run
[root@master helm]# helm install hello-world ./mychart --set image.tag='v2' --debug --dry-run
install.go:214: [debug] Original chart version: ""
install.go:231: [debug] CHART PATH: /data/app/k8s/helm/mychart

NAME: hello-world
LAST DEPLOYED: Sun Dec 29 18:05:28 2024
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
image:
  tag: v2

COMPUTED VALUES:
image:
  repository: 192.168.16.110:20080/stady/myapp
  tag: v2

HOOKS:
MANIFEST:
---
# Source: hello-world/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-world
---
# Source: hello-world/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chart-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: 192.168.16.110:20080/stady/myapp:v2
        ports:
        - containerPort: 80
          protocol: TCP

[root@master helm]#
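Values can also be supplied from a file with -f/--values rather than --set. A minimal sketch, where prod-values.yaml is a hypothetical file name:

cat <<'EOF' > prod-values.yaml
image:
  tag: 'v2'
EOF
helm install hello-world ./mychart -f prod-values.yaml --debug --dry-run

When several sources are combined, --set overrides -f, which in turn overrides the chart's own values.yaml.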
Common Helm commands
helm search: search for Helm charts. Options: --version (search for a specific version), --keyword (search by keyword), --limit (limit the number of results), --repo (search a specific Helm repository).
helm pull: pull a chart package from a repository. Options: --repo (the repository the chart lives in), --version (the chart version), --untar (untar the chart into the current directory after pulling), --untardir (the directory to untar into).
helm install: install a new Helm release. Options: --namespace (the namespace to install into), --values (a custom values file), --set (override values from the values files), --version (the chart version), --wait (wait for all Pods to be ready after installing), --timeout (the waiting timeout).
helm list: list installed Helm releases. Options: --all-namespaces (list releases in all namespaces), --namespace (a specific namespace), --filter (filter releases by pattern).
helm upgrade: upgrade an existing Helm release. Options: --namespace (the namespace of the release), --values (a custom values file), --set (override values from the values files), --install (install if the release does not exist yet), --wait (wait for all Pods to be ready after upgrading), --timeout (the waiting timeout).
helm rollback: roll back to a previous release revision. Options: --wait (wait for all Pods to be ready after rolling back), --timeout (the waiting timeout).
helm uninstall: uninstall a Helm release. Options: --namespace (the namespace of the release).
helm get: get information about a release. Usage: helm get all [RELEASE_NAME] (all information about a release), helm get values [RELEASE_NAME] (the values of a release); --all-namespaces (release information across all namespaces).
helm repo: manage Helm repositories. Subcommands: helm repo add (add a repository), helm repo list (list the repositories that have been added), helm repo remove (remove a repository), helm repo update (refresh the repository cache).
Helm 3 built-in objects
Release object
Values object
Chart object
Capabilities object
Template object
Release object
The built-in Release object in Helm 3 describes the release itself. It has the following attributes:
Release.Name: the name of the release
Release.Namespace: the namespace the release is installed into (unless the manifest overrides it)
Release.IsUpgrade: set to true if the current operation is an upgrade or rollback
Release.IsInstall: set to true if the current operation is an install
Release.Revision: the revision number of this release; it is 1 on install and increments with every upgrade or rollback
Release.Service: the service rendering the current template, which in Helm is always Helm
Values object
In Helm 3 the Values object exposes the contents of values.yaml and lets templates access any value defined there. Its contents can come from several sources:
the chart's values.yaml file: the default configuration file with the chart's default values
the parent chart's values.yaml file: if the current chart is a subchart, it also inherits values from the parent chart's values.yaml
values files passed with -f or --values: extra values files given to helm install or helm upgrade to override the defaults
values passed with --set: key=value pairs given directly to helm install or helm upgrade to override the defaults
The Values object is used as follows:
Simple key-value pairs are accessed directly as .Values.key. Nested values are accessed with dots, for example .Values.info.name2 for the name2 field under the info object (see the sketch below). The Values object lets users flexibly customize and override a chart's configuration for different deployment needs.
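A minimal sketch of both access forms; the key and info.name2 values are illustrative and assume the values.yaml shown in the first comment, not the mychart example built above:

# values.yaml (illustrative)
key: very
info:
  name2: charm

# templates/values-demo.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: values-demo
data:
  simple: {{ .Values.key | quote }}          # simple key
  nested: {{ .Values.info.name2 | quote }}   # nested key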
Chart object
It contains information about the chart currently being rendered and provides several useful fields for referencing chart attributes in templates. Commonly used attributes:
Chart.Name: the name of the chart.
Chart.Home: the chart's home page link.
Chart.Version: the chart version.
Chart.AppVersion: the application version, which usually describes the version of the application rather than of the chart itself.
Chart.Description: the chart description.
Chart.Keywords: the list of chart keywords.
Chart.Sources: the list of links to the chart's source code.
Chart.Icon: the chart icon link.
These attributes can be referenced in Helm template files to inject chart information into the Kubernetes manifests dynamically. For example, you could expose Chart.Name as an environment variable in a Pod definition, or put Chart.Version into a Service annotation.
Capabilities object
It provides information about the features the Kubernetes cluster supports. Commonly used attributes (a usage sketch follows the list):
Capabilities.APIVersions: the set of API versions supported by the Kubernetes cluster.
Capabilities.APIVersions.Has $version: checks whether the given version or resource is available in the cluster.
Capabilities.KubeVersion: exposes the Kubernetes version; Major, Minor, GitVersion, GitCommit, GitTreeState, BuildDate, GoVersion, Compiler and Platform are available.
Capabilities.KubeVersion.Version: the Kubernetes version string.
Capabilities.KubeVersion.Major: the Kubernetes major version.
Capabilities.KubeVersion.Minor: the Kubernetes minor version.
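A minimal template sketch (illustrative, not part of the mychart example above) that exposes a couple of these fields:

# templates/capabilities-demo.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: capabilities-demo
data:
  kubeVersion: {{ .Capabilities.KubeVersion.Version | quote }}
  hasNetworkingV1: {{ .Capabilities.APIVersions.Has "networking.k8s.io/v1" | quote }}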
Template object
It provides information about the template currently being executed (a sketch follows the list):
Template.Name: the namespaced file path of the current template, for example mychart/templates/mytemplate.yaml.
Template.BasePath: the base path of the chart's template directory, typically mychart/templates, where mychart is the chart name.
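A minimal sketch (illustrative) that records which template rendered an object:

# templates/template-demo.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: template-demo
  annotations:
    rendered-from: {{ .Template.Name | quote }}
    templates-dir: {{ .Template.BasePath | quote }}
data:
  note: "rendered by Helm"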
Demo example (ConfigMap data values must be strings, so the non-string fields are quoted):
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
data:
  value1: "{{ .Release.IsUpgrade }}"
  value2: "{{ .Release.IsInstall }}"
  value3: "{{ .Release.Revision }}"
  value4: "{{ .Release.Service }}"
Helm install / upgrade / rollback / uninstall
Install / release
Ways to obtain a chart package:
# create a chart
helm create <chart-name>

# pull a chart from a repository
helm pull myrepo/mychart --version 0.1.0

# download over the network
wget http://xxx/xxx.tgz
Ways to deploy a chart package:
# install a release from the official chart repository added locally
helm install db stable/mysql

# install a release from a community chart repository added locally
helm install my-tomcat test-repo/tomcat

# install from a chart archive pulled from a repository
helm install db mysql-1.69.tgz

# install from the directory of an unpacked chart archive
helm install db mysql

# install directly from a chart archive URL; db is the release name
helm install db http://url.../mysql-1.69.tgz
Upgrade
# pass parameters on the command line
helm upgrade <release-name> <chart-name> --set imageTag=v2
# pass parameters via a values file
helm upgrade <release-name> <chart-name> -f /../ychart/values.yaml
Rollback
helm rollback <release-name>
helm rollback <release-name> <revision>
Get the history of a release instance