[root@master pod]# kubectl explain pod
KIND:       Pod
VERSION:    v1

DESCRIPTION:
    Pod is a collection of containers that can run on a host. This resource is
    created by clients and scheduled onto hosts.

FIELDS:
  apiVersion    <string>
    APIVersion defines the versioned schema of this representation of an
    object. Servers should convert recognized schemas to the latest internal
    value, and may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  <string>
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata      <ObjectMeta>
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  <PodSpec>
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

  status        <PodStatus>
    Most recently observed status of the pod. This data may not be up to date.
    Populated by the system. Read-only. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
[root@master pod]# kubectl explain pod.spec
KIND:       Pod
VERSION:    v1

FIELD: spec <PodSpec>

DESCRIPTION:
    Specification of the desired behavior of the pod. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

    PodSpec is a description of a pod.

FIELDS:
  activeDeadlineSeconds <integer>
    Optional duration in seconds the pod may be active on the node relative to
    StartTime before the system will actively try to mark it failed and kill
    associated containers. Value must be a positive integer.

  affinity      <Affinity>
    If specified, the pod's scheduling constraints
  ...
  terminationGracePeriodSeconds <integer>
    Optional duration in seconds the pod needs to terminate gracefully. May be
    decreased in delete request. Value must be non-negative integer. The value
    zero indicates stop immediately via the kill signal (no opportunity to
    shut down). If this value is nil, the default grace period will be used
    instead. The grace period is the duration in seconds after the processes
    running in the pod are sent a termination signal and the time when the
    processes are forcibly halted with a kill signal. Set this value longer
    than the expected cleanup time for your process. Defaults to 30 seconds.

  tolerations   <[]Toleration>
    If specified, the pod's tolerations.

  topologySpreadConstraints     <[]TopologySpreadConstraint>
    TopologySpreadConstraints describes how a group of pods ought to spread
    across topology domains. Scheduler will schedule pods in a way which
    abides by the constraints. All topologySpreadConstraints are ANDed.

  volumes       <[]Volume>
    List of volumes that can be mounted by containers belonging to the pod.
    More info: https://kubernetes.io/docs/concepts/storage/volumes
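The my-pod.yaml applied below is not reproduced in these notes. A hypothetical manifest along these lines would match the 2/2 READY output that follows (the second container and its image are assumptions; only the pod name and container count are taken from the output):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: myapp                   # the demo web application
    image: 192.168.16.110:20080/stady/myapp:v1
  - name: sidecar                 # hypothetical second container; explains READY 2/2
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
```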
[root@master pod]# kubectl apply -f my-pod.yaml
pod/pod-demo created
Check that the pod was created:
[root@master pod]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          2m58s   10.244.1.2   node1   <none>           <none>
[root@master pod]#
[root@master pod]# kubectl get pod
NAME        READY   STATUS            RESTARTS   AGE
myapp-pod   0/1     PodInitializing   0          7m32s
pod-demo    2/2     Running           0          54m
[root@master pod]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          7m43s
pod-demo    2/2     Running   0          55m
[root@master pod]#
Readiness probes
readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of every Service that matches the Pod. The readiness state before the initial delay defaults to Failure. If a container does not provide a readiness probe, the default state is Success.
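The manifest for the readiness-httpget-pod shown below is not included in these notes. A sketch along these lines would exercise an HTTP readiness probe (the probed path and timing values are assumptions; the pod name and image registry are taken from the surrounding output):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
spec:
  containers:
  - name: readiness-httpget-container
    image: 192.168.16.110:20080/stady/myapp:v1
    readinessProbe:
      httpGet:
        port: 80
        path: /index.html        # assumed path; the pod is Ready only while this GET succeeds
      initialDelaySeconds: 1     # wait before the first probe
      periodSeconds: 3           # probe every 3 seconds
```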
[root@master pod]# kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
...
readiness-httpget-pod   1/1     Running   0          6m54s   10.244.2.19   node2   <none>           <none>
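The livenessProbe-exec.yml applied next is not reproduced here. A common sketch of an exec liveness probe (the file name, image, and timings are assumptions) creates a file, deletes it after a delay, and lets the probe fail once the file is gone, which triggers the restart seen in the output:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
spec:
  containers:
  - name: liveness-exec-container
    image: busybox               # assumed image with /bin/sh
    command: ["/bin/sh", "-c", "touch /tmp/live; sleep 60; rm -f /tmp/live; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/live"]   # fails after /tmp/live is removed
      initialDelaySeconds: 1
      periodSeconds: 3
```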
[root@master pod]# kubectl apply -f livenessProbe-exec.yml
pod/liveness-exec-pod created
[root@master pod]# kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
liveness-exec-pod   1/1     Running   0          86s
[root@master pod]# kubectl get pod -w
NAME                READY   STATUS    RESTARTS      AGE
liveness-exec-pod   1/1     Running   1 (14s ago)   114s
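The manifest for the probe-tcp pod below is likewise not shown. A TCP liveness probe pointed at a port where nothing listens (the port number here is an assumption) would produce the repeated restarts and eventual CrashLoopBackOff seen in the watch output:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp
spec:
  containers:
  - name: probe-tcp-container
    image: 192.168.16.110:20080/stady/myapp:v1
    livenessProbe:
      tcpSocket:
        port: 8080               # assumed: the app serves on 80, so this probe always fails
      initialDelaySeconds: 5
      periodSeconds: 3
```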
[root@master pod]# kubectl get pod -w
NAME        READY   STATUS             RESTARTS      AGE
probe-tcp   1/1     Running            2 (4s ago)    29s
probe-tcp   1/1     Running            3 (0s ago)    37s
probe-tcp   0/1     CrashLoopBackOff   3 (0s ago)    49s
probe-tcp   1/1     Running            4 (30s ago)   79s
probe-tcp   1/1     Running            5 (0s ago)    91s
Startup and exit actions
Create startAstopProc.yml, the configuration file for the startup and exit hooks:
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: 192.168.16.110:20080/stady/myapp:v1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
Check the message that was written:
[root@master pod]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
lifecycle-demo   1/1     Running   0          52s
[root@master pod]# kubectl exec lifecycle-demo -- cat /usr/share/message
Hello from the postStart handler
[root@master pod]#
When the container exits it is destroyed, and this file disappears with it, so the preStop message is not demonstrated here.
Resource controllers
Kubernetes has many built-in controllers. These act as state machines that manage the concrete state and behavior of Pods. Controller types:
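The RS.yml applied below is not reproduced in these notes. A ReplicaSet sketch along these lines (the container name and image are assumptions; the name, replica count, and tier=frontend label are taken from the output) matches what follows:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3                    # keep three pods matching the selector
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend           # matches the LABELS column in the output below
    spec:
      containers:
      - name: myapp              # assumed container name
        image: 192.168.16.110:20080/stady/myapp:v1
```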
[root@master conller]# kubectl apply -f RS.yml
replicaset.apps/frontend created
[root@master conller]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-cp6qr   1/1     Running   0          9s
frontend-r5r97   1/1     Running   0          9s
frontend-ztw82   1/1     Running   0          9s
After deleting the pods, listing them again still shows three pods (three new ones are created):
[root@master conller]# kubectl delete pod --all
pod "frontend-cp6qr" deleted
pod "frontend-r5r97" deleted
pod "frontend-ztw82" deleted
[root@master conller]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
frontend-hkdxn   1/1     Running   0          5s
frontend-t2p4v   1/1     Running   0          5s
frontend-xpwqs   1/1     Running   0          5s
[root@master conller]#
The --show-labels flag displays label information:
[root@master conller]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE   LABELS
frontend-hkdxn   1/1     Running   0          70s   tier=frontend
frontend-t2p4v   1/1     Running   0          70s   tier=frontend
frontend-xpwqs   1/1     Running   0          70s   tier=frontend
[root@master conller]#
After relabeling one of the pods, the ReplicaSet automatically brings the number of tier=frontend pods back up to three (one new pod is created):
[root@master conller]# kubectl label pod frontend-hkdxn tier=frontend1 --overwrite=True
pod/frontend-hkdxn labeled
[root@master conller]# kubectl get pod --show-labels
NAME             READY   STATUS    RESTARTS   AGE     LABELS
frontend-9kwp8   1/1     Running   0          3s      tier=frontend
frontend-hkdxn   1/1     Running   0          3m33s   tier=frontend1
frontend-t2p4v   1/1     Running   0          3m33s   tier=frontend
frontend-xpwqs   1/1     Running   0          3m33s   tier=frontend
[root@master conller]#
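The my-nginx Deployment used in the rolling-update demo below was created earlier and its manifest is not shown. A sketch consistent with the describe output (name, selector, image, and container port are taken from that output; the initial replica count of 3 is read off the first ScalingReplicaSet event):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 3                    # later scaled up/down; describe shows 5 at the time of the update
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: 192.168.16.110:20080/stady/myapp:v1
        ports:
        - containerPort: 80
```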
[root@master conller]# kubectl set image deployment/my-nginx my-nginx=192.168.16.110:20080/stady/myapp:v2
deployment.apps/my-nginx image updated
[root@master conller]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
my-nginx-5d7f954ff4-92f76   1/1     Running   0          22s   10.244.2.31   node2   <none>           <none>
my-nginx-5d7f954ff4-dxxwb   1/1     Running   0          22s   10.244.1.29   node1   <none>           <none>
my-nginx-5d7f954ff4-fhlxp   1/1     Running   0          24s   10.244.2.30   node2   <none>           <none>
my-nginx-5d7f954ff4-hqnvb   1/1     Running   0          24s   10.244.1.28   node1   <none>           <none>
my-nginx-5d7f954ff4-n44r7   1/1     Running   0          24s   10.244.1.27   node1   <none>           <none>
[root@master conller]# curl 10.244.2.31:80
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@master conller]# kubectl describe deployment my-nginx
Name:                   my-nginx
Namespace:              default
CreationTimestamp:      Sat, 21 Dec 2024 00:25:39 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=my-nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=my-nginx
  Containers:
   my-nginx:
    Image:         192.168.16.110:20080/stady/myapp:v2
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  my-nginx-86bdff5685 (0/0 replicas created)
NewReplicaSet:   my-nginx-5d7f954ff4 (5/5 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  11m                deployment-controller  Scaled up replica set my-nginx-86bdff5685 to 3
  Normal  ScalingReplicaSet  7m15s              deployment-controller  Scaled up replica set my-nginx-86bdff5685 to 10 from 3
  Normal  ScalingReplicaSet  5m37s              deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 5 from 10
  Normal  ScalingReplicaSet  75s                deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 2
  Normal  ScalingReplicaSet  75s                deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 4 from 5
  Normal  ScalingReplicaSet  75s                deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 3 from 2
  Normal  ScalingReplicaSet  73s                deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 3 from 4
  Normal  ScalingReplicaSet  73s                deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 4 from 3
  Normal  ScalingReplicaSet  73s                deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 2 from 3
  Normal  ScalingReplicaSet  72s (x3 over 73s)  deployment-controller  (combined from similar events): Scaled down replica set my-nginx-86bdff5685 to 0 from 1
[root@master conller]#
[root@master conller]# kubectl rollout undo deployment/my-nginx
deployment.apps/my-nginx rolled back
[root@master conller]# kubectl get pod -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
my-nginx-86bdff5685-7czmv   1/1     Running   0          4s    10.244.1.30   node1   <none>           <none>
my-nginx-86bdff5685-fg72r   1/1     Running   0          4s    10.244.2.32   node2   <none>           <none>
my-nginx-86bdff5685-grmbr   1/1     Running   0          3s    10.244.2.33   node2   <none>           <none>
my-nginx-86bdff5685-nfff9   1/1     Running   0          3s    10.244.1.32   node1   <none>           <none>
my-nginx-86bdff5685-smm4w   1/1     Running   0          4s    10.244.1.31   node1   <none>           <none>
[root@master conller]# curl 10.244.1.30
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master conller]# kubectl describe deployment my-nginx
Name:                   my-nginx
Namespace:              default
CreationTimestamp:      Sat, 21 Dec 2024 00:25:39 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=my-nginx
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=my-nginx
  Containers:
   my-nginx:
    Image:         192.168.16.110:20080/stady/myapp:v1
    Port:          80/TCP
    Host Port:     0/TCP
    Environment:   <none>
    Mounts:        <none>
  Volumes:         <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  my-nginx-5d7f954ff4 (0/0 replicas created)
NewReplicaSet:   my-nginx-86bdff5685 (5/5 replicas created)
Events:
  Type    Reason             Age                  From                   Message
  ----    ------             ----                 ----                   -------
  Normal  ScalingReplicaSet  12m                  deployment-controller  Scaled up replica set my-nginx-86bdff5685 to 3
  Normal  ScalingReplicaSet  8m3s                 deployment-controller  Scaled up replica set my-nginx-86bdff5685 to 10 from 3
  Normal  ScalingReplicaSet  6m25s                deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 5 from 10
  Normal  ScalingReplicaSet  2m3s                 deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 2
  Normal  ScalingReplicaSet  2m3s                 deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 4 from 5
  Normal  ScalingReplicaSet  2m3s                 deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 3 from 2
  Normal  ScalingReplicaSet  2m1s                 deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 3 from 4
  Normal  ScalingReplicaSet  2m1s                 deployment-controller  Scaled up replica set my-nginx-5d7f954ff4 to 4 from 3
  Normal  ScalingReplicaSet  2m1s                 deployment-controller  Scaled down replica set my-nginx-86bdff5685 to 2 from 3
  Normal  ScalingReplicaSet  13s (x12 over 2m1s)  deployment-controller  (combined from similar events): Scaled down replica set my-nginx-5d7f954ff4 to 0 from 1
[root@master conller]#
DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As Nodes are added to the cluster, a Pod is added for them; as Nodes are removed from the cluster, those Pods are reclaimed. Deleting a DaemonSet deletes all the Pods it created.
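A minimal DaemonSet sketch that runs one copy of the demo image on every schedulable node (the name and labels here are assumptions; only the image registry is taken from this walkthrough). Note that a DaemonSet has no replicas field; the number of Pods follows the number of matching Nodes:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo           # hypothetical name
spec:
  selector:
    matchLabels:
      app: daemonset-demo
  template:
    metadata:
      labels:
        app: daemonset-demo
    spec:
      containers:
      - name: daemonset-demo
        image: 192.168.16.110:20080/stady/myapp:v1
```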
[root@master conller]# kubectl apply -f cronJob.yml
cronjob.batch/hello created
[root@master conller]# kubectl get cronjob
NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
hello   */1 * * * *   False     0        5s              47s
[root@master conller]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
hello-28914570-6zxb2   0/1     Completed   0          36s
[root@master conller]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
hello-28914570-6zxb2   0/1     Completed   0          70s
hello-28914571-2czh5   0/1     Completed   0          10s
[root@master conller]# kubectl get pod
NAME                   READY   STATUS      RESTARTS   AGE
hello-28914570-6zxb2   0/1     Completed   0          2m2s
hello-28914571-2czh5   0/1     Completed   0          62s
hello-28914572-z8fqc   0/1     Completed   0          2s
[root@master conller]# kubectl logs hello-28914570-6zxb2
Sun Dec 22 13:30:01 UTC 2024
Hello from the Kubernetes cluster
[root@master conller]# kubectl logs hello-28914571-2czh5
Sun Dec 22 13:31:00 UTC 2024
Hello from the Kubernetes cluster
[root@master conller]#
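The cronJob.yml applied above is not reproduced in these notes. A sketch consistent with the output (the schedule and the logged message are taken from the output; the busybox image is an assumption) would be:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"        # every minute, matching the SCHEDULE column above
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox       # assumed; any image with /bin/sh and date works
            command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure
```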
Sometimes load balancing and a separate Service IP are not needed or wanted. In that case, you can create a Headless Service by setting the cluster IP (spec.clusterIP) to "None". Such a Service is not assigned a Cluster IP, kube-proxy does not handle it, and the platform does not load-balance or route traffic for it.
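The myapp-svc-headless.yaml applied below is not reproduced here. A sketch along these lines (the pod selector label is an assumption; the Service name, port, and clusterIP: None are taken from the output) would yield the A records returned by the dig queries that follow:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None                # "None" makes this a Headless Service
  selector:
    app: myapp                   # assumed label on the backing pods
  ports:
  - port: 80
    targetPort: 80
```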
[root@master service]# kubectl apply -f myapp-svc-headless.yaml
service/myapp-headless created
[root@master service]# kubectl get svc
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
...
myapp-headless   ClusterIP   None         <none>        80/TCP    5s
[root@master service]# kubectl get pod -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS       AGE   IP            NODE    NOMINATED NODE   READINESS GATES
...
kube-system   coredns-66f779496c-mgdkr   1/1     Running   10 (23h ago)   43d   10.244.2.41   node2   <none>           <none>
kube-system   coredns-66f779496c-rp7c8   1/1     Running   10 (23h ago)   43d   10.244.2.39   node2   <none>           <none>
...
[root@master service]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.2.41
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.16 <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.2.41
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64086
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.43
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.44
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.42
[root@master service]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.2.41
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.16 <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.2.41
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45162
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 28 IN A 10.244.1.44
myapp-headless.default.svc.cluster.local. 28 IN A 10.244.2.42
myapp-headless.default.svc.cluster.local. 28 IN A 10.244.1.43
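The ExternalName.yaml applied next is not reproduced here. An ExternalName Service maps a cluster-internal DNS name to an external hostname via a CNAME record, with no selector, endpoints, or proxying. A sketch consistent with the output below (the name and externalName are taken from that output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-1
spec:
  type: ExternalName
  externalName: www.baidu.com    # cluster DNS answers with a CNAME to this name
```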
[root@master service]# kubectl apply -f ExternalName.yaml
service/my-service-1 created
[root@master service]# kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes       ClusterIP      10.96.0.1       <none>          443/TCP        43d
my-service-1     ExternalName   <none>          www.baidu.com   <none>         4s
myapp            ClusterIP      10.97.220.149   <none>          80/TCP         33m
myapp-headless   ClusterIP      None            <none>          80/TCP         26m
myapp-nodeport   NodePort       10.99.67.70     <none>          80:31635/TCP   10m
[root@master service]# dig -t A my-service-1.default.svc.cluster.local. @10.244.2.41
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.16 <<>> -t A my-service-1.default.svc.cluster.local. @10.244.2.41
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16965
;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service-1.default.svc.cluster.local. IN A

;; ANSWER SECTION:
my-service-1.default.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.          30      IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       30      IN      A       220.181.38.149
www.a.shifen.com.       30      IN      A       220.181.38.150
[root@master ingress]# kubectl apply -f http://192.168.16.110:9080/k8s/deployment/baremetal-ingress-deployment.yml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[root@master ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                                       READY   STATUS      RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-64szf       0/1     Completed   0          30s   10.244.1.54   node1   <none>           <none>
ingress-nginx-admission-patch-9v4z5        0/1     Completed   1          30s   10.244.2.49   node2   <none>           <none>
ingress-nginx-controller-749f794b9-h8ht5   1/1     Running     0          30s   10.244.1.55   node1   <none>           <none>
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.110.234.117   <none>        80:32454/TCP,443:31862/TCP   3m42s
ingress-nginx-controller-admission   ClusterIP   10.97.39.200     <none>        443/TCP                      3m42s
[root@master ingress]#
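With the controller running, traffic is routed by creating an Ingress resource that maps a host and path to a backing Service. A minimal sketch (the Ingress name, hostname, and backing Service are hypothetical; the nginx class name matches the IngressClass created above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress            # hypothetical name
spec:
  ingressClassName: nginx        # matches the IngressClass created by the controller manifest
  rules:
  - host: myapp.example.com      # hypothetical host; requests for it are routed below
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp          # hypothetical backing Service
            port:
              number: 80
```

Requests would then reach the controller through its NodePort (80:32454 above) with a matching Host header.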