Container probes are used to check whether the application instance inside a container is working properly; they are a long-standing mechanism for keeping a service available. If a probe finds that an instance is not in the expected state, Kubernetes "removes" that instance so that it no longer receives business traffic. Kubernetes provides two kinds of probes for this purpose:
liveness probes: check whether the application instance is currently running normally; if not, Kubernetes restarts the container.
readiness probes: check whether the application instance can currently accept requests; if not, Kubernetes stops forwarding traffic to it.
In short, livenessProbe decides whether to restart the container, while readinessProbe decides whether to forward requests to the container.
Both of these probes currently support three detection methods:
Exec command: run a command inside the container; if the command exits with code 0 the application is considered healthy, otherwise it is not.
……
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
……
TCPSocket: try to open a TCP connection to a port of the container; if the connection can be established the application is considered healthy, otherwise it is not.
……
livenessProbe:
  tcpSocket:
    port: 8080
……
HTTPGet: call a URL of the web application inside the container; if the returned status code is between 200 and 399 the application is considered healthy, otherwise it is not.
……
livenessProbe:
  httpGet:
    path: /          # URI path
    port: 80         # port number
    host: 127.0.0.1  # host address
    scheme: HTTP     # protocol, HTTP or HTTPS
……
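readinessProbe is configured with exactly the same three methods; only the consequence differs (traffic is withheld instead of the container being restarted). The sections below only demonstrate livenessProbe, so here is a minimal sketch of a pod that combines both probe types; the pod name and values are assumptions for illustration only.

apiVersion: v1
kind: Pod
metadata:
  name: pod-probes-demo        # hypothetical name, for illustration
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:             # failure here => kubelet restarts the container
      tcpSocket:
        port: 80
    readinessProbe:            # failure here => pod is removed from Service endpoints, no restart
      httpGet:
        scheme: HTTP
        port: 80
        path: /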
2. Liveness probes
(1) Exec mode: create pod-liveness-exec.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec:
        command: ["/bin/cat","/tmp/hello.txt"]   # run a command that reads a file
Because /tmp/hello.txt does not exist, the probe keeps failing and the container is restarted over and over.
# enter the directory containing the yaml
[root@k8s-master ~]# ls
anaconda-ks.cfg  pod-liveness-exec.yaml
# there is currently no pod in the dev namespace
[root@k8s-master ~]# kubectl get pod -n dev
No resources found in dev namespace.
# create the pod
[root@k8s-master ~]# kubectl apply -f pod-liveness-exec.yaml
pod/pod-liveness-exec created
# check the dev namespace again: the pod has been created
[root@k8s-master ~]# kubectl get pod -n dev
NAME                READY   STATUS              RESTARTS   AGE
pod-liveness-exec   0/1     ContainerCreating   0          5s
# check the pod details
[root@k8s-master ~]# kubectl describe pod pod-liveness-exec -n dev
# the Events section at the bottom shows the liveness probe failing
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  28s   default-scheduler  Successfully assigned dev/pod-liveness-exec to k8s-node2
  Normal   Pulling    27s   kubelet            Pulling image "nginx"
  Normal   Pulled     11s   kubelet            Successfully pulled image "nginx" in 15.48165061s
  Normal   Created    11s   kubelet            Created container nginx
  Normal   Started    11s   kubelet            Started container nginx
  Warning  Unhealthy  7s    kubelet            Liveness probe failed: /bin/cat: /tmp/hello.txt: No such file or directory
# query the pods in dev repeatedly: the restart count keeps increasing
[root@k8s-master ~]# kubectl get pod -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-liveness-exec   1/1     Running   3 (53s ago)   2m54s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                READY   STATUS    RESTARTS      AGE
pod-liveness-exec   1/1     Running   4 (20s ago)   3m1s
Now change the yaml so the probe checks a file that does exist.
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-exec
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      exec:
        command: ["/bin/cat","/usr/share/nginx/html/index.html"]   # read the nginx welcome page; since the container runs nginx, this file is guaranteed to exist
Now look at the result:
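The original output is not reproduced here; a minimal way to verify the result yourself (assuming the same manifest name and the dev namespace) would be:

# re-create the pod with the updated manifest
kubectl delete -f pod-liveness-exec.yaml
kubectl apply -f pod-liveness-exec.yaml

# watch the pod: with an existing file the probe succeeds,
# so the RESTARTS column should stay at 0
kubectl get pod pod-liveness-exec -n dev -w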
(2) TCPSocket mode: create pod-liveness-tcpsocket.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-tcpsocket
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 8080   # try to connect to port 8080, which is not open
Because nothing is listening on port 8080 inside the container, the connection fails.
[root@k8s-master ~]# ls
anaconda-ks.cfg  pod-liveness-tcpsocket.yaml
# create the pod
[root@k8s-master ~]# kubectl apply -f pod-liveness-tcpsocket.yaml
pod/pod-liveness-tcpsocket created
# get the pod
[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS   AGE
pod-liveness-tcpsocket   1/1     Running   0          12s
# the details show that the probe's connection is refused
[root@k8s-master ~]# kubectl describe pod pod-liveness-tcpsocket -n dev
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  23s               default-scheduler  Successfully assigned dev/pod-liveness-tcpsocket to k8s-node2
  Normal   Pulling    22s               kubelet            Pulling image "nginx"
  Normal   Pulled     21s               kubelet            Successfully pulled image "nginx" in 475.556438ms
  Normal   Created    21s               kubelet            Created container nginx
  Normal   Started    21s               kubelet            Started container nginx
  Warning  Unhealthy  2s (x2 over 12s)  kubelet            Liveness probe failed: dial tcp 172.17.169.138:8080: connect: connection refused
# query the pod repeatedly: the restart count keeps increasing here as well
[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS      AGE
pod-liveness-tcpsocket   1/1     Running   3 (32s ago)   2m13s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS      AGE
pod-liveness-tcpsocket   1/1     Running   3 (46s ago)   2m27s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                     READY   STATUS    RESTARTS      AGE
pod-liveness-tcpsocket   1/1     Running   4 (16s ago)   2m37s
If you change tcpSocket.port to 80 and repeat the steps above, you will find that the container starts and keeps running normally.
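For reference, only the probe section needs to change; the corrected version would look like this:

livenessProbe:
  tcpSocket:
    port: 80   # nginx listens on 80, so the connection succeeds and the probe passes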
(3) HTTPGet mode: create pod-liveness-httpget.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: pod-liveness-httpget
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: nginx-port
      containerPort: 80
    livenessProbe:
      httpGet:          # equivalent to requesting http://127.0.0.1:80/hello
        scheme: HTTP    # protocol, HTTP or HTTPS
        port: 80        # port number
        path: /hello    # URI path, which does not exist
# create the pod
[root@k8s-master ~]# kubectl apply -f pod-liveness-httpget.yaml
pod/pod-liveness-httpget created
# get the pod
[root@k8s-master ~]# kubectl get pod -n dev
NAME                   READY   STATUS              RESTARTS   AGE
pod-liveness-httpget   0/1     ContainerCreating   0          7s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                   READY   STATUS              RESTARTS   AGE
pod-liveness-httpget   0/1     ContainerCreating   0          13s
# the pod details show that the HTTP probe returned 404
[root@k8s-master ~]# kubectl describe pod pod-liveness-httpget -n dev
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  24s   default-scheduler  Successfully assigned dev/pod-liveness-httpget to k8s-node2
  Normal   Pulling    23s   kubelet            Pulling image "nginx"
  Normal   Pulled     8s    kubelet            Successfully pulled image "nginx" in 15.416092349s
  Normal   Created    8s    kubelet            Created container nginx
  Normal   Started    8s    kubelet            Started container nginx
  Warning  Unhealthy  4s    kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
# query the pod repeatedly: the restart count keeps increasing
[root@k8s-master ~]# kubectl get pod -n dev
NAME                   READY   STATUS    RESTARTS   AGE
pod-liveness-httpget   1/1     Running   0          36s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                   READY   STATUS    RESTARTS     AGE
pod-liveness-httpget   1/1     Running   1 (3s ago)   43s
[root@k8s-master ~]# kubectl get pod -n dev
NAME                   READY   STATUS    RESTARTS      AGE
pod-liveness-httpget   1/1     Running   2 (47s ago)   117s
If you change httpGet.path to / and repeat the steps above, you will find that the container starts and keeps running normally.
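The corrected probe then points at the nginx welcome page, which returns 200:

livenessProbe:
  httpGet:
    scheme: HTTP
    port: 80
    path: /    # the nginx welcome page exists, so the probe gets a 200 and passes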
So far we have demonstrated all three detection methods with livenessProbe. If you look at the sub-fields of livenessProbe, though, you will find that besides these three methods there are a few other configuration options, which are explained here as well:
[root@k8s-master01 ~]# kubectl explain pod.spec.containers.livenessProbe
FIELDS:
   exec
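The output above is cut off at this point; the remaining fields are the probe's timing and threshold settings. As a rough sketch of how they are typically used (the concrete values below are illustrative assumptions, not requirements):

livenessProbe:
  httpGet:
    scheme: HTTP
    port: 80
    path: /
  initialDelaySeconds: 30   # wait 30s after the container starts before the first probe
  timeoutSeconds: 1         # probe timeout; 1s is the default
  periodSeconds: 10         # probe every 10s; 10s is the default
  failureThreshold: 3       # consider the probe failed after 3 consecutive failures; default is 3
  successThreshold: 1       # consider the probe successful after 1 success; must be 1 for liveness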