I. Static Pods
A static Pod is managed directly by the kubelet and exists only on a specific Node. It cannot be managed through the API Server, cannot be associated with a ReplicationController, Deployment, or DaemonSet, and the kubelet cannot perform health checks on it. Static Pods are always created by the kubelet and always run on the Node where that kubelet resides.
There are two ways to create a static Pod: from a configuration file or over HTTP.
In a cluster installed with kubeadm, the kubelet is already configured with the static Pod manifest path:
# cat /var/lib/kubelet/config.yaml |grep staticPodPath
staticPodPath: /etc/kubernetes/manifests
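For the HTTP method, the kubelet can instead fetch Pod manifests from a URL. A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the `staticPodURL` field of the v1beta1 schema (the URL is a placeholder):

```yaml
# /var/lib/kubelet/config.yaml (fragment) -- hypothetical example
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# the kubelet periodically fetches Pod manifests from this URL
staticPodURL: "http://example.com/manifests/static-web.yaml"
```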
1. Place a static Pod YAML file in that path
# cat static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    name: static-web
spec:
  containers:
  - name: static-web
    image: nginx
    ports:
    - name: web
      containerPort: 80
2. Check the Docker containers
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b682d21563dd nginx "nginx -g 'daemon of…" 6 minutes ago Up 6 minutes k8s_static-web_static-web-k8s-2_default_a850d62a685464dd2c0bdb31222085c9_0
3. Check the Pod status
# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 1/1 Running 0 7m14s
4. Try to delete the static Pod
[root@K8S-1 chapter1]# kubectl delete pod static-web-k8s-2
pod "static-web-k8s-2" deleted
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 0/1 Pending 0 1s
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 0/1 Pending 0 4s
[root@K8S-1 chapter1]# kubectl get pod
NAME READY STATUS RESTARTS AGE
static-web-k8s-2 1/1 Running 0 6s
5. As shown above, deleting the Pod through the API Server only removes its mirror Pod, which the kubelet immediately recreates. To actually remove a static Pod, delete its YAML file from /etc/kubernetes/manifests:
# kubectl get pod
No resources found.
II. Sharing a Volume Between Containers in a Pod
Multiple containers in the same Pod can share Pod-level storage volumes. A Volume can be defined as any of several types, and each container mounts it independently, allowing the containers to share data.
Configure the YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh", "-c", "tail -f /logs/catalina*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
Create the Pod:
# kubectl apply -f pod-volume-logs.yaml
pod/volume-pod created
# kubectl get pod
NAME READY STATUS RESTARTS AGE
volume-pod 2/2 Running 0 4m42s
The Pod contains two containers: tomcat, which writes log files, and busybox, which reads them:
# kubectl logs volume-pod -c busybox
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [26] ms
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [380] ms
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [38] ms
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
31-May-2019 16:54:40.025 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [32] ms
31-May-2019 16:54:40.031 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
31-May-2019 16:54:40.045 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
31-May-2019 16:54:40.093 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 992 ms
# kubectl exec -it volume-pod -c tomcat -- ls /usr/local/tomcat/logs
catalina.2019-05-31.log localhost_access_log.2019-05-31.txt
host-manager.2019-05-31.log manager.2019-05-31.log
localhost.2019-05-31.log
# kubectl exec -it volume-pod -c tomcat -- tail /usr/local/tomcat/logs/catalina.2019-05-31.log
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [26] ms
31-May-2019 16:54:39.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [380] ms
31-May-2019 16:54:39.954 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [38] ms
31-May-2019 16:54:39.992 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
31-May-2019 16:54:40.025 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [32] ms
31-May-2019 16:54:40.031 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
31-May-2019 16:54:40.045 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
31-May-2019 16:54:40.093 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 992 ms
III. Pod Configuration Management with ConfigMap
ConfigMap provides a mechanism for injecting configuration data into containers while keeping the container images independent of Kubernetes. A ConfigMap can be used in the following ways:
1. As environment variables inside the container
2. To set command-line arguments for the container
3. Mounted as files or directories inside the container via a Volume
A ConfigMap can be created from a YAML configuration file or directly with the kubectl create configmap command.
1. Creating from a directory
When --from-file points to a directory, each file name in the directory becomes a key in the ConfigMap, and the contents of that file become the key's value:
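The mapping from directory contents to ConfigMap data can be sketched in a few lines of Python (a rough emulation of kubectl's behavior, not its actual implementation):

```python
from pathlib import Path

def configmap_data_from_dir(path):
    # Emulate `kubectl create configmap --from-file=<dir>`:
    # each regular file's name becomes a key, its contents the value.
    return {p.name: p.read_text() for p in Path(path).iterdir() if p.is_file()}
```

For example, a directory containing my.cnf and web.xml yields a mapping with the keys "my.cnf" and "web.xml", exactly as in the describe output below.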
# ls
my.cnf web.xml
# cat my.cnf
general_log=on
slow_query_log=on
long_query_time = 4
log_bin=on
log-bin=/usr/local/mysql/data/bin.log
# cat web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
<distributable/>
......
......
<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
<welcome-file>index.jsp</welcome-file>
</welcome-file-list>
</web-app>
# kubectl create configmap test1 --from-file configfiles
configmap/test1 created
# kubectl describe configmap test1
Name: test1
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
my.cnf:
----
general_log=on
slow_query_log=on
long_query_time = 4
log_bin=on
log-bin=/usr/local/mysql/data/bin.log
web.xml:
----
<?xml version="1.0" encoding="ISO-8859-1"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
version="3.0">
<distributable/>
......
......
<welcome-file-list>
<welcome-file>index.html</welcome-file>
<welcome-file>index.htm</welcome-file>
<welcome-file>index.jsp</welcome-file>
</welcome-file-list>
</web-app>
Events: <none>
2. Creating from files
When creating from files with --from-file, you can specify the key name, and a single command can create a ConfigMap containing multiple keys:
# kubectl create configmap test2 --from-file=my.cnf --from-file=web.xml
configmap/test2 created
# kubectl get configmap test2 -o yaml
apiVersion: v1
data:
  my.cnf: |
    general_log=on
    slow_query_log=on
    long_query_time = 4
    log_bin=on
    log-bin=/usr/local/mysql/data/bin.log
  web.xml: |
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
      http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
      version="3.0">
      <distributable/>
      ......
      ......
      <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
      </welcome-file-list>
    </web-app>
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T10:49:09Z"
  name: test2
  namespace: default
  resourceVersion: "329312"
  selfLink: /api/v1/namespaces/default/configmaps/test2
  uid: e260c9aa-845a-11e9-a2f2-00505694834d
Instead of using the file name as the key, you can assign a new key to each file with key=file syntax:
# kubectl create configmap test3 --from-file=the.cnf=my.cnf
configmap/test3 created
# kubectl get configmap test3 -o yaml
apiVersion: v1
data:
  the.cnf: |
    general_log=on
    slow_query_log=on
    long_query_time = 4
    log_bin=on
    log-bin=/usr/local/mysql/data/bin.log
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T10:53:41Z"
  name: test3
  namespace: default
  resourceVersion: "329706"
  selfLink: /api/v1/namespaces/default/configmaps/test3
  uid: 849a9eab-845b-11e9-a2f2-00505694834d
3. Using --from-literal to specify key-value pairs directly on the command line
# kubectl create configmap test4 --from-literal=type=null --from-literal=dir=/var/log
configmap/test4 created
# kubectl describe configmap test4
Name: test4
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
dir:
----
/var/log
type:
----
null
Events: <none>
4. Using a ConfigMap through environment variables
# cat cm-app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-app
data:
  apploglevel: info
  appdatadir: /var/data
# cat cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-env
data:
  APPTYPE: char
# kubectl create -f cm-app.yaml -f cm-env.yaml
configmap/cm-app created
configmap/cm-env created
# cat cm-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-test
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "/bin/sh", "-c", "env | grep APP" ]
    env:
    - name: APPLOGLEVEL
      valueFrom:
        configMapKeyRef:
          name: cm-app
          key: apploglevel
    - name: APPDATADIR
      valueFrom:
        configMapKeyRef:
          name: cm-app
          key: appdatadir
    envFrom:
    - configMapRef:
        name: cm-env
  restartPolicy: Never
# kubectl create -f cm-test.yaml
pod/cm-test created
# kubectl logs cm-test
APPDATADIR=/var/data
APPTYPE=char
APPLOGLEVEL=info
Two ways of defining environment variables are used above: env and envFrom. With envFrom, every key=value pair defined in the ConfigMap is automatically turned into an environment variable in the Pod.
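The second usage listed earlier, setting command-line arguments, builds on env: a variable defined from the ConfigMap can be referenced in command or args with $(VAR) syntax, which Kubernetes expands before the container starts. A minimal sketch reusing the cm-app ConfigMap (the Pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-args   # hypothetical example
spec:
  containers:
  - name: test-container
    image: busybox
    # $(APPLOGLEVEL) is expanded by Kubernetes, not by the shell
    command: [ "echo", "log level: $(APPLOGLEVEL)" ]
    env:
    - name: APPLOGLEVEL
      valueFrom:
        configMapKeyRef:
          name: cm-app
          key: apploglevel
  restartPolicy: Never
```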
5. Using a ConfigMap through volumeMounts
# cat cm-app.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-app
data:
  apploglevel: info
  appdatadir: /var/data
# cat cm-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume
spec:
  containers:
  - name: cm-volume
    image: busybox
    command: [ "/bin/sh", "-c", "cat /etc/config/path/key.app" ]
    volumeMounts:
    - name: volume-test       # references the volume by name
      mountPath: /etc/config  # mount directory inside the container
  volumes:
  - name: volume-test         # defines the volume name
    configMap:
      name: cm-app            # the ConfigMap to use
      items:
      - key: apploglevel
        path: path/key.app    # the value is written to the file key.app
  restartPolicy: Never
After the ConfigMap and Pod are created, the Pod outputs:
# kubectl logs cm-volume
info
If items is not specified when referencing the ConfigMap, the volumeMounts approach generates one file for each key under the mount directory in the container, with each file named after its key.
Restrictions on using ConfigMaps:
- A ConfigMap must be created before the Pod that references it
- ConfigMaps are scoped to a Namespace; only Pods in the same Namespace can reference them
- Static Pods cannot reference ConfigMaps
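Note that mounting a ConfigMap volume at a directory hides any files the image already had there. A common workaround is to mount a single key with subPath so only that one file is projected; a sketch reusing the cm-app ConfigMap (the Pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-subpath   # hypothetical example
spec:
  containers:
  - name: cm-subpath
    image: busybox
    command: [ "sh", "-c", "cat /etc/app/apploglevel" ]
    volumeMounts:
    - name: volume-test
      # mount a single key as one file; other files in /etc/app stay visible
      mountPath: /etc/app/apploglevel
      subPath: apploglevel
  volumes:
  - name: volume-test
    configMap:
      name: cm-app
  restartPolicy: Never
```

One caveat: files mounted via subPath do not receive updates when the ConfigMap changes.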
IV. The Downward API: Exposing Pod Information to Containers
The Downward API lets a container obtain information about its own Pod and Node, either through environment variables or through volume files. For example, as environment variables:
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
  valueFrom:
    fieldRef:
      fieldPath: spec.serviceAccountName
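Besides environment variables, the same Pod fields can be exposed as files through a downwardAPI volume; a minimal sketch (the Pod name, label, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume   # hypothetical example
  labels:
    zone: test
spec:
  containers:
  - name: main
    image: busybox
    command: [ "sh", "-c", "cat /etc/podinfo/labels" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"              # written as /etc/podinfo/labels
        fieldRef:
          fieldPath: metadata.labels
  restartPolicy: Never
```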
V. Pod Status and Health Checks
1. Pod phases
- Pending: the API Server has created the Pod, but one or more container images in the Pod have not yet been created, which includes images still being downloaded
- Running: all containers in the Pod have been created, and at least one container is running, starting, or restarting
- Succeeded: all containers in the Pod exited successfully and will not be restarted
- Failed: all containers in the Pod have terminated, and at least one container exited in a failed state, i.e. with a non-zero exit code or terminated by the system
- Unknown: the Pod's status could not be obtained for some reason, usually because communication with the Pod's host failed
2. Pod restart policy
A Pod's restart policy (RestartPolicy) can be Always, OnFailure, or Never; the default is Always.
- Always: the kubelet automatically restarts the container whenever it fails
- OnFailure: the kubelet automatically restarts the container when it terminates with a non-zero exit code
- Never: the container is never restarted regardless of its state
3. Pod health checks
LivenessProbe: checks whether the container is alive
ReadinessProbe: checks whether the container is ready to serve requests
A probe can be implemented in any of three ways:
- ExecAction: execute the specified command inside the container; the container is considered healthy if the command exits with code 0
- TCPSocketAction: perform a TCP check against the container's IP address on the specified port; the container is considered healthy if a connection can be established
- HTTPGetAction: perform an HTTP GET request against the container's IP address on the specified port and path; the container is considered healthy if the response status code is at least 200 and less than 400
Configuring an exec probe
# cat pod-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
Check the Pod events:
# kubectl describe pod liveness-exec
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m10s default-scheduler Successfully assigned default/liveness-exec to k8s-2
Normal Pulled 77s (x3 over 3m59s) kubelet, k8s-2 Successfully pulled image "busybox"
Normal Created 76s (x3 over 3m59s) kubelet, k8s-2 Created container liveness
Normal Started 76s (x3 over 3m59s) kubelet, k8s-2 Started container liveness
Warning Unhealthy 35s (x9 over 3m35s) kubelet, k8s-2 Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
Normal Killing 35s (x3 over 3m15s) kubelet, k8s-2 Container liveness failed liveness probe, will be restarted
Normal Pulling 5s (x4 over 4m10s) kubelet, k8s-2 Pulling image "busybox"
# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-exec 1/1 Running 3 4m50s
# The RESTARTS count is now 3
Configuring a TCP probe
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
Configuring an HTTP probe
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /_status/healthz
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
The nginx image does not serve /_status/healthz, so the probe fails with a 404 and the kubelet keeps restarting the container, as the Pod events (kubectl describe pod liveness-http) show:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m37s default-scheduler Successfully assigned default/liveness-http to k8s-2
Warning Unhealthy 14s (x6 over 104s) kubelet, k8s-2 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 14s (x2 over 84s) kubelet, k8s-2 Container nginx failed liveness probe, will be restarted
Normal Pulling 13s (x3 over 2m36s) kubelet, k8s-2 Pulling image "nginx"
Normal Pulled 5s (x3 over 2m19s) kubelet, k8s-2 Successfully pulled image "nginx"
Normal Created 5s (x3 over 2m18s) kubelet, k8s-2 Created container nginx
Normal Started 5s (x3 over 2m18s) kubelet, k8s-2 Started container nginx
# kubectl get pod
NAME READY STATUS RESTARTS AGE
liveness-http 1/1 Running 2 3m31s
For each probe type, the initialDelaySeconds and timeoutSeconds parameters should be set; they specify how long to wait before the first check and the probe timeout, respectively.
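A ReadinessProbe is configured with the same three mechanisms; the difference is that a failing readiness probe removes the Pod from Service endpoints instead of restarting the container. A minimal sketch for an nginx container (the / path and periodSeconds value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-readiness   # hypothetical example
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10   # how often to probe
      timeoutSeconds: 1
```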