Kubernetes CKA Real Exam Question Walkthrough - 20200402

1.Logs: kubectl logs

Monitor the logs of the foobar Pod, extract the log lines matching the given 'error', and write them to the /logs file.

Set configuration context: $ kubectl config use-context k8s
Monitor the logs of Pod foobar and:
	Extract log lines corresponding to error file-not-found
	Write them to /opt/KULM00201/foobar
  kubectl logs foobar | grep file-not-found > /opt/KULM00201/foobar   # write to whichever output path the question specifies

2.Sorted output: --sort-by=.metadata.name

List all PVs sorted by name and save the output to a file under /opt/. Use kubectl's own functionality to sort the output, and do not manipulate it further.

List all PVs sorted by name, saving the full kubectl output to /opt/KUCC0010/my_volumes. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.
  kubectl get pv --sort-by=.metadata.name > /opt/KUCC0010/my_volumes   # PVs are cluster-scoped, so no namespace flag is needed

3.DaemonSet deployment

Ensure that one nginx Pod runs on every node of the Kubernetes cluster. The nginx Pod must use the nginx image. Do not override any taints currently in the environment. Use a DaemonSet to accomplish this task, and name the DaemonSet ds.

Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the image name which has to be used. Do not override any taints currently in place.

Use Daemonsets to complete this task and use ds.kusc00201 as the Daemonset name
Reference documentation: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
Copy the sample manifest up to the image: gcr.io/fluentd-elasticsearch/fluentd:v2.5.1 line, delete the tolerations field, and then adjust the YAML to match the question.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: nginx

4.initContainers

Add an initContainer to lumpy--koala. The initContainer should create an empty file named /workdir/calm.txt; if /workdir/calm.txt is not detected, the Pod should exit.

	Add an init container to lumpy--koala (which has been defined in spec file /opt/kucc00100/pod-spec-KUCC00100.yaml)
	The init container should create an empty file named /workdir/calm.txt
	If /workdir/calm.txt is not detected, the Pod should exit
	Once the spec file has been updated with the init container definition, the Pod should be created.
    • The YAML file is given in the question; you only need to add the initContainers section and an emptyDir: {} volume (see the liveness note after the snippet below).
      Init container docs: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
      
    Append the following to the given pod spec:
    initContainers:
    - name: init-poda
      image: busybox
      command: ['sh', '-c', 'touch /workdir/calm.txt']
      volumeMounts:
      - name: workdir
        mountPath: "/workdir"
    And add the matching volume at the pod spec level:
    volumes:
    - name: workdir
      emptyDir: {}
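    The "Pod should exit if /workdir/calm.txt is not detected" requirement is normally satisfied by the provided spec file itself, which typically already checks the file with a liveness probe along these lines (an assumption about the given file; verify it rather than relying on this sketch):
    livenessProbe:
      exec:
        command: ['sh', '-c', 'cat /workdir/calm.txt']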

5.Multi-container Pod

Create a Pod named kucc that runs 4 containers inside it: nginx, redis, memcached, and consul.

Create a pod named kucc4 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
https://v1-14.docs.kubernetes.io/docs/concepts/workloads/pods/pod-overview/
apiVersion: v1
kind: Pod
metadata:
  name: kucc
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

6.nodeSelector

Create a Pod named nginx with image nginx, scheduled onto a node carrying the label disk=ssd.

Schedule a Pod as follows:
	Name: nginx-kusc00101
	Image: nginx
	Node selector: disk=ssd
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd

7.Deployment upgrade and rollback (set image --record, rollout undo)

Create a deployment named nginx-app using nginx version 1.11.9 with 3 replicas. Then update the image to version 1.12.0 via a rolling update, recording the update. Finally, roll back to the previous 1.11.9 version.

Create a deployment as follows
	Name: nginx-app
	Using container nginx with version 1.10.2-alpine
	The deployment should contain 3 replicas
Next, deploy the app with new version 1.13.0-alpine by performing a rolling update and record that update.
Finally, rollback that update to the previous version 1.10.2-alpine
kubectl run nginx-app --image=nginx:1.11.9 --replicas=3   # older kubectl creates a Deployment; on newer versions use kubectl create deployment plus kubectl scale
kubectl set image deployment nginx-app nginx-app=nginx:1.12.0 --record   # nginx-app is the container name
kubectl rollout history deployment nginx-app
kubectl rollout undo deployment nginx-app

8.NodePort

Create and configure a service named front-end-service, accessible via NodePort/ClusterIP and routing to the front-end Pod.

Create and configure the service front-end-service so it’s accessible through NodePort and routes to the existing pod named front-end
kubectl expose pod front-end --name=front-end-service --port=80 --type=NodePort

9.namespace

Create a Pod named jenkins using the jenkins image, in the new namespace website-frontend.

 Create a Pod as follows:
	Name: jenkins
	Using image: jenkins
	In a new Kubernetes namespace named website-frontend 
kubectl create ns website-frontend

apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  namespace: website-frontend
spec:
  containers:
  - name: jenkins
    image: jenkins
	
kubectl apply -f ./xxx.yaml 	

10.kubectl run ${deploy-name} --image='' --labels='' --dry-run -o yaml

Create a deployment spec file: use the redis image, 7 replicas, label app_env_stage=dev, deployment name kual00201. Save the spec file to /opt/KUAL00201/deploy_spec.yaml. When finished, clean up (delete) any new k8s API objects created during this task.

Create a deployment spec file that will:
	Launch 7 replicas of the redis image with the label: app_env_stage=dev
	Deployment name: kual00201
Save a copy of this spec file to /opt/KUAL00201/deploy_spec.yaml (or .json)
When you are done, clean up (delete) any new k8s API objects that you produced during this task 
kubectl run kual00201 --image=redis --replicas=7 --labels=app_env_stage=dev --dry-run -o yaml > /opt/KUAL00201/deploy_spec.yaml
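Note that --dry-run only renders the manifest, so no API object is actually created. If you applied anything while testing, a sketch of the cleanup step (assuming the deployment name from the task):
kubectl delete deployment kual00201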

11.Finding Pods via a Service's selector

Create a file /opt/kucc.txt that lists all Pods implementing Service foo in the production namespace, one Pod name per line.

Create a file /opt/KUCC00302/kucc00302.txt that lists all pods that implement Service foo in Namespace production.
The format of the file should be one pod name per line
kubectl describe svc foo -n production | grep Selector   # find the Service's selector, e.g. app=foo

kubectl get pods -n production -l app=foo | grep -v NAME | awk '{print $1}' >> /opt/KUCC00302/kucc00302.txt   # use the label from the selector
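An alternative sketch that reads the Service's selector directly and lets kubectl print bare Pod names (assuming the selector turns out to be app=foo):
kubectl get svc foo -n production -o jsonpath='{.spec.selector}'
kubectl get pods -n production -l app=foo -o custom-columns=':metadata.name' --no-headers > /opt/KUCC00302/kucc00302.txt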

12.Mounting a Secret

Create a secret named super-secret containing the username bob. Create pod1 mounting that secret at /secret, and pod2 referencing the secret through an environment variable named ABC.

Create a Kubernetes Secret as follows:
	Name: super-secret
	Credential: alice  or username:bob 
Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets
Create a second Pod named pod-secrets-via-env using the redis image, which exports credential as TOPSECRET
https://kubernetes.io/zh/docs/concepts/configuration/secret/#%E8%AF%A6%E7%BB%86
echo -n "bob" | base64

apiVersion: v1
kind: Secret
metadata:
  name: super-secret
type: Opaque
data:
  username: Ym9i
  
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/secret"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: super-secret


apiVersion: v1
kind: Pod
metadata:
  name: pod-evn-eee
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: ABC
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: username
  restartPolicy: Never

13.emptyDir

Create a pod named non-persistent-redis that mounts a volume named cache-control at /data/redis, in the pre-prod namespace; the volume must not be persistent.

Create a pod as follows:
	Name: non-persistent-redis
	Container image: redis
	Named-volume with name: cache-control
	Mount path: /data/redis
It should launch in the pre-prod namespace and the volume MUST NOT be persistent.
Answer: The question does not say where on the host to mount, so use emptyDir: {} (the kubelet picks a location). If a specific host path were required, you would use hostPath instead.
1. Create the pre-prod namespace:
kubectl create ns pre-prod
2. Create the YAML file as follows:
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: pre-prod
spec:
  containers:
  - image: redis
    name: redis
    volumeMounts:
    - mountPath: /data/redis
      name: cache-control
  volumes:
  - name: cache-control
    emptyDir: {}

14.deploy scale

Scale the given deployment to 6 replicas.

 kubectl scale deployment website --replicas=6

15.Counting schedulable nodes

Count the nodes in the given cluster that are Ready (excluding nodes tainted NoSchedule).

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum
1. kubectl get node | grep -w Ready | wc -l          #### grep -w matches the exact word
The command above gives a number N.
2. Get a number M with the command below:
kubectl describe nodes | grep Taints | grep -i noschedule | wc -l
3. The answer to write is N minus M.
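A one-shot bash sketch of the same arithmetic, writing the result to the path from the task:
N=$(kubectl get node | grep -w Ready | wc -l)
M=$(kubectl describe nodes | grep Taints | grep -i NoSchedule | wc -l)
echo $((N - M)) > /opt/nodenum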

16.kubectl top

Find the name of the Pod using the most CPU in the specified namespace and write it to the specified file.

From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists)
kubectl top pod -l name=cpu-utilizer --namespace=xxx   # then write the top Pod's name into /opt/cpu.txt
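A sketch that captures the name directly (assuming a kubectl recent enough to support --sort-by for top; xxx is the namespace placeholder from above):
kubectl top pod -l name=cpu-utilizer --namespace=xxx --sort-by=cpu | grep -v NAME | head -1 | awk '{print $1}' > /opt/cpu.txt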

17.svc dns

Create a deployment named nginx-dns exposed through a service named nginx-dns. Ensure the service and pod are accessible via their respective DNS records. The containers use the nginx image. Use the nslookup utility to resolve the service and pod DNS records and write the output to /opt/service.dns and /opt/pod.dns respectively. Be sure to use the busybox:1.28 image for testing.

 Create a deployment as follows
	Name: nginx-dns
	Exposed via a service: nginx-dns
	Ensure that the service & pod are accessible via their respective DNS records
	The container(s) within any Pod(s) running as a part of this deployment should use the nginx image
Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively.
Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.
  • The busybox example is in the DNS docs: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
    
    Step 1: create the deployment
    kubectl run nginx-dns --image=nginx
    Step 2: expose the service
    kubectl expose deployment nginx-dns --name=nginx-dns --port=80 --type=NodePort
    Step 3: look up the pod IP
    kubectl get pods -o wide   # get the pod IP, e.g. 10.244.1.37
    Step 4: test using the busybox 1.28 image
    kubectl run busybox -it --rm --image=busybox:1.28 sh
    / # nslookup nginx-dns    ##### query the nginx-dns service record
    / # nslookup 10.244.1.37  ##### query the pod record
    Step 5:
    Write the records you found into the files required by the question, /opt/service.dns and /opt/pod.dns.
    #### This question is ambiguous; whether you get credit is anyone's guess, so write everything you found into the files and err on the side of completeness.
    1. For nginx-dns:
    echo 'Name: nginx-dns' >> /opt/service.dns
    echo 'Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local' >> /opt/service.dns
    2. For the pod:
    echo 'Name:      10.244.1.37' >> /opt/pod.dns
    echo 'Address 1: 10.244.1.37 10-244-1-37.nginx-dns.default.svc.cluster.local' >> /opt/pod.dns
    

18.etcd backup

Back up the etcd instance at the given HTTPS endpoint to the specified directory, using the provided CA, certificate, and key.

Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db
The etcd instance is running etcd version 3.1.10
The following TLS certificates/key are supplied for connecting to the server with etcdctl
	CA certificate: /opt/KUCM00302/ca.crt
	Client certificate: /opt/KUCM00302/etcd-client.crt
	Client key: /opt/KUCM00302/etcd-client.key 
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUCM00302/ca.crt --cert=/opt/KUCM00302/etcd-client.crt --key=/opt/KUCM00302/etcd-client.key snapshot save /data/backup/etcd-snapshot.db

Some variants of this question error out; check etcdctl -h to see what the flags should be.

19.Node maintenance (drain, cordon, uncordon)

In the ek8s cluster, make the node labelled name=ek8s-node-1 unschedulable and reschedule the pods already running on it.

Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.
First switch the context to the ek8s cluster.
kubectl get nodes -l name=ek8s-node-1   # find the actual node name
kubectl drain ek8s-node-1               # use the node name returned above
# Some people report the command failing unless these flags are added; I did not run into it:
# --ignore-daemonsets=true --delete-local-data=true --force=true

20.node notReady

A node in the given cluster is not in the Ready state; fix the problem, and make the fix persistent.

A Kubernetes worker node, labelled with name=wk8s-node-0 is in state NotReady . Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Switch to the given cluster, then:
kubectl get nodes | grep NotReady
ssh node  
systemctl status kubelet
systemctl start kubelet   
systemctl enable kubelet

21.static pod --pod-manifest-path

The wording is convoluted; roughly, on node1 of the k8s cluster, configure the kubelet service to launch a pod managed directly by the kubelet (i.e. a static pod).

Configure the kubelet systemd managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node.
The spec file should be placed in the /etc/kubernetes/manifest directory (the pod path is given in the question).
	  1. vi /etc/kubernetes/manifest/static-pod.yaml
		Define a Pod in it (see the sketch at the end of this section).
      2. systemctl status kubelet   # find the path of the kubelet.service unit
	  3. vi /etc/systemd/system/kubernetes.service
	  	Check whether --pod-manifest-path=/etc/kubernetes/manifest is set;
	  	add it if it is missing.
	  4. ssh node, then sudo -i
	  5. systemctl daemon-reload, then systemctl restart kubelet.service
	  6. systemctl enable kubelet
      7. Verify: kubectl get pods -n kube-system | grep static-pod
      	The pod name is the static pod name with the node name appended.
 
# Note: the kubelet.service path may not be the one above; check it with the command. It may look like this:
Drop-In: /lib/systemd/system/rc-local.service.d
           └─debian.conf
A drop-in configuration file is defined here, so adding the property directly in /etc/systemd/system/kubernetes.service will not take effect.
# Study systemctl to confirm this part.
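For reference, a minimal sketch of the static pod spec itself, matching the question's single nginx container named myservice (the file name under the manifest directory is arbitrary):
apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
  - name: myservice
    image: nginx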

22.Cluster troubleshooting

In the given cluster, kubelet.service fails to start normally; fix the problem, and make the fix persistent.

 Determine the node, the failing service and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently.
The worker node in this cluster is labelled with name=bk8s-node-0
Scenario 1: kubectl works.
kubectl get cs   # health check; see whether controller-manager is Ready
If it is not Ready: systemctl start kube-controller-manager.service
Scenario 2: kubectl does not work.
2. SSH to bk8s-master-0 and check the services, i.e. the four big master components:
api-server / scheduler / controller-manager / etcd
systemctl list-unit-files | grep controller-manager    # no such service
systemctl list-unit-files | grep api-server            # no such service
3. Now look into the /etc/kubernetes/manifest directory: it contains api-server.yaml, controller-manager.yaml, and so on (4 files), which means these components are run as pods.
4. systemctl status kubelet shows the kubelet itself started normally,
so the api-server, controller-manager, etcd, and scheduler pods simply did not start.
Check the static pod configuration: in the /var/lib/systemd/system/kubelet.service file, the static pod path is wrong.
The exam environment replaced the correct /etc/kubernetes/manifest with /etc/kubernetes/DODKSIYF, a path that does not exist. Change the wrong path back to the directory that actually stores the api/controller-manager/etcd/scheduler YAML files and restart the kubelet; troubleshooting done.
Then check the nodes and everything should be OK.
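A short sketch of the check-and-fix loop; systemctl cat shows every unit and drop-in file the kubelet actually loads, so it works regardless of where the unit lives:
systemctl cat kubelet | grep pod-manifest-path   # locate the configured static pod path
# point it back at the directory holding the component YAML files, then:
systemctl daemon-reload
systemctl restart kubelet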

23.TLS question (a very long question, extremely difficult; recommend skipping it)

24.PV creation

Create a PV with the specified size and path, with access mode ReadWriteOnce.

Create a persistent volume with name app-config of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /srv/app-config

25.Single-master cluster setup

Use kubeadm on Ubuntu to build a one-master, one-worker cluster. The two nodes are provided, with docker already installed via apt-get.

Note: the question will tell you to add the --ignore-preflight-errors=xxx flag when running kubeadm commands.

# On master and node: install kubeadm, kubelet, and kubectl

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Initialize the master
kubeadm init   --ignore-preflight-errors=xxx

# Install the network add-on on the master
kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

# Join the node to the cluster
kubeadm join  *** --ignore-preflight-errors=xxx

The commands above are not complete; they only sketch the approach. Practice building a cluster with kubeadm to fully understand them.
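One related command worth knowing: if the join command printed by kubeadm init is lost, it can be regenerated on the master:
kubeadm token create --print-join-command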

All of these commands appear in the official documentation, linked below:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

You can copy from these pages during the exam.

26.Certificate rotation

I did not personally encounter this question.
