Manually Setting Up Kubernetes 1.11
This experiment builds an internal k8s cluster by hand, i.e. without authentication:
1. Build the VMs with Vagrant and VirtualBox (a minimal Vagrantfile sketch follows this list).
2. The design is one etcd instance and one node; for now the master does not double as a worker, so the cluster is one master plus one node.
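The Vagrantfile itself is not included in these notes; a minimal sketch matching the topology might look like this (the `ubuntu/xenial64` box name and VM sizing are assumptions; the IPs and hostnames come from the cluster info below):

```ruby
# Vagrantfile — minimal two-node sketch; box name and sizing are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"   # Ubuntu 16.04, as used in this post

  { "node1" => "192.168.59.11", "node2" => "192.168.59.12" }.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider "virtualbox" do |vb|
        vb.memory = 2048   # node2 reports ~2 GiB capacity later in this post
        vb.cpus = 1
      end
    end
  end
end
```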
Preface
I had long wanted a simple tutorial on setting up k8s by hand (without authentication), as a first step toward learning k8s and forming a basic mental model. But the material I found either did not match my version or relied on tools to automate the setup, so in the end I built it manually and here record the problems that came up.
cluster info
node1 is the master; node2 is the cluster's worker node.

| name         | ip            |
|--------------|---------------|
| master/node1 | 192.168.59.11 |
| node2        | 192.168.59.12 |

Setup process
Overview
1. Obtain the required kubernetes binaries; here they are built from source.
2. Obtain the etcd binary, likewise built from source.
3. Start the virtual machines, using VirtualBox and Vagrant with Ubuntu 16.04 as the VM OS.
4. On the master node, configure etcd, kube-apiserver, kube-controller-manager, and kube-scheduler (a sketch of the apiserver unit follows this list).
5. On node2, configure kubelet and kube-proxy.
6. Verify the cluster: run kubectl get nodes on the master to view the cluster's status. For the detailed steps, see:
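The master-side unit files are not reproduced in these notes; a minimal no-auth sketch of the kube-apiserver unit could look like the following (the etcd URL and service CIDR are assumed values; the insecure HTTP port 8080 matches the server address used throughout this post). kube-controller-manager and kube-scheduler can then point at the same endpoint via `--master=http://192.168.59.11:8080`.

```
# /etc/systemd/system/kube-apiserver.service — a sketch, not taken from the
# original setup; etcd URL and service CIDR are assumed values.
[Unit]
Description=Kubernetes API Server
After=etcd.service
Wants=etcd.service

[Service]
ExecStart=/usr/bin/kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --insecure-bind-address=0.0.0.0 \
  --insecure-port=8080 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
```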
Result:
```
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
```
QA
- Q: k8s has changed considerably between versions; the reference material targets pre-1.8 releases, and many startup flags have changed.
  A: In k8s 1.8+ the kubelet's `--api-servers` flag is gone; the API server address is supplied through a kubeconfig file (kubelet.kubeconfig) instead.
- kubelet startup parameters:
```
# kubelet.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS $KUBELET_ADDRESS

# /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=192.168.59.12"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
  --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
```
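With the unit and environment file in place, the kubelet is started the standard systemd way (these commands are not in the original notes but are the usual procedure):

```
systemctl daemon-reload        # pick up the new/edited unit file
systemctl enable kubelet       # start on boot
systemctl start kubelet
systemctl status kubelet       # verify it is active (running)
```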
- bootstrap.kubeconfig (if authentication were required, SSL certificates and the like would be generated and filled in here):
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    server: https://192.168.59.11:8080
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token:
```
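This file does not have to be written by hand; a sketch of generating it with `kubectl config` (using the https scheme to match the file above; the QA below explains why that scheme has to change in a no-auth setup):

```
KUBECONFIG_FILE=/etc/kubernetes/bootstrap.kubeconfig
kubectl config set-cluster kubernetes \
  --server=https://192.168.59.11:8080 --kubeconfig=$KUBECONFIG_FILE
kubectl config set-credentials kubelet-bootstrap --kubeconfig=$KUBECONFIG_FILE
kubectl config set-context default \
  --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=$KUBECONFIG_FILE
kubectl config use-context default --kubeconfig=$KUBECONFIG_FILE
```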
- Q: On the master, `kubectl get node` returns `No resources found.`
  A: The node never registered because the connection failed: this experiment runs without authentication, but the server address in bootstrap.kubeconfig uses https, which implies TLS. Changing the scheme to http restores access.
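One way to make that change (a sketch; the path is the one used above):

```
# Point the kubelet at the API server's insecure HTTP port instead of HTTPS.
sed -i 's|server: https://192.168.59.11:8080|server: http://192.168.59.11:8080|' \
  /etc/kubernetes/bootstrap.kubeconfig
systemctl restart kubelet
```

After the kubelet restarts and re-registers, `kubectl get nodes` on the master returns the node: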
```
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
```
Troubleshooting process
Check the kubelet service's error log:
```
kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-08-25 13:49:07 UTC; 12h ago
 Main PID: 14611 (kubelet)
    Tasks: 12
   Memory: 43.4M
      CPU: 42.624s
   CGroup: /system.slice/kubelet.service
           └─14611 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --address=192.168.59.12

Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.960652   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.966460   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.016605   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.59.11:8080/api/v1/services?limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.963891   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
```
The requests in the log are failing, so test the endpoint directly:

https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2

Hitting the https URL with httpie fails with an SSL error, i.e. a TLS/authentication problem; so switch the scheme to http and test again.

`http https://192.168.59.11:8080/…`:
```
root@node2:/etc/systemd/system# http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
[1] 22638
[2] 22639
root@node2:/etc/systemd/system# http: error: SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)

[1]- Exit 1    http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
[2]+ Done      limit=500
root@node2:/etc/systemd/system#
```
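As an aside, the `[1] 22638` / `[2] 22639` job numbers appear because the unquoted `&` characters in the URL make the shell background the command and split `limit=500` and `resourceVersion=0` off as separate commands; only the `fieldSelector` part actually reaches httpie. Quoting the URL sends the whole query string:

```
http 'https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0'
```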
`http http://192.168.59.11:8080/…` gets a response:
```
root@node2:/etc/systemd/system# http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
[1] 22833
[2] 22834
root@node2:/etc/systemd/system# HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 26 Aug 2018 03:53:20 GMT
Transfer-Encoding: chunked

{
  "apiVersion": "v1",
  "items": [
    {
      "metadata": {
        "annotations": {
          "node.alpha.kubernetes.io/ttl": "0",
          "volumes.kubernetes.io/controller-managed-attach-detach": "true"
        },
        "creationTimestamp": "2018-08-26T02:33:27Z",
        "labels": {
          "beta.kubernetes.io/arch": "amd64",
          "beta.kubernetes.io/os": "linux",
          "kubernetes.io/hostname": "node2"
        },
        "name": "node2",
        "resourceVersion": "15170",
        "selfLink": "/api/v1/nodes/node2",
        "uid": "69b8f2ca-a8d8-11e8-a889-02483e15b50c"
      },
      "spec": {},
      "status": {
        "addresses": [
          { "address": "192.168.59.12", "type": "InternalIP" },
          { "address": "node2", "type": "Hostname" }
        ],
        "allocatable": {
          "cpu": "1",
          "ephemeral-storage": "9306748094",
          "hugepages-2Mi": "0",
          "memory": "1945760Ki",
          "pods": "110"
        },
        "capacity": {
          "cpu": "1",
          "ephemeral-storage": "10098468Ki",
          "hugepages-2Mi": "0",
          "memory": "2048160Ki",
          "pods": "110"
        },
        "conditions": [
          {
            "lastHeartbeatTime": "2018-08-26T03:53:13Z",
            "lastTransitionTime": "2018-08-26T03:15:11Z",
            "message": "kubelet has sufficient disk space available",
            "reason": "KubeletHasSufficientDisk",
            "status": "False",
            "type": "OutOfDisk"
          },
          {
            "lastHeartbeatTime": "2018-08-26T03:53:13Z",
            "lastTransitionTime": "2018-08-26T03:15:11Z",
            "message": "kubelet has sufficient memory available",
            "reason": "KubeletHasSufficientMemory",
            "status": "False",
            "type": "MemoryPressure"
          },
          {
            "lastHeartbeatTime": "2018-08-26T03:53:13Z",
            "lastTransitionTime": "2018-08-26T03:15:11Z",
            "message": "kubelet has no disk pressure",
            "reason": "KubeletHasNoDiskPressure",
            "status": "False",
            "type": "DiskPressure"
          },
          {
            "lastHeartbeatTime": "2018-08-26T03:53:13Z",
            "lastTransitionTime": "2018-08-26T02:33:27Z",
            "message": "kubelet has sufficient PID available",
            "reason": "KubeletHasSufficientPID",
            "status": "False",
            "type": "PIDPressure"
          },
          {
            "lastHeartbeatTime": "2018-08-26T03:53:13Z",
            "lastTransitionTime": "2018-08-26T03:15:21Z",
            "message": "kubelet is posting ready status. AppArmor enabled",
            "reason": "KubeletReady",
            "status": "True",
            "type": "Ready"
          }
        ],
        "daemonEndpoints": {
          "kubeletEndpoint": { "Port": 10250 }
        },
        "nodeInfo": {
          "architecture": "amd64",
          "bootID": "f4cb0a01-e5b9-4851-83d9-ea6556bd285e",
          "containerRuntimeVersion": "docker://17.3.2",
          "kernelVersion": "4.4.0-133-generic",
          "kubeProxyVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2",
          "kubeletVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2",
          "machineID": "fe02b8afeb1041cfa61a6b1d40371316",
          "operatingSystem": "linux",
          "osImage": "Ubuntu 16.04.5 LTS",
          "systemUUID": "98A4443F-059B-462C-900A-AFA32971670D"
        }
      }
    }
  ],
  "kind": "NodeList",
  "metadata": {
    "resourceVersion": "15179",
    "selfLink": "/api/v1/nodes"
  }
}

[1]- Done    http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
[2]+ Done    limit=500
root@node2:/etc/systemd/system#
```
Test on the master: `kubectl get node` now finds the resource:

```
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready     <none>    1h        v1.11.3-beta.0.3+798ca4d3ceb5b2
```
In summary: k8s startup flags change between versions, so building a cluster from older material will run into problems; the one solved here was the removal of the kubelet's `--api-servers` flag. My suggestion: follow a classic guide for the overall process, but as soon as something breaks, check the official documentation for your exact version.
Next steps
- Run a demo on the cluster
- Add an authentication mechanism