Ubuntu (18.04) k8s Installation Tutorial
-
Switch the Docker registry mirror
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
-
Restart Docker
systemctl daemon-reload && systemctl restart docker
-
Disable swap
# Edit the partition configuration file /etc/fstab and comment out the swap line.
# Note: reboot Linux after making this change.
vim /etc/fstab

# Before:
/swap.img	none	swap	sw	0	0
# After:
# /swap.img	none	swap	sw	0	0
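As an alternative to editing the file by hand in vim, both steps can be scripted. A minimal sketch, assuming a standard /etc/fstab layout; the helper name is my own:

```shell
# Disable swap for the running session first (requires root):
#   swapoff -a
# Then persist it by commenting out every swap entry in an fstab-style file.
# Hypothetical helper; call as: disable_swap_entries /etc/fstab
disable_swap_entries() {
    # Prefix any uncommented line whose filesystem type is "swap" with '# '.
    sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' "$1"
}
```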
-
Reboot the system
reboot
-
Add the Aliyun Kubernetes apt source and key, then update apt
# Open /etc/apt/sources.list and add the following line:
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
apt install curl -y && curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - && apt update
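Instead of editing /etc/apt/sources.list directly, the repo line can also live in its own file under /etc/apt/sources.list.d/. A sketch; the helper and filename are my own choices:

```shell
# Write the Aliyun Kubernetes repo line to a dedicated apt source file.
# Hypothetical helper; call as root:
#   add_k8s_repo /etc/apt/sources.list.d/kubernetes.list && apt update
add_k8s_repo() {
    echo 'deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main' > "$1"
}
```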
-
Install kubectl, kubelet, and kubeadm (1.17.17)
apt install kubeadm=1.17.17-00 kubelet=1.17.17-00 kubectl=1.17.17-00
-
Pull the base k8s images. Because of the GFW, the native Google images cannot be pulled, so pull the mirrored copies from the Aliyun registry and retag them
#!/bin/bash
# pull_k8s.sh
images=(
    kube-apiserver:v1.17.17
    kube-controller-manager:v1.17.17
    kube-scheduler:v1.17.17
    kube-proxy:v1.17.17
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
-
Make the shell script executable and run it
chmod +x pull_k8s.sh && bash pull_k8s.sh
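After the script finishes, it is worth checking that every retagged image is actually present locally. A sketch: feed it the output of `docker images --format '{{.Repository}}:{{.Tag}}'` (the helper name is my own):

```shell
# Print the expected k8s.gcr.io images that are missing from a
# newline-separated list of "repo:tag" names.
# Usage:
#   missing_images "$(docker images --format '{{.Repository}}:{{.Tag}}')" \
#       kube-apiserver:v1.17.17 kube-controller-manager:v1.17.17 \
#       kube-scheduler:v1.17.17 kube-proxy:v1.17.17 \
#       pause:3.1 etcd:3.4.3-0 coredns:1.6.5
missing_images() {
    local have="$1"; shift
    for img in "$@"; do
        printf '%s\n' "$have" | grep -qx "k8s.gcr.io/$img" \
            || echo "missing: k8s.gcr.io/$img"
    done
}
```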
-
Run the kubelet command to check that kubelet can start
kubelet  # if no unexpected errors are reported, kubelet will run normally
-
Run the following command to bootstrap the k8s master node
kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.17.17 \
    --service-cidr=10.96.0.0/12 \
    --pod-network-cidr=10.244.0.0/16
If the following output appears, the k8s master node was bootstrapped successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.102:6443 --token dgt6ww.466onmm47x5hifwl \
--discovery-token-ca-cert-hash sha256:7033f3d2597cfe04ecdc0c97c0908ca46f749c4c1e4b7b0e6ddc59fe51aa1cc4
-
Copy the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
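If you are working as root, an alternative to copying the file is to point kubectl at the admin kubeconfig directly:

```shell
# Alternative for the root user only: use the admin kubeconfig in place.
# /etc/kubernetes/admin.conf is the default path written by kubeadm init.
export KUBECONFIG=/etc/kubernetes/admin.conf
```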
-
Check the status of the master node's control-plane components
kubectl get cs
If scheduler and controller-manager show as Unhealthy, do the following:
# Comment out the "- --port=0" line in each of these files:
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
vim /etc/kubernetes/manifests/kube-scheduler.yaml
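Editing by hand works; the same change can also be made with sed. A sketch; the helper name is my own:

```shell
# Comment out the "- --port=0" argument in a static-pod manifest; the kubelet
# watches /etc/kubernetes/manifests and restarts the pod automatically.
# Hypothetical helper; call as root:
#   disable_port_zero /etc/kubernetes/manifests/kube-controller-manager.yaml
#   disable_port_zero /etc/kubernetes/manifests/kube-scheduler.yaml
disable_port_zero() {
    sed -i 's/^\([[:space:]]*\)- --port=0[[:space:]]*$/\1# - --port=0/' "$1"
}
```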
After commenting them out, wait a moment and query again
kubectl get cs
Both should now show as healthy.
-
Check the status of each Pod on the master node
kubectl get pod -n kube-system
If the coredns-xx-xx Pods are stuck in Pending, kube-flannel is not installed and running; execute the following command
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If you see
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
then it is a network problem; create the following file locally and apply it instead.

# kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Run
kubectl apply -f kube-flannel.yml
Once the following output appears, wait a moment and the two coredns-xx-xx Pods will start running normally
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
-
Start a Pod from a declarative configuration
# nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: nginx:latest
Run
kubectl apply -f nginx.yaml
and a Pod will be created. If you are creating the Pod on the master node, you must remove the master node's taint before the Pod can be scheduled there; by default, the master node refuses to schedule Pods. The following command removes the master taint from all nodes:
kubectl taint nodes --all node-role.kubernetes.io/master-
-
Run
kubectl get pod -n dev
Once the created Pod appears with status Running, it was created and is running successfully.