A detailed walkthrough of installing kubernetes and istio

Before we begin

First, my machine setup: three brand-new bare-metal machines, configured as follows

  • 10.20.1.103 4C 8G disk 50G node4 master centos7
  • 10.20.1.104 4C 8G disk 50G node5 node centos7
  • 10.20.1.105 4C 8G disk 50G node6 node centos7

Don't forget to set each machine's hostname and add the mappings to the hosts file, otherwise every hostname will just be localhost… and if they are all localhost, kubeadm will also hit bugs when nodes join the cluster.
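
For example, the hostname and mappings can be set roughly like this (a small sketch using the hostnames and IPs above; adjust them to your own machines):

# run on 10.20.1.103; repeat with node5/node6 on the other two machines
hostnamectl set-hostname node4

# append the same mappings to /etc/hosts on all three machines
cat <<EOF >> /etc/hosts
10.20.1.103 node4
10.20.1.104 node5
10.20.1.105 node6
EOF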

My installation follows the method officially recommended by kubernetes, kubeadm, and the same goes for istio. If you want the original docs, visit the links below.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://istio.io/docs/setup/getting-started/

I assume here that you already have docker installed.
That's about it, so let's get started!

kubernetes

step1 Install common tools

yum install -y vim wget

step2 Verify the machines

Verify that every node is unique (MAC addresses and product_uuid); brand-new machines are usually fine.

ip link
sudo cat /sys/class/dmi/id/product_uuid

step3 Open ports

Open the required ports on every machine; not much needs to be said here.

// run on the master
[root@localhost ~]# firewall-cmd --zone=public --add-port=6443/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2379/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=2380/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10251/tcp --permanent
success
[root@localhost ~]# firewall-cmd --zone=public --add-port=10252/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success

// run on the worker nodes
[root@localhost ~]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@localhost ~]# firewall-cmd --reload
success

step4 Install the components

Install kubeadm, kubelet and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Up to this step, unless you change the way you reach the internet, the installation will not succeed; the failure looks like the output below. This is completely normal.

[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.huaweicloud.com
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#7 - "Failed to connect to 2404:6800:4012::200e: Network is unreachable"
Trying other mirror.
^Chttps://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno 14] curl#56 - "Callback aborted"
Trying other mirror.

Because we cannot reach google, we change the repo to a domestic mirror. After the change the file looks like the block below; in practice only two lines are commented out and replaced with their mirror equivalents.

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
# baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
# gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*

Re-run the yum install command from above, then verify the installation; you can see it has succeeded (the localhost:8080 connection error from kubectl is expected at this point, since there is no cluster yet)

[root@localhost ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:04:32Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@localhost ~]# kubelet --version
Kubernetes v1.17.1
[root@localhost ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.1", GitCommit:"d224476cd0730baca2b6e357d144171ed74192d6", GitTreeState:"clean", BuildDate:"2020-01-14T21:02:14Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Enable and start kubelet

systemctl enable --now kubelet
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
systemctl daemon-reload
systemctl restart kubelet
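
One note: on some CentOS 7 machines the bridge-nf-call sysctls above only exist after the br_netfilter kernel module is loaded. If sysctl --system complains that the keys are missing, loading the module first should help (a small sketch):

# load the module now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system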

At this point we are halfway there. The next step is to bring up the cluster.

step5 Bring up the cluster

// bring up the master first
[root@localhost ~]# kubeadm init --kubernetes-version=v1.17.0
W0117 17:16:23.316968   11538 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0117 17:16:23.317084   11538 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

After running the command we can see some warnings and errors, so we run the following commands to stop the firewall and turn off the swap partition

systemctl stop firewalld
swapoff -a
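
Note that swapoff -a only turns swap off until the next reboot. To keep it off permanently you can also comment out the swap entry in /etc/fstab, for example:

# comment out any swap line so it does not come back after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab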

Run it again and only one warning remains, which we fix by switching docker's cgroup driver. As shown below; restart docker after making the change.

[root@node1 ~]# vim /etc/docker/daemon.json 

{
 "exec-opts":["native.cgroupdriver=systemd"]
}
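
Restart docker so the new driver takes effect, then confirm it:

systemctl daemon-reload
systemctl restart docker
# should now report the systemd cgroup driver
docker info | grep -i cgroup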

After the restart we run kubeadm init once more, and this time it is bound to fail again, for exactly the same reason as the earlier install failure: we cannot reach google, so the images cannot be pulled.

[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.17.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.5: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1

So our task now is simply to get hold of the dependency images listed below.

[root@node1 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.17.0             7d54289267dc        5 weeks ago         116MB
k8s.gcr.io/kube-scheduler            v1.17.0             78c190f736b1        5 weeks ago         94.4MB
k8s.gcr.io/kube-apiserver            v1.17.0             0cae8d5cc64c        5 weeks ago         171MB
k8s.gcr.io/kube-controller-manager   v1.17.0             5eb3b7486872        5 weeks ago         161MB
k8s.gcr.io/coredns                   1.6.5               70f311871ae1        2 months ago        41.6MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        2 months ago        288MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB

This is, in my opinion, the most troublesome step. There are plenty of ways around it; a few of them briefly (the first one is sketched right after this list):

  • Pull the images from Aliyun's mirror and retag them
  • Go scavenging on Docker Hub
  • Give the blogger a like and leave a comment
  • Use your own ingenuity…
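
A sketch of the first option, assuming the registry.aliyuncs.com/google_containers mirror path (if your mirror uses a different prefix, adjust it): pull each image from the mirror, retag it to the k8s.gcr.io name kubeadm expects, and drop the mirror tag.

images="kube-apiserver:v1.17.0 kube-controller-manager:v1.17.0 kube-scheduler:v1.17.0 kube-proxy:v1.17.0 pause:3.1 etcd:3.4.3-0 coredns:1.6.5"
for img in $images; do
  # the mirror path is an assumption; replace it with whichever mirror you use
  docker pull registry.aliyuncs.com/google_containers/$img
  docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
  docker rmi registry.aliyuncs.com/google_containers/$img
done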

接着執行啓動集羣命令,以下只是一段日誌,可以看到已經啓動成功了。

kubeadm init --kubernetes-version=v1.17.0
......
......
......
[bootstrap-token] Using token: zlkmry.z97kut9e1ezjgetm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.20.1.103:6443 --token zlkmry.z97kut9e1ezjgetm \
    --discovery-token-ca-cert-hash sha256:ebe4e4f09e46e4b1fff8e6d81fdc9854fb23b64d05258d52e4afbd84136bb4c7

At the node bootstrap step, if you changed the hostname but did not configure the hosts mapping, the warnings below will be thrown. That is expected; you just need to fix the mapping.

	[WARNING Hostname]: hostname "node4" could not be reached
	[WARNING Hostname]: hostname "node4": lookup node6 on 8.8.8.8:53: no such host

Run on the master node

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the remaining nodes to the cluster

kubeadm join 10.20.1.103:6443 --token zlkmry.z97kut9e1ezjgetm \
    --discovery-token-ca-cert-hash sha256:ebe4e4f09e46e4b1fff8e6d81fdc9854fb23b64d05258d52e4afbd84136bb4c7
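
If you lose the join command or the token expires (by default tokens are only valid for 24 hours), a fresh one can be printed on the master:

kubeadm token create --print-join-command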

From the command output we can see two things: 1) the cluster network has not been initialized, so every node is in the NotReady state; 2) the two worker nodes have not pulled the images. The network we solve with calico in the next step. For the images, we simply pack the images already on the master with docker save, send the archive to the two workers, and unpack it there with docker load, which solves the image problem on both workers!
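
A minimal sketch of that save/ship/load round trip, assuming SSH access from the master to the workers and /root/k8s-images.tar as a scratch path:

# on the master: pack the images kubeadm pulled into one archive
docker save -o /root/k8s-images.tar \
  k8s.gcr.io/kube-apiserver:v1.17.0 \
  k8s.gcr.io/kube-controller-manager:v1.17.0 \
  k8s.gcr.io/kube-scheduler:v1.17.0 \
  k8s.gcr.io/kube-proxy:v1.17.0 \
  k8s.gcr.io/pause:3.1 \
  k8s.gcr.io/etcd:3.4.3-0 \
  k8s.gcr.io/coredns:1.6.5

# ship it to the two workers and load it there
for host in 10.20.1.104 10.20.1.105; do
  scp /root/k8s-images.tar root@$host:/root/
  ssh root@$host "docker load -i /root/k8s-images.tar"
done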

[root@localhost ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
node4   NotReady   master   7m6s   v1.17.1
node5   NotReady   <none>   79s    v1.17.1
node6   NotReady   <none>   69s    v1.17.1
[root@localhost ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-6955765f44-qft45        0/1     Pending             0          6m56s   <none>        <none>   <none>           <none>
kube-system   coredns-6955765f44-wtzp2        0/1     Pending             0          6m56s   <none>        <none>   <none>           <none>
kube-system   etcd-node4                      1/1     Running             0          6m43s   10.20.1.103   node4    <none>           <none>
kube-system   kube-apiserver-node4            1/1     Running             0          6m43s   10.20.1.103   node4    <none>           <none>
kube-system   kube-controller-manager-node4   1/1     Running             0          6m43s   10.20.1.103   node4    <none>           <none>
kube-system   kube-proxy-4cx77                0/1     ContainerCreating   0          71s     10.20.1.105   node6    <none>           <none>
kube-system   kube-proxy-lw9xf                1/1     Running             0          6m56s   10.20.1.103   node4    <none>           <none>
kube-system   kube-proxy-zfzqj                0/1     ContainerCreating   0          80s     10.20.1.104   node5    <none>           <none>
kube-system   kube-scheduler-node4            1/1     Running             0          6m43s   10.20.1.103   node4    <none>           <none>

Create the cluster network; here I use calico

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

At this point your k8s cluster should be fully healthy, as shown below, and the next step is to install istio!

[root@localhost ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node4   Ready    master   23m   v1.17.1
node5   Ready    <none>   17m   v1.17.1
node6   Ready    <none>   17m   v1.17.1
[root@localhost ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5c45f5bd9f-wx2g5   1/1     Running   0          5m16s   192.168.139.2   node6   <none>           <none>
kube-system   calico-node-7wthb                          1/1     Running   0          5m16s   10.20.1.103     node4   <none>           <none>
kube-system   calico-node-fzdcj                          1/1     Running   0          5m16s   10.20.1.105     node6   <none>           <none>
kube-system   calico-node-glf8p                          1/1     Running   0          5m16s   10.20.1.104     node5   <none>           <none>
kube-system   coredns-6955765f44-qft45                   1/1     Running   0          23m     192.168.139.1   node6   <none>           <none>
kube-system   coredns-6955765f44-wtzp2                   1/1     Running   0          23m     192.168.139.3   node6   <none>           <none>
kube-system   etcd-node4                                 1/1     Running   0          23m     10.20.1.103     node4   <none>           <none>
kube-system   kube-apiserver-node4                       1/1     Running   0          23m     10.20.1.103     node4   <none>           <none>
kube-system   kube-controller-manager-node4              1/1     Running   0          23m     10.20.1.103     node4   <none>           <none>
kube-system   kube-proxy-4cx77                           1/1     Running   0          17m     10.20.1.105     node6   <none>           <none>
kube-system   kube-proxy-lw9xf                           1/1     Running   0          23m     10.20.1.103     node4   <none>           <none>
kube-system   kube-proxy-zfzqj                           1/1     Running   0          17m     10.20.1.104     node5   <none>           <none>
kube-system   kube-scheduler-node4                       1/1     Running   0          23m     10.20.1.103     node4   <none>           <none>

istio

step1 Download istio

Download the stable release of istio; at the time I installed, that was 1.4.3

curl -L https://istio.io/downloadIstio | sh -
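
The download script unpacks into a versioned directory, in my case istio-1.4.3. Move into it and either put istioctl on the PATH or call it from bin/ directly (as the ./istioctl form below does):

cd istio-1.4.3
export PATH=$PWD/bin:$PATH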

step2 Install istio

Install istio using the demo profile

./istioctl manifest apply --set profile=demo

The output is as follows:

- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Tracing...
- Applying manifest for component IngressGateway...
- Applying manifest for component EgressGateway...
- Applying manifest for component Policy...
- Applying manifest for component Galley...
- Applying manifest for component Citadel...
- Applying manifest for component Telemetry...
- Applying manifest for component Injector...
- Applying manifest for component Prometheus...
- Applying manifest for component Pilot...
- Applying manifest for component Kiali...
- Applying manifest for component Grafana...
✔ Finished applying manifest for component Injector.
✔ Finished applying manifest for component Prometheus.
✔ Finished applying manifest for component Galley.
✔ Finished applying manifest for component Tracing.
✔ Finished applying manifest for component Pilot.
✔ Finished applying manifest for component IngressGateway.
✔ Finished applying manifest for component Citadel.
✔ Finished applying manifest for component Policy.
✔ Finished applying manifest for component Kiali.
✔ Finished applying manifest for component EgressGateway.
✔ Finished applying manifest for component Grafana.
✔ Finished applying manifest for component Telemetry.


✔ Installation complete

Check the pods. The images istio needs can be pulled directly, with none of the roundabout hassle from before. On a slow connection you may need to wait three to five minutes before the pods all start successfully!

[root@localhost ~]# kubectl get pods -n istio-system -o wide
NAME                                      READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
grafana-6b65874977-dr9gc                  1/1     Running   0          9m36s   192.168.139.8    node6   <none>           <none>
istio-citadel-f78ff689-dvghc              1/1     Running   0          9m38s   192.168.139.5    node6   <none>           <none>
istio-egressgateway-7b6b69ddcd-gzp4n      0/1     Running   0          9m44s   192.168.33.131   node5   <none>           <none>
istio-galley-69674cb559-w7lhq             1/1     Running   0          9m40s   192.168.33.135   node5   <none>           <none>
istio-ingressgateway-649f9646d4-86snz     0/1     Running   0          9m42s   192.168.33.132   node5   <none>           <none>
istio-pilot-7989874664-nspww              0/1     Running   0          9m40s   192.168.139.4    node6   <none>           <none>
istio-policy-5cdbc47674-5xx24             1/1     Running   6          9m38s   192.168.139.7    node6   <none>           <none>
istio-sidecar-injector-7dd87d7989-9984l   1/1     Running   0          9m41s   192.168.33.133   node5   <none>           <none>
istio-telemetry-6dccd56cf4-4h2pd          1/1     Running   7          9m40s   192.168.33.130   node5   <none>           <none>
istio-tracing-c66d67cd9-c8z89             1/1     Running   0          9m42s   192.168.33.129   node5   <none>           <none>
kiali-8559969566-bpnn4                    1/1     Running   0          9m37s   192.168.139.6    node6   <none>           <none>
prometheus-66c5887c86-zg26g               1/1     Running   0          9m41s   192.168.33.134   node5   <none>           <none>

step3 Automatic injection

Enable automatic sidecar injection for the default namespace:

kubectl label namespace default istio-injection=enabled
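
To confirm the label took effect, list the namespaces with the label column shown:

kubectl get namespace -L istio-injection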

step4 Deploy the sample

Deploy the bookinfo sample; its yaml files already came along with the istio download

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Check the result; startup is fairly slow and takes a minute or two

[root@localhost istio-1.4.3]# kubectl get pod -n default
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-78d78fbddf-zhz6h       2/2     Running   0          5m31s
productpage-v1-596598f447-x4tnb   2/2     Running   0          5m31s
ratings-v1-6c9dbf6b45-9mv64       2/2     Running   0          5m32s
reviews-v1-7bb8ffd9b6-7dtbg       2/2     Running   0          5m32s
reviews-v2-d7d75fff8-bmlkv        2/2     Running   0          5m32s
reviews-v3-68964bc4c8-5sxpr       2/2     Running   0          5m32s

Create the ingress gateway:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

The result:

[root@localhost istio-1.4.3]# kubectl get gateway -n default
NAME               AGE
bookinfo-gateway   6m52s
[root@localhost istio-1.4.3]# kubectl get vs -n default
NAME       GATEWAYS             HOSTS   AGE
bookinfo   [bookinfo-gateway]   [*]     6m57s

Once that has been created, try accessing the application from a browser:
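
Since this bare-metal cluster has no external load balancer, the istio-ingressgateway service is reached through its NodePort. A sketch of finding the address and checking the productpage from the command line, following the istio docs (10.20.1.103 is just one of my node IPs; any node works):

# NodePort of the gateway's http2 port
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export INGRESS_HOST=10.20.1.103

# should print the title of the bookinfo product page
curl -s http://$INGRESS_HOST:$INGRESS_PORT/productpage | grep -o "<title>.*</title>"

Opening that same /productpage URL in a browser should show the Bookinfo page with books, reviews and ratings.
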
That wraps up the whole deployment walkthrough in detail. If you run into any problems, feel free to leave a comment, and if this was useful to you, don't forget to give it a like!
