Manually Setting Up a k8s Environment on CentOS

k8s Learning (1): Setting Up a k8s Environment on CentOS


2021-03-21 15:01

I recently decided to study k8s systematically, and the first step is setting up an environment. This article records my experience building a k8s environment on CentOS.

Environment Preparation

Virtual machine information:

  • Host OS: Windows 10
  • Hypervisor: VirtualBox
  • Linux distribution: CentOS 7
  • VM nodes (at least 2 CPUs / 2 GB RAM each): k8s-master (192.168.31.44), k8s-node01, k8s-node02 (hostname/hosts sketch after these lists)

k8s-related information:

  • docker version: docker-ce 18.09.9
  • k8s version: v1.16.0
  • Domestic (China) mirror sources prepared in advance
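
Before running any of the scripts below, it helps if every VM already carries the hostname it will appear under in the cluster and can resolve the other nodes by name. This is a minimal sketch of that step, assuming it is run as root; only 192.168.31.44 (the master) comes from the list above, the two worker IPs are placeholders you should replace with your own.

# Set the hostname on each machine (k8s-node01 / k8s-node02 on the workers).
hostnamectl set-hostname k8s-master

# Let all nodes resolve each other by name; the worker IPs below are placeholders.
cat <<EOF >>/etc/hosts
192.168.31.44 k8s-master
192.168.31.45 k8s-node01
192.168.31.46 k8s-node02
EOF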

Installing docker-ce (all machines)

Install docker-ce on all of the VMs; the chosen version is 18.09.9. I reused the commands from my earlier article on installing docker and turned them into a script, install_docker.sh, with the following contents:

#!/bin/bash
# Install docker as the root user

DOCKER_VERSION=docker-ce-18.09.9-3.el7
DOCKER_REGISTRY=https://registry.docker-cn.com
YUM_REPO=http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Remove any existing docker installation
yum remove docker \
docker-ce \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine

# Clean up leftover directories
rm -rf /var/lib/docker
rm -rf /var/run/docker

# Add the Aliyun yum repo and refresh the yum cache
yum install -y yum-utils
yum-config-manager --add-repo ${YUM_REPO}
yum makecache fast

# Install docker-ce; the version can be customized
yum install -y ${DOCKER_VERSION}

# Register docker as a system service and start it
systemctl enable docker && systemctl start docker

# Configure the registry mirror and the systemd cgroup driver
cat <<EOF >/etc/docker/daemon.json
{
 "registry-mirrors": ["${DOCKER_REGISTRY}"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart docker
systemctl daemon-reload
systemctl restart docker
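
To run it, I simply execute the script as root on every node and then confirm that docker is up and picked up the settings from daemon.json; the commands below are just a sanity check, not part of the script itself.

# Run on every node as root.
bash install_docker.sh

# Verify the installed version and that the systemd cgroup driver is active.
docker version
docker info | grep -i cgroup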

k8s environment configuration (all machines)

After installing docker-ce, a few commands need to be run on each VM to prepare the environment. I put these steps into the script k8s_env.sh, with the following contents:

#!/bin/bash

# Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

# Disable SELinux
# Disable it for the current boot
setenforce 0
# Disable it permanently by editing /etc/sysconfig/selinux and /etc/selinux/config
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
# Disable it permanently by commenting out the swap line in /etc/fstab
sed -i 's/.*swap.*/#&/' /etc/fstab

# Adjust kernel parameters
cat <<EOF >/etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
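
One assumption the script makes is that the br_netfilter kernel module is loaded; without it the two bridge sysctls may not take effect. The extra modprobe and the checks below are my own additions, not part of the original script.

# Make sure the bridge netfilter module is loaded, then re-apply the sysctls.
modprobe br_netfilter
sysctl --system

# Quick sanity checks: swap usage should be 0, SELinux permissive/disabled, firewalld inactive.
free -m | grep -i swap
getenforce
systemctl is-active firewalld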

Installing k8s v1.16.0 (master node)

After installing docker and completing the k8s prerequisites, run the following commands on the k8s master node to install k8s v1.16.0. I put all the commands into the script k8s_install_master.sh, which takes one argument, the master node's IP address. Its contents are as follows:

#!/bin/bash

# The script requires one argument: the IP address of the k8s master node
master_ip=$1
if [ "$master_ip" == "" ]; then
  echo "Error: please set master ip." >&2
  exit 210
else
  echo "master ip is $master_ip"
fi

K8S_BASEURL=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
K8S_GPGKEY="https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg"
POD_NETWORK=10.244.0.0

# Configure the Aliyun yum repo for kubernetes
cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=${K8S_BASEURL}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=${K8S_GPGKEY}
EOF

# Install kubeadm, kubectl and kubelet
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0

# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet

# Pull the control-plane images and initialize the cluster; see https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/ for parameter details
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address ${master_ip} --pod-network-cidr=${POD_NETWORK}/16 --token-ttl 0

# Steps to run after kubeadm init completes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
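
Running it on my master looks like this, with the master IP from the environment section passed as the single argument.

# Run as root on the master node, passing the master's own IP address.
bash k8s_install_master.sh 192.168.31.44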

After the master node installation completes, you will see output like the following. The last lines are a command that joins a node to the k8s cluster when executed on that node; you can also regenerate it later by running kubeadm token create --print-join-command on the master.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.44:6443 --token i0nitb.9dx9gqqmjc25cl4v \
    --discovery-token-ca-cert-hash sha256:c7deb13d6544fe360e70fdc7b1c3140d2d5c1f98fc0e5d4c8deccadd91adf726

Installing k8s v1.16.0 (worker nodes)

Installing k8s on the worker nodes involves slightly less than on the master, but the versions are the same. I wrote it into the script k8s_install_node.sh, with the following contents:

#!/bin/bash

K8S_BASEURL=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
K8S_GPGKEY="https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg"

cat <<EOF >/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=${K8S_BASEURL}
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=${K8S_GPGKEY}
EOF

# Install kubeadm and kubelet (kubectl is not needed on worker nodes)
yum install -y kubeadm-1.16.0-0 kubelet-1.16.0-0

# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet

# Join the cluster. If you don't have the join command, log in to the master node and run: kubeadm token create --print-join-command
#kubeadm join 192.168.31.213:6443 --token 7172cu.ouhrkxvv1wj5gdfr --discovery-token-ca-cert-hash sha256:b3613871b6812b09351f7501517149e48562927efc3a2f7b70f7bff29bfc697c
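
On each worker the flow is simply: run the script, then paste in the join command obtained from the master (the placeholders below stand for the token and hash printed by your own cluster).

# Run as root on k8s-node01 and k8s-node02.
bash k8s_install_node.sh

# Then execute the join command from the master, of the form:
# kubeadm join 192.168.31.44:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>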

After k8s is installed on the worker nodes, run kubeadm token create --print-join-command on the master to obtain the join command, then execute it on each worker node to join the cluster, for example:

[root@k8s-master alex]# kubeadm token create --print-join-command
kubeadm join 192.168.31.44:6443 --token dbyk42.jt15226b6xd4rfvo     --discovery-token-ca-cert-hash sha256:c7deb13d6544fe360e70fdc7b1c3140d2d5c1f98fc0e5d4c8deccadd91adf726

At this point k8s is installed on both the master and the worker nodes. Running kubectl get nodes on the master now shows the cluster's node information:

[root@k8s-master alex]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   13m   v1.16.0
k8s-node01   NotReady   <none>   97s   v1.16.0
k8s-node02   NotReady   <none>   7s    v1.16.0

Here you can see one master node and two worker nodes, exactly the k8s cluster I set out to build. However, all three nodes are in the NotReady state, so the environment is clearly not finished yet.
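
If you want to see for yourself why the nodes are NotReady before fixing it, describing a node usually points at the missing network plugin, and the CoreDNS pods stay in Pending until a pod network exists. These diagnostic commands are my own addition to the write-up.

# The node conditions typically report that the CNI network plugin is not ready.
kubectl describe node k8s-master | grep -A 3 -i ready

# CoreDNS stays Pending until a pod network add-on is installed.
kubectl get pods -n kube-system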

Installing flannel (master node)

The NotReady status after installing k8s can be treated as a problem to research online; what I describe here is just one of the solutions I came across, namely fixing it by installing flannel.

All it really takes is a single yaml file. The official address is https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, which is not reachable from inside China. The file can also be found elsewhere online, but some copies have the image addresses changed to domestic mirrors; I changed them back to the original addresses, since the originals are reachable from my machine.
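
If the URL is reachable from your network, fetching the manifest is a one-liner; this is just a sketch of the download step, and you can equally save the file by hand.

# Download the flannel manifest to the master node.
curl -fsSL -o kube-flannel.yml \
  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml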

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: [ 'NET_ADMIN' ]
  defaultAddCapabilities: [ ]
  requiredDropCapabilities: [ ]
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: [ 'extensions' ]
    resources: [ 'podsecuritypolicies' ]
    verbs: [ 'use' ]
    resourceNames: [ 'psp.flannel.unprivileged' ]
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
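
The full upstream manifest continues past the ServiceAccount with the flannel ConfigMap (whose net-conf.json carries the 10.244.0.0/16 pod network matching the kubeadm init CIDR above) and the DaemonSet that runs flannel on every node, so make sure you apply the complete file. Once it is applied and the flannel pods are running, the nodes move to Ready; a rough sketch of that last step:

# Install flannel from the master node.
kubectl apply -f kube-flannel.yml

# Watch the flannel and CoreDNS pods come up, then re-check the nodes.
kubectl get pods -n kube-system
kubectl get nodes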