Installing Kubernetes 1.15.1 with kubeadm from local images: the complete walkthrough

Note: this environment is for testing Istio, so it is not deployed as a multi-node cluster; only a single VM is used. The installation method itself, however, is cluster-ready.

1. Environment preparation

Operating system: CentOS 7.5, 200 GB HDD, 8 GB RAM
kubernetes: 1.15.1
docker: ce 19.03.1
ip: 10.0.135.30
hostname: centos75

1.1 Set the hostname and hosts file

hostnamectl set-hostname centos75
echo "10.0.135.30">>/etc/hosts

1.2 Disable the firewall and SELinux

systemctl stop firewalld    # stop the firewall
systemctl disable firewalld # prevent the firewall from starting at boot

setenforce 0        # turn SELinux off for the current session
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

1.3 Disable the swap partition

swapoff -a
vi /etc/fstab
# comment out the swap entry
#/dev/mapper/centos_centos75-swap swap
# load the br_netfilter module
modprobe br_netfilter
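
modprobe does not survive a reboot. A minimal way to have br_netfilter loaded automatically (using systemd's modules-load.d, available on CentOS 7) and to confirm swap is really off:

echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf  # load the module at every boot
free -m                      # the Swap line should show 0
lsmod | grep br_netfilter    # the module should be listed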

1.4 Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the configuration:

sysctl -p /etc/sysctl.d/k8s.conf
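
A quick check that the parameters are active:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward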

Adjust the Linux resource limits: raise the ulimit maximum number of open files and the maximum number of open files for services managed by systemd.

echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf

1.5 Configure the yum repositories

First, back up the original yum repo files:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup

Then download the latest repo configuration from the Aliyun mirror site:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

1.6 Configure the Kubernetes package repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

1.7 Install dependency packages for kubeadm

yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

1.8 Install the time synchronization service

yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service
# check chrony status
systemctl status chronyd.service
chronyc sources
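
To confirm the clock is actually synchronizing:

chronyc tracking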

1.9 Configure mutual SSH trust between nodes

Once mutual trust is configured, nodes can reach each other over SSH without a password.

# the default options are fine
ssh-keygen -t rsa -C "[email protected]"
# once generated, copy the public key to the other nodes
ssh-copy-id <other-node-hostname>
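
A quick test, assuming another node named node01 (a hypothetical hostname); it should print the remote hostname without asking for a password:

ssh node01 hostname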

1.10 Reboot the system

reboot now

2 Install Docker

2.1 Configure the Docker yum repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

2.2 List the available docker-ce versions

yum list docker-ce --showduplicates | sort -r

2.3 Install the latest version of docker-ce

yum install -y docker-ce-19.03.1-3.el7
systemctl start docker
systemctl enable docker.service

2.4 Configure a Docker registry mirror (image accelerator)

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://heusyzko.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
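
Docker only reads daemon.json at startup, so restart it and confirm the systemd cgroup driver is in effect:

systemctl restart docker
docker info | grep -i "cgroup driver"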

3 Install kubeadm, kubelet, and kubectl

  • kubeadm is the official Kubernetes tool for quickly deploying a cluster; unfortunately, it needs unrestricted internet access to be truly quick and convenient.
  • kubelet is the component that must run on every node of the cluster; it manages the creation, monitoring, and destruction of pods and containers.
  • kubectl is the command-line tool for managing the cluster; essentially all cluster-administration work can be done with it. Under the hood it is a client of the Kubernetes API server.
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
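
The command above installs the newest packages in the repository. To stay exactly on the version this article targets, the packages can also be pinned (assuming the Aliyun repository still carries the 1.15.1 RPMs):

yum install -y kubelet-1.15.1 kubeadm-1.15.1 kubectl-1.15.1 --disableexcludes=kubernetes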

4 Prepare the Kubernetes images

4.1 List the images required for installation

During installation kubeadm needs to pull a number of required Docker images from k8s.gcr.io. Because that registry is not reachable from inside China, the install will hang at the image-pull stage, so we download the required images manually first.
Since 1.11, kubeadm can list the images it needs:

kubeadm config images list
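
For v1.15.1 the output should list the same images that are retagged in section 4.4 below, roughly:

k8s.gcr.io/kube-apiserver:v1.15.1
k8s.gcr.io/kube-controller-manager:v1.15.1
k8s.gcr.io/kube-scheduler:v1.15.1
k8s.gcr.io/kube-proxy:v1.15.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1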

4.2 Generate a kubeadm config file, then modify the image repository, whether the master may run workload pods, and the network settings

kubeadm config print init-defaults > kubeadm.conf

vi kubeadm.conf
# modify the default kubeadm init parameters as follows
# (the PreferNoSchedule taint lets this single master also schedule workload pods)
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.15.1
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
localAPIEndpoint:
  advertiseAddress: 10.0.135.30
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 172.18.0.0/16
 

4.3 Pull the images using the custom configuration

kubeadm config images pull --config kubeadm.conf

4.4 Retag the pulled images

The images pulled from the domestic mirror site need to be retagged to k8s.gcr.io:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1 k8s.gcr.io/kube-scheduler:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1 k8s.gcr.io/kube-controller-manager:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1 k8s.gcr.io/kube-apiserver:v1.15.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
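
The same retagging can be written as a short loop (a sketch equivalent to the commands above):

ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-scheduler:v1.15.1 kube-proxy:v1.15.1 kube-controller-manager:v1.15.1 \
           kube-apiserver:v1.15.1 coredns:1.3.1 etcd:3.3.10 pause:3.1; do
  docker tag $ALIYUN/$img k8s.gcr.io/$img   # retag to the name kubeadm expects
  docker rmi $ALIYUN/$img                   # drop the mirror tag
done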

Confirm the retag succeeded:

docker images

5 Deploy the master node

5.1 Initialize the cluster with kubeadm

kubeadm init --config /root/kubeadm.conf

5.2 The initialization output is as follows

[root@centos75 ~]# kubeadm init --config /root/kubeadm.conf
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos75 localhost] and IPs [10.0.135.30 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos75 localhost] and IPs [10.0.135.30 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos75 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.18.0.1 10.0.135.30]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.503786 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node centos75 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos75 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.135.30:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:65bcdce5c078644662e51c5238def0ee935060267a38a1365aaa82d2dfc8b5cc
[root@centos75 ~]#

5.3 Check that the Kubernetes files were generated

[root@centos75 ~]# ls /etc/kubernetes/ -l
total 36
-rw-------  1 root root 5451 Aug 21 09:31 admin.conf
-rw-------  1 root root 5483 Aug 21 09:31 controller-manager.conf
-rw-------  1 root root 5471 Aug 21 09:31 kubelet.conf
drwxr-xr-x. 2 root root  113 Aug 21 09:31 manifests
drwxr-xr-x  3 root root 4096 Aug 21 09:31 pki
-rw-------  1 root root 5431 Aug 21 09:31 scheduler.conf

5.4 Configure the kubectl config file as instructed by the kubeadm output in the previous step

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
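
Since everything in this article is run as root, an alternative is to point kubectl straight at admin.conf:

export KUBECONFIG=/etc/kubernetes/admin.conf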

5.5 Verify that the kubeadm installation succeeded

[root@centos75 .kube]# kubectl get po --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6967fb4995-c5jjw           0/1     Pending   0          13h
kube-system   coredns-6967fb4995-r24rh           0/1     Pending   0          13h
kube-system   etcd-centos75                      1/1     Running   0          12h
kube-system   kube-apiserver-centos75            1/1     Running   0          12h
kube-system   kube-controller-manager-centos75   1/1     Running   0          12h
kube-system   kube-proxy-mms7k                   1/1     Running   0          13h
kube-system   kube-scheduler-centos75            1/1     Running   0          12h
[root@centos75 .kube]#

The two CoreDNS pods remain Pending until a pod network add-on is deployed (section 6). Verify that the cluster components are healthy:

[root@centos75 .kube]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@centos75 .kube]#

6 Deploy the pod network (Calico)

Calico is a pure layer-3 network: it uses the Linux kernel's L3 forwarding to provide the vRouter function, unlike Flannel and other solutions that need to encapsulate and decapsulate packets.

6.1 Pull the Calico images

docker pull calico/node:v3.8.2 
docker pull calico/cni:v3.8.2
docker pull calico/typha:v3.8.2

6.2 Retag the images

docker tag calico/node:v3.8.2 quay.io/calico/node:v3.8.2
docker tag calico/cni:v3.8.2 quay.io/calico/cni:v3.8.2
docker tag calico/typha:v3.8.2 quay.io/calico/typha:v3.8.2

docker rmi calico/node:v3.8.2
docker rmi calico/cni:v3.8.2
docker rmi calico/typha:v3.8.2

6.3 Install Calico

# newer Calico releases bundle the RBAC configuration inside calico.yaml
curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
# set POD_CIDR to the podSubnet configured in kubeadm.conf
POD_CIDR="192.168.0.0/16"
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
sed -i -e "s?typha_service_name: \"none\"?typha_service_name: \"calico-typha\"?g" calico.yaml
sed -i -e "s?replicas: 1?replicas: 2?g" calico.yaml
kubectl apply -f calico.yaml


6.4 Confirm the Calico installation is complete

kubectl get pods --all-namespaces
kubectl get nodes