Environment: CentOS 7
Test environment: 1 master, 3 nodes. A test Kubernetes cluster can consist of one master host plus one or more (at least two recommended) node hosts. Here I use three node hosts, each with a 4-core CPU and 1 GB of RAM (that is what I allocated; give them 4 GB if your machine can spare it). The hosts can be physical servers or virtual machines running on a VMware platform.
Tip: make sure the master and node hosts have direct Internet access. I won't cover the CentOS installation itself here; a guide may follow later.
1. Synchronize the clocks (run date on each of the four nodes)
[root@master ~]# date
2019年 09月 18日 星期三 18:32:58 CST
Tip: the times turned out not to be consistent. On each node, check that the chronyd service is running and synchronizing:
[root@master ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since 三 2019-09-18 16:35:09 CST; 2h 2min ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 6257 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 6240 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 6244 (chronyd)
Tasks: 1
Memory: 1.2M
CGroup: /system.slice/chronyd.service
└─6244 /usr/sbin/chronyd
9月 18 16:35:09 master systemd[1]: Started NTP client/server.
9月 18 16:35:47 master chronyd[6244]: Selected source 144.76.76.107
9月 18 16:35:55 master chronyd[6244]: Source 144.76.76.107 replaced with 119.28.206.193
9月 18 16:35:56 master chronyd[6244]: Can't synchronise: no selectable sources
9月 18 16:36:00 master chronyd[6244]: Selected source 193.182.111.143
9月 18 16:36:52 master chronyd[6244]: Can't synchronise: no selectable sources
9月 18 16:36:54 master chronyd[6244]: Selected source 193.182.111.14
9月 18 16:39:02 master chronyd[6244]: Selected source 119.28.206.193
9月 18 16:40:20 master chronyd[6244]: Selected source 193.182.111.143
9月 18 16:41:11 master chronyd[6244]: Selected source 119.28.206.193
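If a node's clock is still off, chrony can be forced into step rather than left to slew slowly. A short sketch (standard chronyc subcommands on CentOS 7; run on each node):

```shell
# Start chronyd now and enable it on every boot
systemctl enable --now chronyd
# Step the clock immediately instead of adjusting it gradually
chronyc makestep
# Confirm a source has been selected ('*' marks the current source)
chronyc sources
```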
2. Hostname resolution: edit the hosts file on every node
[root@master ~]# cat /etc/hosts
192.168.200.129 master.magedu.com master
192.168.200.130 node01.magedu.com node01
192.168.200.131 node02.magedu.com node02
192.168.200.132 node03.magedu.com node03
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Map each of your nodes' IPs to their hostnames in the same way.
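Since the four entries follow one pattern (consecutive addresses starting at 192.168.200.129), a small loop can print them for pasting into /etc/hosts; adjust base and i to your own addressing:

```shell
# Print hosts entries for master + node01..node03 (assumes consecutive IPs)
base=192.168.200
i=129
for name in master node01 node02 node03; do
    printf '%s.%s %s.magedu.com %s\n' "$base" "$i" "$name" "$name"
    i=$((i + 1))
done
```

The first line printed is `192.168.200.129 master.magedu.com master`, matching the file above.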
3. Stop and disable the iptables and firewalld services on every node
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# systemctl status iptables
Tip: the iptables service is absent by default; if it is present, stop and disable it too.
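The same stop/disable can be pushed to the remaining nodes in one go, assuming root SSH access between the hosts (the same assumption the scp steps in this guide rely on):

```shell
# Stop and disable firewalld on the remaining nodes over SSH
for n in node01 node02 node03; do
    ssh "$n" 'systemctl stop firewalld; systemctl disable firewalld'
done
```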
4. Disable SELinux on every node
Edit /etc/sysconfig/selinux and set SELINUX=disabled:
[root@master ~]# vi /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Tip: the change does not take effect until you reboot. Check the status with:
[root@master ~]# getenforce
Disabled
[root@master ~]#
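The edit can also be done non-interactively; the sketch below flips the config line with sed, and setenforce 0 drops SELinux to permissive for the current boot (getenforce only reads Disabled after the reboot):

```shell
# Switch SELinux to permissive for the running system
setenforce 0
# Persist the change: rewrite the SELINUX= line in the config file
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux
```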
5. Disable the swap device (I skipped this step, but that means a preflight error must be ignored later)
Check swap:
[root@master ~]# free -m
total used free shared buff/cache available
Mem: 1819 714 113 9 991 814
Swap: 2047 2 2045
[root@master ~]#
To disable swap permanently, comment out the swap line:
[root@master ~]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 17 01:06:07 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=408e00c4-8bd6-46a6-ba00-02f54432af0a /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
Tip: the fstab change also requires a reboot to take effect (alternatively, run swapoff -a to disable swap immediately).
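Commenting the line by hand works; the edit can also be scripted. The demo below runs against a scratch copy of the fstab shown above (for the real change, point the sed at /etc/fstab and follow it with swapoff -a):

```shell
# Work on a scratch copy; target /etc/fstab for the real change
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root / xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix any line mentioning swap with '#'
sed -i '/\bswap\b/ s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```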
6. Enable the ipvs kernel modules (can be skipped for now)
Create the module-loading file /etc/sysconfig/modules/ipvs.modules:
[root@master ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for mod in $(ls $ipvs_mods_dir | grep -o "^[^.]*"); do
    /sbin/modinfo -F filename $mod &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe $mod
    fi
done
[root@master ~]#
Make the file executable and load the modules into the running kernel by hand:
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# bash /etc/sysconfig/modules/ipvs.modules
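The grep -o "^[^.]*" in the script strips everything from the first dot onward, turning a module file name into the bare module name that modprobe expects:

```shell
# ip_vs_wrr.ko.xz  ->  ip_vs_wrr
echo "ip_vs_wrr.ko.xz" | grep -o "^[^.]*"
# afterwards, `lsmod | grep ip_vs` should list the loaded modules
```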
7. Install docker-ce on every node
wget is not installed yet, so install it first:
[root@master ~]# yum install -y wget
Fetch the docker-ce yum repository configuration file:
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Copy docker-ce.repo to the other nodes:
[root@master ~]# scp docker-ce.repo node01:/etc/yum.repos.d/
[root@master ~]# scp docker-ce.repo node02:/etc/yum.repos.d/
[root@master ~]# scp docker-ce.repo node03:/etc/yum.repos.d/
Then run on each of the other nodes:
[root@node01 ~]# yum install -y docker-ce
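The copy-then-install sequence above repeats once per node; a loop condenses it (again assuming root SSH between the hosts):

```shell
# Distribute the repo file and install docker-ce on every node
for n in node01 node02 node03; do
    scp docker-ce.repo "$n":/etc/yum.repos.d/
    ssh "$n" 'yum install -y docker-ce'
done
```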
8. Start the docker service
Edit /usr/lib/systemd/system/docker.service and add the following to the [Service] section:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
Environment="NO_PROXY=127.0.0.0/8,192.168.200.0/24"
Note: 192.168.200.0/24 is my network; set it according to your own addressing.
Check the bridge sysctl settings:
[root@master ~]# sysctl -a | grep bridge
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-pass-vlan-input-dev = 0
If net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are already 1, no change is needed; otherwise create /etc/sysctl.d/k8s.conf:
[root@master ~]# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Copy this file to the other nodes:
[root@master ~]# scp /etc/sysctl.d/k8s.conf node01:/etc/sysctl.d/k8s.conf
[root@master ~]# scp /etc/sysctl.d/k8s.conf node02:/etc/sysctl.d/k8s.conf
[root@master ~]# scp /etc/sysctl.d/k8s.conf node03:/etc/sysctl.d/k8s.conf
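The new settings do not load themselves; apply them on each node without a reboot:

```shell
# If the keys are missing, load the bridge module first: modprobe br_netfilter
# Load /etc/sysctl.d/k8s.conf into the running kernel
sysctl -p /etc/sysctl.d/k8s.conf
# Verify the two values are now 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```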
Reload systemd and start docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
Copy /usr/lib/systemd/system/docker.service to the other nodes:
[root@master ~]# scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
[root@master ~]# scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
[root@master ~]# scp /usr/lib/systemd/system/docker.service node03:/usr/lib/systemd/system/docker.service
Install docker-ce on each node (skip any node where it is already installed):
[root@node01 ~]# yum install -y docker-ce
[root@node02 ~]# yum install -y docker-ce
[root@node03 ~]# yum install -y docker-ce
Tip: after installing, reload the daemon and restart docker on each node, then check with docker info.
Enable docker to start at boot on every node:
[root@master ~]# systemctl enable docker
[root@node01 ~]# systemctl enable docker
[root@node02 ~]# systemctl enable docker
[root@node03 ~]# systemctl enable docker
Configure the Aliyun registry mirror by creating /etc/docker/daemon.json:
[root@master ~]# vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://lptjipx8.mirror.aliyuncs.com"]
}
Tip: if you have not registered your own mirror address, you can use mine.
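A typo in daemon.json stops dockerd from starting at all, so it is worth validating the file before restarting the service; python -m json.tool (Python 2 on stock CentOS 7) parses it and fails loudly on malformed JSON:

```shell
# Only restart the daemon if the config file is valid JSON
python -m json.tool /etc/docker/daemon.json && systemctl restart docker
```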
9. Install Kubernetes
Find the Kubernetes package repository for CentOS 7, x86_64:
Repository URL: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Using that URL, create the yum repository configuration file /etc/yum.repos.d/kubernetes.repo by hand:
[root@master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Copy the file to the other nodes:
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/kubernetes.repo
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/kubernetes.repo
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node03:/etc/yum.repos.d/kubernetes.repo
Tip: the gpgkey link addresses are taken from the mirror page shown in the screenshot (copy the link addresses).
List the available kubernetes packages:
[root@master ~]# yum list all | grep "^kube"
Install kubeadm, kubelet, and kubectl:
[root@master ~]# yum install kubeadm kubelet kubectl
Check the installed version:
[root@master ~]# rpm -q kubectl
kubectl-1.15.3-0.x86_64
[root@master ~]#
10. Initialize the master node
If you did not disable the swap device, edit /etc/sysconfig/kubelet:
[root@master ~]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Initialize the cluster. Seven images are required before initializing:
k8s.gcr.io/kube-scheduler v1.15.3 6ef91efad3d9 13 days ago 81.1MB
k8s.gcr.io/kube-proxy v1.15.3 aaaae9089f19 13 days ago 82.4MB
k8s.gcr.io/kube-controller-manager v1.15.3 766b3b091b23 13 days ago 159MB
k8s.gcr.io/kube-apiserver v1.15.3 5eb2d3fc7a44 4 weeks ago 207MB
k8s.gcr.io/coredns 1.3.1 a773837be6c4 5 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
These must be downloaded in advance; pull the seven images via Aliyun mirrors (a guide may follow later), then initialize:
[root@master ~]# kubeadm init --kubernetes-version="v1.15.3" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap
On success, kubeadm prints its completion output:
Save the kubeadm join command on the last line; it is needed to join the node hosts.
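If that join command gets lost (bootstrap tokens expire after 24 hours by default), kubeadm can print a fresh one on the master:

```shell
# Regenerate a valid join command, with a new token and the CA cert hash
kubeadm token create --print-join-command
```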
Create the hidden .kube/ directory and copy the admin kubeconfig into it:
[root@master ~]# mkdir .kube/
[root@master ~]# cp /etc/kubernetes/admin.conf .kube/config
11. Deploy the flannel network
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After a moment, the master's status shows Ready.
Each node needs three images, pulled via Aliyun:
[root@node01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.15.3 aaaae9089f19 13 days ago 82.4MB
quay.io/coreos/flannel v0.11.0-amd64 6f4360a7f3bb 5 months ago 52.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 21 months ago 742kB
Then run the kubeadm join command on each node:
[root@node01 ~]# kubeadm join 192.168.200.129:6443 --token 6fwamt.sn9qrs19tlhzl9cu \
> --discovery-token-ca-cert-hash sha256:85febffe07dc77cc4c1dda7f11a888e85b7d9e3f64deafb9d968adefefafbfc2 --ignore-preflight-errors=Swap
Wait a moment, then list the nodes:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 8h v1.15.3
node01 Ready <none> 6h2m v1.15.3
node02 Ready <none> 5h59m v1.15.3
node03 Ready <none> 5h59m v1.15.3
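Ready nodes are necessary but not sufficient; it is also worth checking that the flannel, coredns, and kube-proxy pods in kube-system are all Running:

```shell
# One flannel and one kube-proxy pod should be Running per node
kubectl get pods -n kube-system -o wide
```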
12. Configure /etc/resolv.conf
[root@master ~]# cat /etc/resolv.conf
nameserver 8.8.8.8
nameserver 192.168.200.2