Linux: setting up a Kubernetes cluster on CentOS

1. Environment information

1) Install three fresh CentOS virtual machines and make sure they can all ping one another

master:192.168.32.100

node1:192.168.32.110

node2:192.168.32.120
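
The reachability check can be scripted instead of pinging by hand; a small sketch using the three IPs listed above (the ping flags are standard iputils options):

```shell
# IPs of the three VMs from the list above
hosts="192.168.32.100 192.168.32.110 192.168.32.120"

# Ping each host once with a 1-second timeout; report status instead of aborting
for h in $hosts; do
    if ping -c 1 -W 1 "$h" >/dev/null 2>&1; then
        echo "$h reachable"
    else
        echo "$h UNREACHABLE"
    fi
done
```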

2) Turn off the firewall

On CentOS 7 the default firewall is firewalld, not iptables, which is why the attempt below fails:

[lxw@localhost yum.repos.d]$ sudo systemctl stop iptables
[sudo] password for lxw:
Failed to stop iptables.service: Unit iptables.service not loaded.
[lxw@localhost yum.repos.d]$

Stop and disable firewalld instead:

[lxw@localhost yum.repos.d]$ sudo systemctl stop firewalld
[lxw@localhost yum.repos.d]$ sudo systemctl disable firewalld

3) Install basic services

[lxw@localhost yum.repos.d]$ sudo yum -y install net-tools wget vim ntp

4) Set the hostname on each of the three hosts (note: underscores are not RFC-valid in host names, so k8s-master style names are safer; the original k8s_master naming is kept here, and it works because the nodes register by IP)

[lxw@localhost /]$ sudo hostnamectl --static set-hostname k8s_master
[sudo] password for lxw:
[lxw@localhost /]$ uname -a
Linux k8s_master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node1
[lxw@localhost root]$ uname -a
Linux k8s_node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[lxw@localhost root]$

[lxw@localhost root]$ sudo hostnamectl --static set-hostname k8s_node2
[lxw@localhost root]$ uname -a
Linux k8s_node2 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[lxw@localhost root]$

5) Configure /etc/hosts

Append the entries (use >> rather than >, so the default localhost lines are preserved):

[root@k8s_master ~]# cat <<EOF >> /etc/hosts
> 192.168.32.100 k8s_master
> 192.168.32.110 k8s_node1
> 192.168.32.120 k8s_node2
> EOF
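
Once /etc/hosts is in place on a machine, resolution can be spot-checked with a small loop (a sketch; getent consults /etc/hosts via nsswitch, and the guard just reports names that do not resolve rather than aborting):

```shell
# Hostnames added to /etc/hosts above
names="k8s_master k8s_node1 k8s_node2"

for n in $names; do
    # getent hosts prints "IP  name" on success, exits non-zero on a miss
    getent hosts "$n" || echo "warning: $n does not resolve on this machine"
done
```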

 

2. Install Docker on each node

Note: this step can be skipped; installing kubernetes pulls in Docker automatically.

1) Update the yum sources

2) Uninstall old versions

# remove old versions
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine
# delete all old Docker data
sudo rm -rf /var/lib/docker

3) Install prerequisite packages

# install dependencies
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

4) Set up the yum repo

# delete the old repo file
sudo rm -rf /etc/yum.repos.d/docker-ce.repo

# add the repo (using the Aliyun mirror)
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

5) Install

# build the yum metadata cache
sudo yum makecache fast

# install the latest stable Docker release
sudo yum install -y docker-ce

6) Start Docker and enable it at boot

# start the docker engine and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

7) Test

[lxw@localhost yum.repos.d]$ sudo docker search centos
NAME                      DESCRIPTION                           STARS   OFFICIAL   AUTOMATED
centos                    The official build of CentOS.         5917    [OK]
ansible/centos7-ansible   Ansible on Centos7                    128                [OK]
jdeathe/centos-ssh        OpenSSH / Supervisor / EPEL/IUS/SCL

8) Add the docker group

[lxw@localhost yum.repos.d]$ sudo groupadd docker
[sudo] password for lxw:
groupadd: group 'docker' already exists
[lxw@localhost yum.repos.d]$ sudo usermod -aG docker lxw
[lxw@localhost yum.repos.d]$ docker run hello-world

Note: the group change only takes effect for new login sessions (or after running newgrp docker), so docker run may still be permission-denied in the current session.

 

Reference: https://www.jianshu.com/p/e6b946c79542

3. Master node configuration

Installing etcd

1) Install etcd

[lxw@k8s_master root]$ sudo yum -y install etcd

2) Edit the configuration file

[root@k8s_master ~]# cat /etc/etcd/etcd.conf | grep -v "^#"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="master"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s_master:2379,http://k8s_master:4001"

3) Enable and start etcd

[root@k8s_master ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8s_master ~]# systemctl start etcd

4) Verify etcd

[root@k8s_master ~]# etcdctl -C http://k8s_master:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://k8s_master:2379
cluster is healthy
[root@k8s_master ~]#
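
Beyond cluster-health, a set/get round trip confirms etcd is accepting writes. A sketch using the v2 etcdctl commands shown elsewhere in this walkthrough (the key name /test-key is an arbitrary scratch key, and the whole check is guarded so it degrades gracefully where etcdctl is absent):

```shell
ENDPOINT="http://k8s_master:2379"   # client URL configured in etcd.conf above
KEY="/test-key"                     # hypothetical scratch key, deleted afterwards

if command -v etcdctl >/dev/null 2>&1; then
    etcdctl -C "$ENDPOINT" set "$KEY" hello   # write
    etcdctl -C "$ENDPOINT" get "$KEY"         # read back, should print "hello"
    etcdctl -C "$ENDPOINT" rm  "$KEY"         # clean up
else
    echo "etcdctl not installed on this machine"
fi
```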

Installing Kubernetes

1) Install kubernetes

[root@k8s_master ~]# yum -y install kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.nju.edu.cn

2) Edit the apiserver configuration file (master node):

[root@k8s_master ~]# cat /etc/kubernetes/apiserver | grep -v "^#"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

KUBE_API_PORT="--port=8080"


KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.32.100:2379"    # the master's real IP

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_API_ARGS=""
[root@k8s_master ~]#

3) Edit the config file (master node):

[root@k8s_master ~]# cat /etc/kubernetes/config  | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.32.100:8080"     # the master's real IP
[root@k8s_master ~]#

4) Enable the services at boot and start them

[root@k8s_master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
[root@k8s_master ~]# systemctl start kube-apiserver kube-controller-manager kube-scheduler
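
With the services up, the insecure API port opened by KUBE_API_PORT can be probed directly. A sketch (the guard simply reports when the apiserver is not reachable instead of failing):

```shell
API="http://127.0.0.1:8080"   # insecure port configured in the apiserver file above

# /version is served without authentication on the insecure port in these releases
curl -s "$API/version" || echo "kube-apiserver not reachable at $API"
```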

5) Check the listening ports

[root@k8s_master ~]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      29976/etcd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      5758/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      5914/master
tcp6       0      0 :::6443                 :::*                    LISTEN      30192/kube-apiserve
tcp6       0      0 :::10251                :::*                    LISTEN      30194/kube-schedule
tcp6       0      0 :::2379                 :::*                    LISTEN      29976/etcd
tcp6       0      0 :::10252                :::*                    LISTEN      30193/kube-controll
tcp6       0      0 :::8080                 :::*                    LISTEN      30192/kube-apiserve
tcp6       0      0 :::22                   :::*                    LISTEN      5758/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      5914/master
tcp6       0      0 :::4001                 :::*                    LISTEN      29976/etcd
[root@k8s_master ~]#

4. Node configuration

1) Install kubernetes

[root@k8s_node2 ~]# yum -y install kubernetes

2) Edit the configuration

[root@k8s_node1 ~]# cat /etc/kubernetes/config | grep -v "^#"
KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow-privileged=false"

KUBE_MASTER="--master=http://192.168.32.100:8080"
[root@k8s_node1 ~]#
[root@k8s_node1 ~]# cat /etc/kubernetes/kubelet | grep -v "^#"

KUBELET_ADDRESS="--address=0.0.0.0"


KUBELET_HOSTNAME="--hostname-override=192.168.32.110"           # this node's IP

KUBELET_API_SERVER="--api-servers=http://192.168.32.100:8080"   # the master's IP

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

KUBELET_ARGS=""
[root@k8s_node1 ~]#

3) Enable and start the services

[root@k8s_node1 ~]# systemctl enable kubelet kube-proxy
[root@k8s_node1 ~]# systemctl start kubelet kube-proxy
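
A quick status loop verifies both node daemons came up (a sketch; systemctl is-active prints the unit state, and the guard reports inactive or missing units without aborting):

```shell
for svc in kubelet kube-proxy; do
    # capture the unit state; fall back to a marker if the query fails
    state=$(systemctl is-active "$svc" 2>/dev/null) || state="not-active"
    echo "$svc: $state"
done
```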

4) Check the ports

[root@k8s_node1 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6022/sshd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      6181/master
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      38432/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      38433/kube-proxy
tcp6       0      0 :::10255                :::*                    LISTEN      38432/kubelet
tcp6       0      0 :::22                   :::*                    LISTEN      6022/sshd
tcp6       0      0 ::1:25                  :::*                    LISTEN      6181/master
tcp6       0      0 :::4194                 :::*                    LISTEN      38432/kubelet
tcp6       0      0 :::10250                :::*                    LISTEN      38432/kubelet

5) Test

[root@k8s_master ~]# kubectl get nodes
NAME             STATUS    AGE
192.168.32.110   Ready     11m

If no resources come back, see https://blog.csdn.net/weixin_37480442/article/details/82111564

5. Configure the network

1) Install flannel (on the nodes as well as the master; step 6 below restarts flanneld on every node)

[root@k8s_master ~]# yum -y install flannel

2) Configure flannel

[root@k8s_master ~]# cat /etc/sysconfig/flanneld | grep -v "^#"

FLANNEL_ETCD_ENDPOINTS="http://192.168.32.100:2379"

FLANNEL_ETCD_PREFIX="/atomic.io/network"


[root@k8s_master ~]#

3) Write the network configuration into etcd

[root@k8s_node1 ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
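
The key can be read back to confirm flanneld will find it; the path must match FLANNEL_ETCD_PREFIX from the configuration above plus /config (guarded like the earlier etcdctl check, so it degrades gracefully where etcdctl is absent):

```shell
NET_KEY="/atomic.io/network/config"   # FLANNEL_ETCD_PREFIX + /config

if command -v etcdctl >/dev/null 2>&1; then
    etcdctl get "$NET_KEY"            # should print {"Network":"172.17.0.0/16"}
else
    echo "etcdctl not installed on this machine"
fi
```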

4) Enable at boot and start

[root@k8s_master ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s_master ~]# systemctl start flanneld

6. Restart the services

# master
for SERVICES in docker kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
done

# node
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
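
After the restart, flanneld writes its lease to /run/flannel/subnet.env, and docker0 should sit inside the flannel subnet. A guarded check (the file path is the stock flanneld default on CentOS 7):

```shell
SUBNET_FILE="/run/flannel/subnet.env"

if [ -r "$SUBNET_FILE" ]; then
    cat "$SUBNET_FILE"            # FLANNEL_SUBNET / FLANNEL_MTU for this host
    ip -4 addr show docker0       # its address should fall inside FLANNEL_SUBNET
else
    echo "$SUBNET_FILE not found: flanneld has not started on this machine"
fi
```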

 

References: https://blog.csdn.net/qq_38252499/article/details/99214276

 https://www.jianshu.com/p/345e3fb797db

 
