Deploying a Kubernetes Cluster on CentOS 7


Introduction

What is Kubernetes?

  Kubernetes is an open-source platform for automating the deployment, scaling and operation of container clusters.


  With Kubernetes, you can respond to user demand quickly and effectively:


    a. Deploy your applications quickly and predictably

    b. Scale your applications on the fly

    c. Seamlessly roll out new features

    d. Save resources by optimizing hardware usage

    

  We hope to foster an ecosystem of components and tools that ease the burden of running applications in public and private clouds.


Kubernetes features:


    a. Portable: runs on public, private, hybrid and multi-cloud environments

    b. Extensible: modular, pluggable, hookable and composable

    c. Self-healing: auto-placement, auto-restart, auto-replication and auto-scaling

  

Kubernetes started as a Google project in 2014. It builds on more than a decade of Google's experience running large-scale production workloads, combined with the best ideas and practices from the community.


Kubernetes architecture:

[Figure: Kubernetes architecture diagram (architecture.png)]

High-resolution image: https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.2/docs/design/architecture.png



Kubernetes consists of the following core components:


  a. etcd stores the state of the entire cluster;

  b. apiserver is the single entry point for operating on resources and provides authentication, authorization, access control, API registration and discovery;

  c. controller manager maintains the cluster state, handling things such as failure detection, auto-scaling and rolling updates;

  d. scheduler handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policies;

  e. kubelet maintains the lifecycle of containers and also manages Volumes (CVI) and networking (CNI);

  f. the container runtime handles image management and actually runs Pods and containers (CRI);

  g. kube-proxy provides in-cluster service discovery and load balancing for Services;

  

Besides the core components, there are several recommended add-ons:

  a. kube-dns provides DNS for the whole cluster

  b. Ingress Controller provides an external entry point for services

  c. Heapster provides resource monitoring

  d. Dashboard provides a GUI

  e. Federation provides clusters that span availability zones

  f. Fluentd-elasticsearch provides cluster log collection, storage and querying


  ****** For details, see: https://www.kubernetes.org.cn/docs


I. Environment


  The Kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet and kube-proxy. These services are managed by systemd, and their configuration all lives in one place: /etc/kubernetes. We will run the services on different hosts. The first host, k8s-master, will be the master of the Kubernetes cluster and will run kube-apiserver, kube-controller-manager and kube-scheduler; etcd will also run on the master. The remaining hosts, k8s-node1 and k8s-node2, will be worker nodes and will run kubelet, kube-proxy and docker.

  

  Operating system: CentOS 7, 64-bit

  Open vSwitch version: 2.5.0

  Kubernetes version: v1.5.2

  Etcd version: 3.1.9

  Docker version: 1.12.6

  Server list:

    192.168.80.130  k8s-master

    192.168.80.131  k8s-node1

    192.168.80.132  k8s-node2



II. Pre-deployment preparation

  1. Set up passwordless SSH login

  [Master]

    [root@k8s-master ~]# ssh-keygen

    [root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node1

    [root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node2
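
    A quick optional check that key-based login works (assuming the hostnames resolve, as set up in step 2a below, or using the IPs directly):

    [root@k8s-master ~]# ssh k8s-node1 hostname

    [root@k8s-master ~]# ssh k8s-node2 hostname

    Each command should print the remote hostname without asking for a password.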


  2. On all machines

    

    a. Add hosts entries

    [root@k8s-master ~]# vim /etc/hosts

        192.168.80.130  k8s-master

        192.168.80.131  k8s-node1

        192.168.80.132  k8s-node2

    

    b. Synchronize the time

    [root@k8s-master ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

    [root@k8s-master ~]# yum groupinstall "Development Tools" -y

    [root@k8s-master ~]# \cp -Rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

    [root@k8s-master ~]# ntpdate 133.100.11.8

    [root@k8s-master ~]# sed -i 's#ZONE="America/New_York"#ZONE="Asia/Shanghai"#g' /etc/sysconfig/clock

    [root@k8s-master ~]# hwclock -w

    [root@k8s-master ~]# date -R
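
    As a quick sanity check that the clocks now agree, compare the date on all three machines from the master (this relies on the passwordless SSH configured in step 1):

    [root@k8s-master ~]# date; ssh k8s-node1 date; ssh k8s-node2 date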


  3. Install Open vSwitch on the two node machines (node1 is used as the example)

    

    a. Install Open vSwitch

    [root@node1 ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

    

    [root@node1 ~]# yum groupinstall "Development Tools" -y  

  

  

    [root@node1 ~]# mkdir -p ~/rpmbuild/SOURCES  

    [root@node1 ~]# wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz  

    [root@node1 ~]# cp openvswitch-2.5.0.tar.gz ~/rpmbuild/SOURCES/  

    [root@node1 ~]# tar xfz openvswitch-2.5.0.tar.gz  

    [root@node1 ~]# sed 's/openvswitch-kmod, //g' openvswitch-2.5.0/rhel/openvswitch.spec > openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec


    [root@node1 ~]# rpmbuild -bb --nocheck ~/openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec  

    [root@node1 ~]# yum -y localinstall ~/rpmbuild/RPMS/x86_64/openvswitch-2.5.0-1.x86_64.rpm  

    

    [root@node1 ~]# modprobe openvswitch && systemctl start openvswitch.service
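
    A quick optional check that the kernel module is loaded and the daemon is running:

    [root@node1 ~]# lsmod | grep openvswitch

    [root@node1 ~]# ovs-vsctl show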


    b. Configure the GRE tunnel

    [Node1]

    [root@node1 ~]# ovs-vsctl add-br obr0

    

    ****** Next, create the GRE port and add the new gre0 to obr0. Run the following command on node1:


    [root@node1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.132

  

    ****** Note: remote_ip is node2's IP


    [Node2]

    [root@node2 ~]# ovs-vsctl add-br obr0


    ****** Next, create the GRE port and add the new gre0 to obr0. Run the following command on node2:


    [root@node2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.131


    ****** Note: remote_ip is node1's IP


    ****** At this point the tunnel between node1 and node2 is established. Next, on node1 and node2, create a bridge br0 to replace Docker's default docker0: set br0 on node1 to 172.16.1.1/24 and br0 on node2 to 172.16.2.1/24, and add obr0 as a port of br0. The following commands must be run on both node1 and node2;

    node1 is used as the example here:

    [root@node1 ~]# brctl addbr br0               // create the Linux bridge

    [root@node1 ~]# brctl addif br0 obr0          // add obr0 as a port of br0

    [root@node1 ~]# ip link set dev docker0 down   // bring docker0 down

    [root@node1 ~]# ip link del dev docker0        // delete docker0
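
    ****** The ifcfg-br0 file created in the next step only takes effect after the network service is restarted. If you want br0 usable right away, you can assign its address by hand first (a sketch for node1; on node2 use 172.16.2.1/24):

    [root@node1 ~]# ip addr add 172.16.1.1/24 dev br0

    [root@node1 ~]# ip link set dev br0 up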


    ****** So that br0 also comes up after a reboot, create the interface file ifcfg-br0 under /etc/sysconfig/network-scripts

    [root@node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0

ONBOOT=yes

BOOTPROTO=static

IPADDR=172.16.1.1

NETMASK=255.255.255.0

GATEWAY=172.16.1.0

USERCTL=no

TYPE=Bridge

IPV6INIT=no


    ******** The same commands must also be run on node2 (with br0 set to 172.16.2.1/24) *******

    

    c. Add routes between the two nodes:

    [node1]

    [root@node1 ~]# cd /etc/sysconfig/network-scripts/

    [root@node1 ~]# ls ./

    ifcfg-br0   ifcfg-ens33   ifcfg-lo

    [root@node1 ~]# vim route-ens33

172.16.2.0/24 via 192.168.80.132 dev ens33


****** Note: ens33 is the name of node1's physical NIC; if yours is eth0, name the file route-eth0 instead


    [root@node1 ~]# service network restart


    [node2]

    [root@node2 ~]# cd /etc/sysconfig/network-scripts/

    [root@node2 ~]# vim route-ens33

172.16.1.0/24 via 192.168.80.131 dev ens33


    [root@node2 ~]# service network restart


    d. Test that the GRE tunnel is up

    [root@k8s-node1 ~]# ping -w 4 172.16.2.1

PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.

64 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=0.652 ms

64 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=0.281 ms

64 bytes from 172.16.2.1: icmp_seq=3 ttl=64 time=0.374 ms

64 bytes from 172.16.2.1: icmp_seq=4 ttl=64 time=0.187 ms


--- 172.16.2.1 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3002ms

rtt min/avg/max/mdev = 0.187/0.373/0.652/0.174 ms



III. Deploying Kubernetes


  1. Install on the master machine

    [master]

    [root@k8s-master ~]# yum -y install etcd kubernetes 


  2. Configure etcd

    etcd's legacy client port is 4001 (the current default is 2379); the configuration below has etcd listen on both ports, so change the following settings


    a. Configure etcd.conf

    [root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf_bak

    [root@k8s-master ~]# vim /etc/etcd/etcd.conf

# [member]

ETCD_NAME="etcd-master"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"


#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://k8s-master:2380"

ETCD_INITIAL_CLUSTER="etcd-master=http://k8s-master:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://k8s-master:2379,http://k8s-master:4001"


    b. Configure etcd.service

    [root@k8s-master ~]# cp /usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/etcd.service_bak

    [root@k8s-master ~]# vim /usr/lib/systemd/system/etcd.service

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""


    ****** Note: only modify the ExecStart line in the [Service] section


    [root@k8s-master ~]# mkdir -p /export/etcd

    [root@k8s-master ~]# chown -R etcd:etcd /export/etcd


    c. Start the etcd service

    [root@k8s-master ~]# systemctl daemon-reload

    [root@k8s-master ~]# systemctl enable etcd.service

    [root@k8s-master ~]# systemctl start etcd.service


    d. Verify that it works

    [root@k8s-master ~]# etcdctl member list

ffe21a7812eb7c5f: name=etcd-master peerURLs=http://k8s-master:2380 clientURLs=http://k8s-master:2379,http://k8s-master:4001 isLeader=true
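
    You can also ask etcdctl for the overall cluster health (etcdctl defaults to http://127.0.0.1:2379, which works here because etcd listens on 0.0.0.0); it should report that the single member and the cluster are healthy:

    [root@k8s-master ~]# etcdctl cluster-health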


  3. Configure Kubernetes


    a. Configure the apiserver

    [root@k8s-master ~]# cd /etc/kubernetes/

    [root@k8s-master ~]# cp apiserver apiserver_bak

    [root@k8s-master ~]# vim /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#


# The address on the local server to listen to.

#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"


# The port on the local server to listen on.

# KUBE_API_PORT="--port=8080"

KUBE_API_PORT="--port=8080"


# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

KUBELET_PORT="--kubelet-port=10250"


# Comma separated list of nodes in the etcd cluster

#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:2379"


# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"


# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"


# Add your own!

KUBE_API_ARGS=""

 

    b. Configure config

    [root@k8s-master ~]# cp config config_bak

    [root@k8s-master ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"


# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"


# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"


# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"


#******** add etcd server info ********#


# Etcd Server Configure

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:4001"

 


  4. Start the services

  [root@k8s-master ~]# 

  for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

  done
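
  Once the master services are up, an optional check is to ask the apiserver for the health of the control-plane components; scheduler, controller-manager and etcd should all report Healthy:

  [root@k8s-master ~]# kubectl get componentstatuses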



  5. The node machines only need the kubernetes package

    [all node machines]

    

    ****** node1 is used as the example here:

    [root@node1 ~]# yum -y install kubernetes 


    ****** Installing the kubernetes package pulls in docker automatically

 

  6. Configure Kubernetes on the node machines


    [root@node1 ~]# cd /etc/kubernetes

    [root@node1 ~]# cp kubelet kubelet_bak

    [root@node1 ~]# vim /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config


# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

#KUBELET_ADDRESS="--address=127.0.0.1"

KUBELET_ADDRESS="--address=0.0.0.0"


# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

KUBELET_PORT="--port=10250"


# You may leave this blank to use the actual hostname

#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# Use the node's own name here: k8s-node1 on node1, k8s-node2 on node2
KUBELET_HOSTNAME="--hostname-override=k8s-node1"


# location of the api-server

#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"


# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"


# Add your own!

KUBELET_ARGS=""
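
    ****** The kubernetes package also installs /etc/kubernetes/config on each node, and the packaged default points KUBE_MASTER at 127.0.0.1. If that is the case on your nodes, point it at the master as well (same key as in the master's config above); a minimal sketch:

    [root@node1 ~]# vim /etc/kubernetes/config

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"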

 


  7. Start the Kubernetes services on the nodes

    [root@node1 ~]# 

for SERVICES in kube-proxy kubelet docker; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

done



  8. Testing

  [master]

  

    a. Check the nodes:

    [root@k8s-master ~]# kubectl get nodes

NAME        STATUS    AGE

k8s-node1   Ready     1h

k8s-node2   Ready     1h


    b. Create the nginx Pod:

    [root@k8s-master ~]# mkdir /export/kube_containers

    [root@k8s-master ~]# cd /export/kube_containers

    [root@k8s-master ~]# vim nginx.yaml

apiVersion: v1

kind: Pod

metadata:

  name: nginx

  labels:  

    name: nginx

spec:

  containers:

    - resources:

        limits:

          cpu: 1

      image: nginx

      name: nginx

      ports:

        - containerPort: 80

          name: nginx


    c. Create the MySQL Pod resource file

    [root@k8s-master ~]# vim mysql.yaml

apiVersion: v1

kind: Pod

metadata:

  name: mysql

  labels: 

    name: mysql

spec: 

  containers: 

    - resources:

        limits:

          cpu: 0.5

      image: mysql

      name: mysql

      env:

        - name: MYSQL_ROOT_PASSWORD

          value: rootpwd

      ports: 

        - containerPort: 3306

          name: mysql

      volumeMounts:

          # name must match the volume name below

        - name: mysql-persistent-storage

          # mount path within the container

          mountPath: /var/lib/mysql

  volumes:

    - name: mysql-persistent-storage

      cinder:

        volumeID: bd82f7e2-wece-4c01-a505-4acf60b07f4a

        fsType: ext4
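
    ****** The volume above assumes an OpenStack Cinder block volume with that specific volumeID. If you are not running on OpenStack, a hostPath volume is a simple stand-in for testing; a sketch (the path /export/mysql-data is only an example and must exist on the node):

  volumes:
    - name: mysql-persistent-storage
      hostPath:
        path: /export/mysql-data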


    d. Create the resources

    [root@k8s-master ~]# kubectl create -f nginx.yaml

    [root@k8s-master ~]# kubectl create -f mysql.yaml


    e. Check the resource status

    [root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS              RESTARTS   AGE       IP        NODE

mysql                    0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-fnttl   0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-kb4hj   0/1       ContainerCreating   0          5m        <none>    k8s-node1


    ****** STATUS shows ContainerCreating because the nodes are still pulling the images; wait a moment and they will start. If you want to be sure, you can watch the network traffic with a tool such as nmon.


    ****** Check again:

    [root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE

mysql                    1/1       Running   0          19m       172.17.0.3   k8s-node2

nginx-controller-fnttl   1/1       Running   0          19m       172.17.0.2   k8s-node2

nginx-controller-kb4hj   1/1       Running   0          19m       172.17.0.2   k8s-node1


    ****** The Pods are now deployed and running, so STATUS is Running and READY shows 1/1.
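
    ****** To confirm the nginx Pod actually serves traffic, you can curl its Pod IP from the node it runs on (the IP comes from the kubectl output above; substitute your own). A 200 OK response means the container is serving:

    [root@k8s-node1 ~]# curl -I http://172.17.0.2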


 


  9. View the logs

    Taking the master's logs as the example:

    [root@k8s-master ~]# tail -f /var/log/messages | grep kube

Dec 11 09:54:11 192 kube-scheduler: I1211 09:54:11.380994   20445 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mysql", UID:"2f192467-a030-11e5-8a55-000c298cfaa1", APIVersion:"v1", ResourceVersion:"3522", FieldPath:""}): reason: 'scheduled' Successfully assigned mysql to dslave
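
    Because the services are managed by systemd, the same information is also available per unit through journalctl, for example:

    [root@k8s-master ~]# journalctl -u kube-apiserver -f

    [root@k8s-master ~]# journalctl -u kube-scheduler --since today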

 


IV. Common errors and solutions


  1. [Error 1]

    [root@k8s-master ~]# kubectl create -f mysql.yaml

Error from server (ServerTimeout): error when creating "mysql.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account


    [Solution]

    [root@k8s-master ~]# vim /etc/kubernetes/apiserver

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

Change it to:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"


  [Restart the services]

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

done



  2. [Error 2]

  When deploying a Pod, the node's logs report the following error:

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.745867   99650 manager.go:1557] Failed to create pod infra container: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.); Skipping pod "mysql_default"

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.955470   99650 pod_workers.go:111] Error syncing pod bcbb3b8a-a02a-11e5-8a55-000c298cfaa1, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.)


  [Solution]

  Cause: gcr.io is not reachable from the network; download the image archive locally:

http://www.sunmite.com/linux/installing-kubernetes-cluster-on-centos7-to-manage-pods-and-services/attachment/pause-0-8-0/


Import it on the node machines:

docker load --input pause-0.8.0.tar
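
Afterwards, confirm the image is present on the node:

docker images | grep pause

Alternatively, since step III.6 already points --pod-infra-container-image at registry.access.redhat.com/rhel7/pod-infrastructure, making sure that registry is reachable from the nodes avoids pulling the gcr.io pause image in the first place.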


The environment is now fully set up. If you run into problems, contact: [email protected]
