Deploying a Kubernetes Cluster on CentOS 7



Introduction

What is Kubernetes?

  Kubernetes is an open-source platform for automating the deployment, scaling, and operation of container clusters.


  With Kubernetes, you can respond to user demand quickly and effectively:


    a. Deploy your applications quickly and predictably

    b. Scale your applications rapidly

    c. Roll out new features seamlessly

    d. Save resources by optimizing hardware utilization

    

  The goal is to foster an ecosystem of components and tools that ease the burden of running applications in public and private clouds.


Key features of Kubernetes:


    a. Portable: supports public, private, hybrid, and multi-cloud environments

    b. Extensible: modular, pluggable, hookable, composable

    c. Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling

  

Kubernetes began as a Google project in 2014. It builds on Google's decade-plus of experience running large-scale production workloads, combined with the best ideas and practices from the community.


Kubernetes architecture:

(figure: Kubernetes architecture, architecture.png)

High-resolution version: https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.2/docs/design/architecture.png



Kubernetes consists of the following core components:


  a. etcd stores the state of the entire cluster;

  b. apiserver is the single entry point for resource operations and provides authentication, authorization, access control, API registration, and discovery;

  c. controller manager maintains cluster state, handling fault detection, auto-scaling, rolling updates, and so on;

  d. scheduler handles resource scheduling, placing Pods onto machines according to the configured scheduling policy;

  e. kubelet maintains container lifecycles and manages volumes (CVI) and networking (CNI);

  f. Container runtime manages images and actually runs Pods and containers (CRI);

  g. kube-proxy provides in-cluster service discovery and load balancing for Services;

  

Besides the core components, several add-ons are recommended:

  a. kube-dns provides DNS for the whole cluster

  b. Ingress Controller provides an external entry point for services

  c. Heapster provides resource monitoring

  d. Dashboard provides a GUI

  e. Federation provides clusters across availability zones

  f. Fluentd-elasticsearch provides cluster log collection, storage, and querying


  ****** For details see: https://www.kubernetes.org.cn/docs


I. Environment Overview


  The Kubernetes packages provide several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet,

  and kube-proxy. These services are managed by systemd, with configuration kept in one place: /etc/kubernetes. We will run these services on different hosts. The first host, k8s-master, will be the master of the Kubernetes cluster; it will run kube-apiserver, kube-controller-manager, and kube-scheduler, and it will also host etcd. The remaining hosts, k8s-node1 and k8s-node2, will be worker nodes running kubelet, kube-proxy, and docker.

  

  Operating system: CentOS 7 (64-bit)

  Open vSwitch version: 2.5.0

  Kubernetes version: v1.5.2

  etcd version: 3.1.9

  Docker version: 1.12.6

  Servers:

    192.168.80.130  k8s-master

    192.168.80.131  k8s-node1

    192.168.80.132  k8s-node2



II. Pre-deployment Preparation

  1. Set up passwordless SSH login

  [Master]

    [root@k8s-master ~]# ssh-keygen

    [root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node1

    [root@k8s-master ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub k8s-node2
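
    ****** To confirm passwordless login works, each of these should print the node's hostname without prompting for a password:

    [root@k8s-master ~]# ssh k8s-node1 hostname

    [root@k8s-master ~]# ssh k8s-node2 hostname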


  2. Steps to run on all machines

    

    a. Add hosts entries

    [root@k8s-master ~]# vim /etc/hosts

        192.168.80.130  k8s-master

        192.168.80.131  k8s-node1

        192.168.80.132  k8s-node2
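
    ****** A quick ping confirms the new entries resolve:

    [root@k8s-master ~]# ping -c 1 k8s-node1

    [root@k8s-master ~]# ping -c 1 k8s-node2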

    

    b. Synchronize time

    [root@k8s-master ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

    [root@k8s-master ~]# yum groupinstall "Development Tools" -y

    [root@k8s-master ~]# \cp -Rf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

    [root@k8s-master ~]# ntpdate 133.100.11.8

    [root@k8s-master ~]# sed -i 's#ZONE="America/New_York"#ZONE="Asia/Shanghai"#g' /etc/sysconfig/clock

    [root@k8s-master ~]# hwclock -w

    [root@k8s-master ~]# date -R
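
    ****** With every machine synced, the clocks can be compared in one shot from the master (using the passwordless SSH set up in step 1):

    [root@k8s-master ~]# for h in k8s-node1 k8s-node2; do ssh $h date; done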


  3. Install Open vSwitch on the two node machines (node1 shown as the example)

    

    a. Install Open vSwitch

    [root@node1 ~]# yum -y install lrzsz git wget python-devel ntp net-tools curl cmake epel-release rpmdevtools openssl-devel kernel-devel gcc redhat-rpm-config bridge-utils

    

    [root@node1 ~]# yum groupinstall "Development Tools" -y  

  

  

    [root@node1 ~]# mkdir -p ~/rpmbuild/SOURCES  

    [root@node1 ~]# wget http://openvswitch.org/releases/openvswitch-2.5.0.tar.gz  

    [root@node1 ~]# cp openvswitch-2.5.0.tar.gz ~/rpmbuild/SOURCES/  

    [root@node1 ~]# tar xfz openvswitch-2.5.0.tar.gz  

    [root@node1 ~]# sed 's/openvswitch-kmod, //g' openvswitch-2.5.0/rhel/openvswitch.spec > openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec


    [root@node1 ~]# rpmbuild -bb --nocheck ~/openvswitch-2.5.0/rhel/openvswitch_no_kmod.spec  

    [root@node1 ~]# yum -y localinstall ~/rpmbuild/RPMS/x86_64/openvswitch-2.5.0-1.x86_64.rpm  

    

    [root@node1 ~]# modprobe openvswitch && systemctl start openvswitch.service
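
    ****** Before creating any bridges, confirm Open vSwitch is up; both commands should run without errors:

    [root@node1 ~]# systemctl status openvswitch.service

    [root@node1 ~]# ovs-vsctl show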


    b. Configure the GRE tunnel

    [Node1]

    [root@node1 ~]# ovs-vsctl add-br obr0

    

    ****** Next, create the GRE port gre0 and add it to obr0. Run the following command on node1:


    [root@node1 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.132

  

    ****** Note: remote_ip is node2's IP


    [Node2]

    [root@node2 ~]# ovs-vsctl add-br obr0


    ****** Likewise, create gre0 and add it to obr0 on node2. Run the following command on node2:


    [root@node2 ~]# ovs-vsctl add-port obr0 gre0 -- set Interface gre0 type=gre options:remote_ip=192.168.80.131


    ****** Note: remote_ip is node1's IP


    ****** At this point the tunnel between node1 and node2 is established. Next, on both nodes, create a Linux bridge br0 to replace Docker's default docker0: br0 on node1 gets the address 172.16.1.1/24, br0 on node2 gets 172.16.2.1/24, and obr0 is attached to br0 as an interface. The following commands must be run on both node1 and node2;

    node1 shown as the example:

    [root@node1 ~]# brctl addbr br0               // create the Linux bridge

    [root@node1 ~]# brctl addif br0 obr0          // attach obr0 as a br0 interface

    [root@node1 ~]# ip link set dev docker0 down   // bring docker0 down

    [root@node1 ~]# ip link del dev docker0        // delete docker0


    ****** To make br0 persist across reboots, create the interface file ifcfg-br0 under /etc/sysconfig/network-scripts:

    [root@node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-br0

DEVICE=br0

ONBOOT=yes

BOOTPROTO=static

IPADDR=172.16.1.1

NETMASK=255.255.255.0

GATEWAY=172.16.1.0

USERCTL=no

TYPE=Bridge

IPV6INIT=no
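
    ****** To bring br0 up with its new address immediately (instead of waiting for the network restart in the next step):

    [root@node1 ~]# ifup br0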


    ******** Run the commands above on node2 as well (using its 172.16.2.1/24 address) *******

    

    c. Add routes between the two nodes:

    [node1]

    [root@node1 ~]# cd /etc/sysconfig/network-scripts/

    [root@node1 ~]# ls ./

    ifcfg-br0   ifcfg-ens33   ifcfg-lo

    [root@node1 ~]# vim route-ens33

172.16.2.0/24 via 192.168.80.132 dev ens33


****** Note: ens33 is node1's physical NIC name; if yours is eth0, the file should be named route-eth0


    [root@node1 ~]# service network restart
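
    ****** After the restart, the route to node2's subnet should appear in the kernel routing table:

    [root@node1 ~]# ip route show | grep 172.16.2.0
172.16.2.0/24 via 192.168.80.132 dev ens33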


    [node2]

    [root@node2 ~]# cd /etc/sysconfig/network-scripts/

    [root@node2 ~]# vim route-ens33

172.16.1.0/24 via 192.168.80.131 dev ens33


    [root@node2 ~]# service network restart


    d. Test that the GRE tunnel is up

    [root@k8s-node1 ~]# ping -w 4 172.16.2.1

PING 172.16.2.1 (172.16.2.1) 56(84) bytes of data.

64 bytes from 172.16.2.1: icmp_seq=1 ttl=64 time=0.652 ms

64 bytes from 172.16.2.1: icmp_seq=2 ttl=64 time=0.281 ms

64 bytes from 172.16.2.1: icmp_seq=3 ttl=64 time=0.374 ms

64 bytes from 172.16.2.1: icmp_seq=4 ttl=64 time=0.187 ms


--- 172.16.2.1 ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3002ms

rtt min/avg/max/mdev = 0.187/0.373/0.652/0.174 ms



III. Deploying Kubernetes


  1. Install on the master

    [master]

    [root@k8s-master ~]# yum -y install etcd kubernetes 


  2. Configure etcd

    etcd's legacy client port is 4001 (newer releases default to 2379); here we listen on both and modify the following settings


    a. Configure etcd.conf

    [root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf_bak

    [root@k8s-master ~]# vim /etc/etcd/etcd.conf

# [member]

ETCD_NAME="etcd-master"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"


#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://k8s-master:2380"

ETCD_INITIAL_CLUSTER="etcd-master=http://k8s-master:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://k8s-master:2379,http://k8s-master:4001"


    b. Configure etcd.service

    [root@k8s-master ~]# cp /usr/lib/systemd/system/etcd.service /usr/lib/systemd/system/etcd.service_bak

    [root@k8s-master ~]# vim /usr/lib/systemd/system/etcd.service

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""


    ****** Note: only modify the ExecStart line in the [Service] section


    [root@k8s-master ~]# mkdir -p /export/etcd

    [root@k8s-master ~]# chown -R etcd:etcd /export/etcd


    c. Start the etcd service

    [root@k8s-master ~]# systemctl daemon-reload

    [root@k8s-master ~]# systemctl enable etcd.service

    [root@k8s-master ~]# systemctl start etcd.service


    d. Verify it is working

    [root@k8s-master ~]# etcdctl member list

ffe21a7812eb7c5f: name=etcd-master peerURLs=http://k8s-master:2380 clientURLs=http://k8s-master:2379,http://k8s-master:4001 isLeader=true
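
    ****** etcdctl can also report overall cluster health; the output should look roughly like:

    [root@k8s-master ~]# etcdctl cluster-health
member ffe21a7812eb7c5f is healthy: got healthy result from http://k8s-master:2379
cluster is healthy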


  3. Configure Kubernetes


    a. Configure the apiserver

    [root@k8s-master ~]# cd /etc/kubernetes/

    [root@k8s-master ~]# cp apiserver apiserver_bak

    [root@k8s-master ~]# vim /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#


# The address on the local server to listen to.

#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"


# The port on the local server to listen on.

# KUBE_API_PORT="--port=8080"

KUBE_API_PORT="--port=8080"


# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

KUBELET_PORT="--kubelet-port=10250"


# Comma separated list of nodes in the etcd cluster

#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:2379"


# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"


# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"


# Add your own!

KUBE_API_ARGS=""

 

    b. Configure config

    [root@k8s-master ~]# cp config config_bak

    [root@k8s-master ~]# vim /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"


# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"


# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"


# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"


#******** add etcd server info ********#


# Etcd Server Configure

KUBE_ETCD_SERVERS="--etcd-servers=http://k8s-master:4001"

 


  4. Start the services

  [root@k8s-master ~]# 

  for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

  done
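
  ****** With the master services up, Kubernetes can report the health of its own components; the output should look roughly like:

  [root@k8s-master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}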



  5. The node machines only need the Kubernetes package

    [all node machines]

    

    ****** node1 shown as the example:

    [root@node1 ~]# yum -y install kubernetes 


    ****** Installing kubernetes pulls in docker automatically
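
    ****** Note: the GRE/br0 setup from Part II only takes effect if Docker is told to use br0 instead of recreating docker0. One way to do this (an assumption about your setup, adjust as needed) is to add the bridge flag to OPTIONS in /etc/sysconfig/docker before starting docker:

OPTIONS='--bridge=br0'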

 

  6. Configure Kubernetes on the nodes


    [root@node1 ~]# cd /etc/kubernetes

    [root@node1 ~]# cp kubelet kubelet_bak

    [root@node1 ~]# vim /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config


# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

#KUBELET_ADDRESS="--address=127.0.0.1"

KUBELET_ADDRESS="--address=0.0.0.0"


# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

KUBELET_PORT="--port=10250"


# You may leave this blank to use the actual hostname

#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# (set this to the node's own name: k8s-node1 on node1, k8s-node2 on node2)
KUBELET_HOSTNAME="--hostname-override=k8s-node1"


# location of the api-server

#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"


# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"


# Add your own!

KUBELET_ARGS=""
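
    ****** The node's /etc/kubernetes/config must also point at the master so kube-proxy can find the apiserver; assuming the same file layout as on the master, the relevant line is:

KUBE_MASTER="--master=http://k8s-master:8080"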

 


  7. Start the Kubernetes services on the nodes

    [root@node1 ~]# 

for SERVICES in kube-proxy kubelet docker; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

done



  8. Testing

  [master]

  

    a. List the nodes:

    [root@k8s-master ~]# kubectl get nodes

NAME        STATUS    AGE

k8s-node1   Ready     1h

k8s-node2   Ready     1h


    b. Create the nginx Pod:

    [root@k8s-master ~]# mkdir /export/kube_containers

    [root@k8s-master ~]# cd /export/kube_containers

    [root@k8s-master ~]# vim nginx.yaml

apiVersion: v1

kind: Pod

metadata:

  name: nginx

  labels:  

    name: nginx

spec:

  containers:

    - resources:

        limits:

          cpu: 1

      image: nginx

      name: nginx

      ports:

        - containerPort: 80

          name: nginx


    c. Create the MySQL Pod resource file

    [root@k8s-master ~]# vim mysql.yaml

apiVersion: v1

kind: Pod

metadata:

  name: mysql

  labels: 

    name: mysql

spec: 

  containers: 

    - resources:

        limits:

          cpu: 0.5

      image: mysql

      name: mysql

      env:

        - name: MYSQL_ROOT_PASSWORD

          value: rootpwd

      ports: 

        - containerPort: 3306

          name: mysql

      volumeMounts:

          # name must match the volume name below

        - name: mysql-persistent-storage

          # mount path within the container

          mountPath: /var/lib/mysql

  volumes:

    - name: mysql-persistent-storage

      cinder:

        volumeID: bd82f7e2-wece-4c01-a505-4acf60b07f4a

        fsType: ext4
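
    ****** The cinder volume above assumes an OpenStack environment with a pre-created Cinder volume. Without Cinder, a hostPath volume is a simple stand-in for testing (the path /export/mysql-data is hypothetical):

  volumes:
    - name: mysql-persistent-storage
      hostPath:
        path: /export/mysql-data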


    d. Create the resources

    [root@k8s-master ~]# kubectl create -f nginx.yaml

    [root@k8s-master ~]# kubectl create -f mysql.yaml


    e. Check resource status

    [root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS              RESTARTS   AGE       IP        NODE

mysql                    0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-fnttl   0/1       ContainerCreating   0          5m        <none>    k8s-node2

nginx-controller-kb4hj   0/1       ContainerCreating   0          5m        <none>    k8s-node1


    ****** STATUS shows ContainerCreating because the nodes are still pulling the images at this point; give it a little while. If in doubt, you can watch the network traffic with a tool such as nmon.


    ****** Check again:

    [root@k8s-master ~]# kubectl get po -o wide

NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE

mysql                    1/1       Running   0          19m       172.17.0.3   k8s-node2

nginx-controller-fnttl   1/1       Running   0          19m       172.17.0.2   k8s-node2

nginx-controller-kb4hj   1/1       Running   0          19m       172.17.0.2   k8s-node1


    ****** Everything is deployed and running now, so STATUS shows Running and READY shows 1/1.


 


  9. View logs

    Taking the master's logs as an example:

    [root@k8s-master ~]# tail -f /var/log/messages | grep kube

Dec 11 09:54:11 192 kube-scheduler: I1211 09:54:11.380994   20445 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"mysql", UID:"2f192467-a030-11e5-8a55-000c298cfaa1", APIVersion:"v1", ResourceVersion:"3522", FieldPath:""}): reason: 'scheduled' Successfully assigned mysql to dslave

 


IV. Common Errors and Solutions


  1. [Error 1]

    [root@k8s-master ~]# kubectl create -f mysql.yaml

Error from server (ServerTimeout): error when creating "mysql.yaml": No API token found for service account "default", retry after the token is automatically created and added to the service account


    [Solution]

    [root@k8s-master ~]# vim /etc/kubernetes/apiserver

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

Change it to:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"


  [Restart the services]

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

    systemctl restart $SERVICES

    systemctl enable $SERVICES

    systemctl status $SERVICES 

done



  2. [Error 2]

  When deploying Pods, the node logs report:

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.745867   99650 manager.go:1557] Failed to create pod infra container: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.); Skipping pod "mysql_default"

Dec 11 09:30:22 dslave kubelet: E1211 09:30:22.955470   99650 pod_workers.go:111] Error syncing pod bcbb3b8a-a02a-11e5-8a55-000c298cfaa1, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request.  details: (Network timed out while trying to connect to http://gcr.io/v1/repositories/google_containers/pause/images. You may want to check your internet connection or if you are behind a proxy.)


  [Solution]

  Cause: gcr.io is blocked, so the pause image cannot be pulled; download the archive locally from:

http://www.sunmite.com/linux/installing-kubernetes-cluster-on-centos7-to-manage-pods-and-services/attachment/pause-0-8-0/
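
****** If the archive is downloaded on the master, copy it to each node first (assuming it sits in the current directory):

scp pause-0.8.0.tar k8s-node1:

scp pause-0.8.0.tar k8s-node2: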


Then import it on each node:

docker load --input pause-0.8.0.tar


The environment is now fully set up. If you run into problems, contact: [email protected]
