The Happy Little Bee - Installing K8s at Home

Container technology is all the rage right now, so to keep up with the times I installed k8s on CentOS 7.0:

1: I grabbed two servers:

192.168.122.168  k8s-master

192.168.122.234  k8s-node

Set the hostname on each machine accordingly:

hostnamectl set-hostname k8s-master

hostnamectl set-hostname k8s-node
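So the two machines can find each other by name without DNS, it also helps to add both entries to /etc/hosts on each box. A minimal sketch (the helper name and file argument are mine, for illustration; the IPs are the ones above):

```shell
# Append the two cluster entries to a hosts file.
# add_cluster_hosts is a hypothetical helper; pass /etc/hosts on a real box.
add_cluster_hosts() {
  cat >> "$1" <<'EOF'
192.168.122.168  k8s-master
192.168.122.234  k8s-node
EOF
}

# e.g. run on both machines:
# add_cluster_hosts /etc/hosts
```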

2: Turn off the firewall (and SELinux):

systemctl stop firewalld

systemctl disable firewalld

setenforce 0

# check the firewall state
firewall-cmd --state
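Note that setenforce 0 only disables SELinux until the next reboot; to make it stick, the SELINUX= line in /etc/selinux/config also has to be changed. A sketch (the helper name is mine; the sed expression is the whole trick):

```shell
# Switch the SELINUX=... line to "disabled" in a selinux config file, in place.
persist_selinux_off() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}

# on a real box:
# persist_selinux_off /etc/selinux/config
```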

3: Install the epel-release repository

yum -y install epel-release

4: On the master host 192.168.122.168, install the kubernetes Master components

Install etcd and kubernetes-master with yum:

yum -y install etcd kubernetes-master

Edit /etc/etcd/etcd.conf so that it reads as follows:

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
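With all those commented-out defaults it's easy to miss a typo in the few lines you actually changed. A quick way to print only the active settings (a small sketch; the helper name is mine):

```shell
# Print the non-comment, non-blank lines of a config file.
active_settings() {
  grep -vE '^[[:space:]]*(#|$)' "$1"
}

# e.g.:
# active_settings /etc/etcd/etcd.conf
```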
 

Edit /etc/kubernetes/apiserver so that it reads as follows:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
 

Note: the ServiceAccount admission plugin was deliberately removed from the KUBE_ADMISSION_CONTROL line above; with it enabled, the apiserver rejects pod creation until service-account signing keys are configured.

5: Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot.

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES ; done
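After the loop it's worth confirming everything actually came up. A minimal check, assuming the defaults used in this article (insecure apiserver port 8080; the function name is mine):

```shell
# Report whether each master component is active, then probe the apiserver.
check_master_services() {
  for s in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    if systemctl is-active --quiet "$s" 2>/dev/null; then
      echo "$s: active"
    else
      echo "$s: NOT active"
    fi
  done
  # the insecure port also serves a health endpoint:
  curl -fs --max-time 2 http://127.0.0.1:8080/healthz && echo \
    || echo "apiserver: no response on :8080"
}

check_master_services
```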

6: Define the flannel network in etcd

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
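flannel will only work if the value stored under this key is valid JSON with a "Network" entry, so a quick sanity check is cheap insurance. A sketch (grep-level only, not a full JSON parse; the helper name is mine):

```shell
# Sanity-check a flannel network config string before/after writing it.
check_flannel_conf() {
  case "$1" in
    *'"Network"'*) echo "looks ok: $1" ;;
    *) echo "missing \"Network\" key: $1"; return 1 ;;
  esac
}

check_flannel_conf '{"Network":"172.17.0.0/16"}'
# and read back what etcd actually stored:
# etcdctl get /atomic.io/network/config
```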

》》》》》》》》 That's all the installation and configuration on the master host. 》》》》》》》》

Next, configure and install things on the node machine.

7: On the node machine 192.168.122.234, install the kubernetes Node and flannel components

yum -y install flannel kubernetes-node

8: Point flannel at the etcd service by editing /etc/sysconfig/flanneld so that it reads as follows:

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.122.168:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
 

9: Edit /etc/kubernetes/config so that it reads as follows:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.122.168:8080"
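Before starting the node services, it's worth checking from the node that the master endpoints referenced in the two files above are actually reachable. A small sketch (the helper name is mine; the IP and ports are the ones from this article):

```shell
# Probe a URL and report, without aborting the script on failure.
reach() {
  if curl -fs --max-time 2 "$1" >/dev/null 2>&1; then
    echo "reachable: $1"
  else
    echo "UNREACHABLE: $1"
  fi
}

reach http://192.168.122.168:2379/version   # etcd (used by flanneld)
reach http://192.168.122.168:8080/version   # kube-apiserver
```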
 

10: Edit the node's kubelet config file /etc/kubernetes/kubelet:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.122.234"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.122.168:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
 

11: On the node machine, start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot.

for SERVICES in kube-proxy kubelet docker flanneld;do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES; done
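Once flanneld is up, it writes the subnet it leased from etcd to /run/flannel/subnet.env, and after docker restarts its docker0 bridge should land inside that range. A helper to pull the subnet out for comparison (the function name is mine):

```shell
# Extract the leased subnet from a flannel subnet.env file.
flannel_subnet() {
  sed -n 's/^FLANNEL_SUBNET=//p' "$1"
}

# on the node, after flanneld has started:
# flannel_subnet /run/flannel/subnet.env
# compare with: ip addr show docker0
```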

》》》》》》 All the master and node configuration is now done; let's see whether the k8s cluster actually came up. 》》》》》》

12: On the master host 192.168.122.168, run the following command to list the registered node machines:

kubectl get nodes

[root@k8s-master ~]# kubectl get nodes
NAME              STATUS    AGE
192.168.122.168   Ready     2h
 

Success! You can also use kubectl describe nodes to inspect how the node is doing:

[root@k8s-master ~]# kubectl describe node
Name:            192.168.122.168
Role:            
Labels:            beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=192.168.122.168
Taints:            <none>
CreationTimestamp:    Sat, 19 Oct 2019 07:24:05 -0400
Phase:            
Conditions:
  Type            Status    LastHeartbeatTime            LastTransitionTime            Reason                Message
  ----            ------    -----------------            ------------------            ------                -------
  OutOfDisk         False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 09:24:10 -0400     KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure     False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 07:24:05 -0400     KubeletHasSufficientMemory     kubelet has sufficient memory available
  DiskPressure         False     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 07:24:05 -0400     KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready         True     Sat, 19 Oct 2019 09:46:13 -0400     Sat, 19 Oct 2019 09:24:20 -0400     KubeletReady             kubelet is posting ready status
Addresses:        192.168.122.168,192.168.122.168,192.168.122.168  (note: these three entries are this one node's LegacyHostIP, InternalIP, and Hostname addresses, which all happen to be the same IP here; they are not three separate nodes)
Capacity:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                    2
 memory:                1014848Ki
 pods:                    110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:    0
 cpu:                    2
 memory:                1014848Ki
 pods:                    110
System Info:
 Machine ID:            ecb8036a93214aef8f127f89ceb8fb99
 System UUID:            ECB8036A-9321-4AEF-8F12-7F89CEB8FB99
 Boot ID:            b529266e-89cf-4d5f-9f35-54d727f522a6
 Kernel Version:        3.10.0-957.el7.x86_64
 OS Image:            CentOS Linux 7 (Core)
 Operating System:        linux
 Architecture:            amd64
 Container Runtime Version:    docker://1.13.1
 Kubelet Version:        v1.5.2
 Kube-Proxy Version:        v1.5.2
ExternalID:            192.168.122.168
Non-terminated Pods:        (0 in total)
  Namespace            Name        CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ---------            ----        ------------    ----------    ---------------    -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.
  CPU Requests    CPU Limits    Memory Requests    Memory Limits
  ------------    ----------    ---------------    -------------
  0 (0%)    0 (0%)        0 (0%)        0 (0%)
Events:
  FirstSeen    LastSeen    Count    From                SubObjectPath    Type        Reason            Message
  ---------    --------    -----    ----                -------------    --------    ------            -------
  22m        22m        1    {kubelet 192.168.122.168}            Normal        Starting        Starting kubelet.
  22m        22m        1    {kubelet 192.168.122.168}            Warning        ImageGCFailed        unable to find data for container /
  22m        22m        2    {kubelet 192.168.122.168}            Normal        NodeHasSufficientDisk    Node 192.168.122.168 status is now: NodeHasSufficientDisk
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeHasSufficientMemory    Node 192.168.122.168 status is now: NodeHasSufficientMemory
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeHasNoDiskPressure    Node 192.168.122.168 status is now: NodeHasNoDiskPressure
  22m        22m        1    {kubelet 192.168.122.168}            Warning        Rebooted        Node 192.168.122.168 has been rebooted, boot id: b529266e-89cf-4d5f-9f35-54d727f522a6
  22m        22m        1    {kubelet 192.168.122.168}            Normal        NodeNotReady        Node 192.168.122.168 status is now: NodeNotReady
  21m        21m        1    {kubelet 192.168.122.168}            Normal        NodeReady        Node 192.168.122.168 status is now: NodeReady
[root@k8s-master ~]# 
 

To test this concretely, reboot the node server and watch kubectl get nodes on the master: the node's status flips to NotReady.
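If you'd rather script that check than eyeball the table, the STATUS column is easy to filter. A sketch that lists any node whose status isn't Ready, reading kubectl get nodes output on stdin (the helper name is mine):

```shell
# Print the names of nodes whose STATUS column is not exactly "Ready".
not_ready_nodes() {
  awk 'NR > 1 && $2 != "Ready" { print $1 }'
}

# on the master:
# kubectl get nodes | not_ready_nodes
```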
