Deploying a Kubernetes 1.18.3 single-node cluster from binary packages

Foreword:

Deploying a K8S cluster by hand, repeatedly, is the best way to internalize its architecture and to learn troubleshooting. I recommend taking a snapshot after each service you deploy, so you can freely re-break and re-build any service you don't yet understand or that seems problematic.

K8s architecture

Kubernetes is made up of the following core components:

  • etcd stores the state of the entire cluster;
  • apiserver is the sole entry point for resource operations, and provides authentication, authorization, access control, and API registration and discovery;
  • controller manager maintains the cluster state: failure detection, auto scaling, rolling updates, and so on;
  • scheduler schedules resources, placing Pods on suitable machines according to the configured scheduling policy;
  • kubelet maintains the container lifecycle and manages Volumes (CSI) and networking (CNI);
  • Container runtime manages images and actually runs Pods and containers (CRI);
  • kube-proxy provides in-cluster service discovery and load balancing for Services;

In addition to the core components, several add-ons are recommended, some of which have become CNCF-hosted projects:

  • CoreDNS provides DNS for the whole cluster
  • Ingress Controller provides external access to Services
  • Prometheus provides resource monitoring
  • Dashboard provides a GUI
  • Federation provides clusters spanning availability zones
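
A quick way to see these components in action once the cluster built in this article is up (kubectl is configured in Part VIII): the apiserver can report how it sees the other core components. This check is deprecated but still works in 1.18:

# Reports the health of scheduler, controller-manager and etcd
# as observed by the apiserver.
kubectl get componentstatuses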

I Environment

1 Server information

1 Configure the network
cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=db51cbf7-767a-4e16-9d1b-3453caeddf49
DEVICE=ens33
ONBOOT=yes
IPADDR=172.24.124.222
NETMASK=255.255.255.0
GATEWAY=172.24.124.1
Adjust the values above to match your own network.
 
 
2 Set the hostname
hostnamectl set-hostname k8scluster
 
 
3 Disable the firewall. Stop firewalld for now; later, re-enable it and allow the specific IPs and ports your services need.
systemctl stop firewalld
systemctl disable firewalld
 
4 Disable SELinux (takes effect after a reboot; run setenforce 0 to disable it immediately)
vi /etc/selinux/config
SELINUX=disabled
 
5 Disable the swap partition
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
 
6 Set up time synchronization
yum install -y ntpdate
ntpdate time.windows.com
 
 
7 Raise resource limits (file descriptors, processes, memlock)
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
 
 
8 Configure the Docker repo and install Docker
cd /etc/yum.repos.d
wget https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir /etc/docker
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
systemctl start  docker
systemctl status  docker
systemctl enable  docker
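
Before moving on, confirm Docker picked up daemon.json; the cgroup driver must be systemd to match the kubelet's cgroupDriver configured later in Part IX. A quick check:

docker info | grep -i 'cgroup driver'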
 
9 Configure hosts resolution
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.24.124.222 k8scluster
 
 
10 Create the cluster directories
mkdir -p /opt/etcd/{bin,cfg,ssl}              # needed on nodes that run etcd
mkdir -p /opt/kubernetes/{logs,bin,cfg,ssl}   # needed on every cluster node
mkdir -p /data/ssl                            # certificate build directory
mkdir -p /data/soft                           # download directory for packages
 
 
11 Install common tools
yum install -y vim curl wget lrzsz

2 Kernel upgrade and kernel tuning

1 Upgrade the kernel to the latest maintained long-term version
 
 
1) Install the dependencies needed by the kernel packages
    [ ! -f /usr/bin/perl ] && yum install perl -y
2) Configure the elrepo repository
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
3) Install the maintained long-term kernel
    yum --disablerepo="*" --enablerepo="elrepo-kernel" list available --showduplicates | grep -Po '^kernel-lt.x86_64\s+\K\S+(?=.el7)'
    yum --disablerepo="*" --enablerepo=elrepo-kernel install -y kernel-lt{,-devel}
4) Make the new kernel the default boot entry
    grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
5) Check the default boot kernel
    grubby --default-kernel
6) Docker's official kernel check script recommends (RHEL7/CentOS7: User namespaces disabled; add 'user_namespace.enable=1' to boot command line); enable it with the command below
    grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
7) Reboot the server
    reboot
8) Verify the running kernel version
    # uname -a
    Linux master-1 4.4.229-1.el7.elrepo.x86_64 #1 SMP Wed Jul 1 10:43:08 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
 
 
2 Kernel parameter tuning
vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
 
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.ip_forward = 1
 
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
 
 
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
kernel.sysrq = 1
 
 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
Save and exit, then load the bridge module and apply the parameters:
modprobe br_netfilter
sysctl -p
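
modprobe only loads br_netfilter for the current boot; without it, the net.bridge.* keys above fail to apply after a reboot. A minimal sketch (assuming a systemd-based CentOS 7 host) to make the module load persistent:

# systemd-modules-load reads this directory at every boot.
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
# Verify the module is loaded:
lsmod | grep br_netfilter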

II Installing cfssl

1 Download and install cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
Purpose: the Kubernetes cluster secures its communication with TLS, so we need to issue our own private certificates.
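
A quick sanity check that the three binaries are installed and executable:

which cfssl cfssljson cfssl-certinfo
cfssl version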

III Creating the root CA

Create the cluster root CA; the etcd and Kubernetes certificates are both issued from it.

cd /data/ssl
1) Create the cluster ca-config.json
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
2) Create ca-csr.json
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "ShenZhen",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
 
 
3) Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
 
 
4) Copy the CA files to the target ssl directory (and to the same directory on any other nodes)
cp /data/ssl/ca* /opt/kubernetes/ssl/
 
Pay attention to the expiry setting: 8760h is one year; for production, set it as long as is reasonable.
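
To double-check what was issued, inspect the CA certificate's subject and validity window (openssl is present on a stock CentOS 7 install):

openssl x509 -noout -subject -dates -in /opt/kubernetes/ssl/ca.pem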

IV Deploying single-node etcd

1 Download the binary package

cd /data/soft
wget  https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
tar xf etcd-v3.4.9-linux-amd64.tar.gz

2 Copy the binaries into the runtime directory

cp /data/soft/etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

3 Create the etcd certificates

1) Edit the CSR file
cd /data/ssl/
cat > /data/ssl/etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.24.124.222"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
 
 
Fill hosts with the planned etcd cluster IPs; it pays to reserve a few spare IPs, which makes scaling etcd out much easier later. An etcd cluster should have an odd number of members, normally three or five.
 
 
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
  -ca-key=/opt/kubernetes/ssl/ca-key.pem \
  -config=/opt/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
 
 
3) Copy the certificates into the node's target directory
cp /data/ssl/etcd*.pem /opt/etcd/ssl/

4 Create the etcd systemd unit

cat > /lib/systemd/system/etcd.service << 'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
ExecStart=/opt/etcd/bin/etcd \
  --name etcd-1 \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://172.24.124.222:2380 \
  --listen-peer-urls https://172.24.124.222:2380 \
  --listen-client-urls https://172.24.124.222:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://172.24.124.222:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster-state=new \
  --data-dir=/var/lib/etcd \
  --listen-metrics-urls=http://0.0.0.0:2381 \
  --logger=zap
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
 
[Install]
WantedBy=multi-user.target
EOF

5 Start etcd and check its status

1) Create the etcd working directory
mkdir -p /var/lib/etcd
2) Start the service
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
 
 
3) List the members
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" member list
4) Check endpoint status and health
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" endpoint status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" endpoint health
5) Check the exposed metrics
curl http://172.24.124.222:2381/metrics
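
As a final smoke test (a sketch; the key name /smoke-test is arbitrary), write, read, and delete a key through the TLS client endpoint:

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" put /smoke-test ok
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" get /smoke-test
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/kubernetes/ssl/ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://172.24.124.222:2379" del /smoke-test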

V Deploying kube-apiserver

1 Download the server package

cd /data/soft/
wget  https://dl.k8s.io/v1.18.3/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
Remove unneeded files:
cd kubernetes/server/bin
rm -rf *_tag
rm -rf *.tar
rm -rf  apiextensions-apiserver kubeadm mounter
Copy the binaries to the node's bin directory
cp /data/soft/kubernetes/server/bin/* /opt/kubernetes/bin/

2 Generate the certificates needed by kube-apiserver

1) Create the apiserver CSR file
cd /data/ssl/
cat > /data/ssl/kubernetes-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.30.0.1",
      "127.0.0.1",
      "172.24.124.222",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
Note: hosts must list every node IP plus any apiserver high-availability VIPs. When planning the cluster, reserve a block of IPs and write them all into the CSR; scaling out then becomes trivial. Without reserved IPs, growing the cluster means regenerating and replacing this certificate.
10.30.0.1 is the first address of the Service CIDR (the in-cluster kubernetes Service VIP)
172.24.124.222 is the apiserver address
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
3) Copy to the node's ssl directory:
cp /data/ssl/kubernetes*.pem /opt/kubernetes/ssl/
Note: only master nodes need these files

3 Generate the cluster bootstrap token

# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
bf20cc75c3d0fb71c58e0039750a3b7b
 
cat > /opt/kubernetes/ssl/bootstrap-token.csv << EOF
bf20cc75c3d0fb71c58e0039750a3b7b,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
 
 
Note: keep this token safe; nodes cannot join the cluster without it

4 Create the apiserver configuration file

cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--advertise-address=172.24.124.222 \\
--allow-privileged=true \\
--authorization-mode=Node,RBAC \\
--anonymous-auth=false \\
--bind-address=172.24.124.222 \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem  \\
--enable-admission-plugins=NodeRestriction \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/etcd.pem \\
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \\
--etcd-servers=https://172.24.124.222:2379 \\
--kubelet-https=true \\
--kubelet-client-certificate=/opt/kubernetes/ssl/kubernetes.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/kubernetes-key.pem \\
--secure-port=6443 \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-cluster-ip-range=10.30.0.0/16 \\
--service-node-port-range=20000-40000 \\
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \\
--token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \\
--runtime-config=api/all=true \\
--logtostderr=false \\
--log-dir=/opt/kubernetes/logs \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--v=2 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
 
Notes:
--logtostderr: log to stderr (false here, so logs go to --log-dir)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: enable RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: port range allocated to NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver HTTPS certificate
--etcd-xxxfile: certificates for connecting to etcd
--audit-log-xxx: audit log settings

5 Add a systemd unit

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

6 Start the kube-apiserver service

systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
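
Because anonymous auth is disabled, an unauthenticated curl against port 6443 would be rejected, so the simplest checks are that the secure port is listening and that the startup logs are clean:

ss -tlnp | grep 6443
journalctl -u kube-apiserver --no-pager | tail -n 20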

VI Deploying kube-controller-manager

1 Create the kube-controller-manager configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-name=kubernetes \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.30.0.0/16 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
 
Notes:
--master: connect to the apiserver via the local insecure port 8080.
--leader-elect: enable leader election when several replicas of this component run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA used to automatically sign kubelet certificates; must be the same CA the apiserver uses

2 Add a systemd unit


cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

3 Start kube-controller-manager

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status  kube-controller-manager
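
In 1.18 the controller manager still serves an insecure health endpoint on localhost port 10252 by default (removed in later releases), so a quick liveness check, which should answer ok:

curl http://127.0.0.1:10252/healthz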

VII Deploying kube-scheduler

1 Create the kube-scheduler configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
 
Notes:
--master: connect to the apiserver via the local insecure port 8080.
--leader-elect: enable leader election when several replicas run (HA)

2 Add a systemd unit

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

3 Start the service

systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status  kube-scheduler
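
The 1.18 scheduler likewise still exposes an insecure health endpoint, on port 10251 by default; it should also answer ok:

curl http://127.0.0.1:10251/healthz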

VIII Deploying kubectl

1 Create the CSR and generate the admin certificate

1) Create the admin CSR file
cat > /data/ssl/admin-csr.json << EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
 
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes admin-csr.json | cfssljson -bare admin
3) Copy the certificate files into place (copying on the master is optional, depending on whether the master also runs Pods)
cp /data/ssl/admin*.pem /opt/kubernetes/ssl/
 
4) The real purpose of the following set-cluster / set-credentials / set-context steps is to produce a trusted admin user for managing the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://172.24.124.222:6443
 
 
/opt/kubernetes/bin/kubectl config set-credentials admin \
   --client-certificate=/opt/kubernetes/ssl/admin.pem \
   --embed-certs=true \
   --client-key=/opt/kubernetes/ssl/admin-key.pem
    
/opt/kubernetes/bin/kubectl config set-context kubernetes \
   --cluster=kubernetes \
   --user=admin
 
 
/opt/kubernetes/bin/kubectl config use-context kubernetes
 
 
Note: these commands write /root/.kube/config; from now on kubectl uses this file to talk to the API. To run kubectl on another node, copy this file there.
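
With the kubeconfig in place, kubectl can now reach the apiserver; two quick checks:

/opt/kubernetes/bin/kubectl cluster-info
/opt/kubernetes/bin/kubectl version --short   # client and server should both report v1.18.3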

IX Deploying kubelet

1 Generate bootstrap.kubeconfig

1) Create a clusterrolebinding so that the kubelet-bootstrap user has the limited cluster permissions it needs
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
 
2) As with the kubectl setup above, the goal is to produce an authorized account and a kubeconfig that nodes use to join the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://172.24.124.222:6443 \
   --kubeconfig=bootstrap.kubeconfig
    
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
   --token=bf20cc75c3d0fb71c58e0039750a3b7b \
   --kubeconfig=bootstrap.kubeconfig  
    
    
/opt/kubernetes/bin/kubectl config set-context default \
   --cluster=kubernetes \
   --user=kubelet-bootstrap \
   --kubeconfig=bootstrap.kubeconfig
    
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
3) Copy bootstrap.kubeconfig into the node's cfg directory
cp /data/ssl/bootstrap.kubeconfig /opt/kubernetes/cfg

2 Create the kubelet configuration files

1) Create kubelet.conf
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8scluster \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--cni-conf-dir=/etc/cni/net.d \\
--cni-bin-dir=/opt/kubernetes/bin/cni \\
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1"
EOF
 
 
2) Create kubelet-config.yml
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.30.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Notes on the key settings: 1) clusterDNS must be the planned cluster DNS address; 2) anonymous: enabled: false turns off unauthenticated access; 3) clientCAFile points at the cluster CA, so any client presenting a CA-issued certificate passes authentication.
3) Create the working directory
mkdir -p /var/lib/kubelet

3 Create the systemd unit

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

4 Start the service

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

5 Review CSRs and admit nodes into the cluster

1) List nodes waiting to join (join approval is enabled, so requests must be approved manually)
/opt/kubernetes/bin/kubectl get csr
2) Approve all pending requests
/opt/kubernetes/bin/kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs /opt/kubernetes/bin/kubectl certificate approve
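
After approval, the kubelet fetches its signed certificate and registers the node. Expect the node to show NotReady until the CNI plugin is deployed in Part XI:

/opt/kubernetes/bin/kubectl get csr
/opt/kubernetes/bin/kubectl get nodes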

X Deploying kube-proxy

1 Install ipvs

yum install -y ipvsadm ipset conntrack
Note: Services can be forwarded in two ways. The first is iptables mode, which constantly adds and removes firewall rules and forwards relatively slowly. The second is ipvs mode, which is far more efficient and does not keep churning firewall rules.
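
ipvs mode also needs the ip_vs kernel modules. kube-proxy can usually load them itself, but loading them persistently avoids surprises after reboots. A sketch (on kernels older than 4.19, such as the 4.4 kernel-lt installed earlier, the conntrack module is nf_conntrack_ipv4; on newer kernels use nf_conntrack):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load
lsmod | grep ip_vs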

2 Generate the kube-proxy certificate

1) Create kube-proxy-csr.json
cat > /data/ssl/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
 
2) Generate the certificate
cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=/opt/kubernetes/ssl/ca-config.json \
   -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
 
3) Copy the certificates to the node's ssl directory
cp /data/ssl/kube-proxy*.pem /opt/kubernetes/ssl/

3 Generate kube-proxy.kubeconfig

1) Generate the node authentication kubeconfig and cluster user
For convenience, symlink kubectl onto the PATH first:
ln -s /opt/kubernetes/bin/kubectl /usr/bin/kubectl
 
kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://172.24.124.222:6443 \
   --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
   --embed-certs=true \
   --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
   --cluster=kubernetes \
   --user=kube-proxy \
   --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
 
2) Copy the kubeconfig to the node's cfg directory
cp /data/ssl/kube-proxy.kubeconfig /opt/kubernetes/cfg/
 
 
3) Create the working directory
mkdir -p /var/lib/kube-proxy

4 Create the systemd unit

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF

5 Create the configuration files

1) Create kube-proxy.conf
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
Note: create this on every node that runs kube-proxy
2) Create kube-proxy-config.yml
cat <<EOF >/opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8scluster
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
  syncPeriod: "5s"
iptables:
  masqueradeAll: true
EOF
 
Notes:
-- create this on every node that runs kube-proxy
-- hostnameOverride must match the kubelet's value
-- clusterCIDR is the Pod network CIDR (kube-proxy uses it to tell cluster-internal traffic from external traffic); it must match the Pod CIDR (10.244.0.0/16), not the Service range
-- mode: ipvs enables ipvs
-- scheduler selects the ipvs load-balancing algorithm (rr = round robin)

6 Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

7 Inspect the ipvs forwarding table

ipvsadm -L -n

XI Integrating the Flannel CNI

1 Deploy the CNI plugins

1) Create the CNI binary directory (on every node)
mkdir -p /opt/kubernetes/bin/cni
2) Download the CNI plugins
cd /data/soft/
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
 
3) Extract into the target directory
tar xf cni-plugins-linux-amd64-v0.8.6.tgz  -C /opt/kubernetes/bin/cni

2 Deploy flannel

1) Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
2) Change the backend type to host-gw:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }
Notes:
-- Type selects the backend (network) mode
-- Network defines the Pod CIDR
 
3) Apply the manifest:
kubectl apply -f kube-flannel.yml
4) Check that the flannel Pod was created:
kubectl get pods -n kube-system
5) Check node status:
kubectl get node
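
With the host-gw backend, flannel programs a plain route to every other node's Pod subnet, which is why host-gw requires all nodes to share the same layer-2 network. On this single node there is just the local subnet, but the check generalizes:

ip route | grep 10.244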

3 Authorizing apiserver-to-kubelet communication

1) Create the RBAC resources (a ClusterRole plus a ClusterRoleBinding for the apiserver's certificate user, kubernetes)
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
 
2) Apply the RBAC resources
kubectl apply -f apiserver-to-kubelet-rbac.yaml
 
Note:
-- When kubectl needs to talk to a kubelet (for example kubectl logs or kubectl exec), the request is proxied by the apiserver over the kubelet's secure port. Because the kubelet was configured with anonymous access disabled (anonymous: enabled: false), the apiserver's identity must be explicitly authorized, which is what this binding grants. If anonymous access were enabled instead, the read-only port 10255 could be used without any authorization.
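
To confirm the binding works, fetch logs from any running pod; without it the apiserver would get a 403 Forbidden from the kubelet. Replace the pod name below with one from your own kubectl get pods -n kube-system output:

kubectl logs -n kube-system <some-kube-flannel-pod>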

XII Deploying CoreDNS

1 Download the official manifest

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base

2 Edit the manifest

1) Set the clusterIP:
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.30.0.2
clusterIP must be changed to the planned DNS address; it has to match the kubelet's clusterDNS (10.30.0.2).
 
 
2) Set the cluster domain suffix:
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local. in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
3) Set the resource limits:
 resources:
  limits:
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 70Mi
 

3 Deploy

1) Apply the manifest
kubectl apply -f coredns.yaml.base
2) Check the Pods
kubectl get po -n kube-system
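
To verify DNS end to end, resolve the kubernetes Service from a throwaway test Pod (busybox:1.28 is used deliberately; nslookup is broken in several later busybox images):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default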

 
