Manually Installing Kubernetes 1.11 with IPVS Enabled

ERRATUM:

#Many readers reported authentication problems after following this guide. I re-checked it, and the configuration itself is correct.
#The cause is a quirk in 51cto's markdown rendering: code pasted into the article came out with broken indentation.
#The error reported in the comments, "error: unable to upgrade connection: Unauthorized",
#is actually caused by bad indentation in /etc/kubernetes/kubelet-config.yml when the code is copied verbatim.
#The article has been corrected. To save you some trouble, here is the original note: http://note.youdao.com/noteshare?id=31d9d5db79cc3ae27e72c029b09ac4ab&sub=9489CC3D8A8C44F197A8A421DC7209D7

Environment:

OS: CentOS 7.5 1804
Kernel: 3.10.0-862.el7.x86_64

Docker version: 18.06.0-ce

Kubernetes version: v1.11
    master      192.168.1.1
    node1       192.168.1.2
    node2       192.168.1.3

etcd version: v3.2.22
    etcd1       192.168.1.4
    etcd2       192.168.1.5
    etcd3       192.168.1.6

1. Preparation

For convenience, run all commands as root.
The following steps only need to be performed on the Kubernetes cluster nodes.

  • Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0

systemctl disable firewalld
systemctl stop firewalld
  • Disable swap
swapoff -a
  • Configure forwarding-related kernel parameters; skipping this can cause errors later
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

sysctl --system
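
#If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter
#module is probably not loaded yet; loading it and re-checking is an optional sanity step:
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables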
  • Load the IPVS kernel modules
cat << EOF > /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
# Load every IPVS module shipped with the running kernel
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"
for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*)\.ko\.xz#\1#'\`; do
    # Only modprobe modules that modinfo can resolve
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules 
bash /etc/sysconfig/modules/ipvs.modules
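
#Optional sanity check: the IPVS and conntrack modules should now be loaded
lsmod | grep -E 'ip_vs|nf_conntrack'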
  • Install cfssl
#Installing on the master node is sufficient!!!

wget -O /bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget -O /bin/cfssl-certinfo  https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /bin/cfssl*
  • Install docker and remove the docker0 bridge
#This assumes the docker-ce yum repository is already configured
yum install docker-ce

systemctl start docker

cat << EOF > /etc/docker/daemon.json
{   "registry-mirrors": ["https://registry.docker-cn.com"],
    "live-restore": true,
    "default-shm-size": "128M",
    "bridge": "none",
    "max-concurrent-downloads": 10,
    "oom-score-adjust": -1000,
    "debug": false
}   
EOF 

systemctl restart docker

#After the restart, run ip a; the docker0 interface should no longer appear

2. Installing etcd

  • Prepare the etcd certificates

    Run on the master node

mkdir -pv $HOME/ssl && cd $HOME/ssl

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

cat > etcd-ca-csr.json << EOF
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
      "127.0.0.1",
      "192.168.1.4",
      "192.168.1.5",
      "192.168.1.6"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "etcd",
            "OU": "Etcd Security"
        }
    ]
}
EOF

#Generate the certificates and copy them to the other etcd nodes

cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -pv /etc/etcd/ssl
cp etcd*.pem /etc/etcd/ssl

scp -r /etc/etcd 192.168.1.4:/etc/
scp -r /etc/etcd 192.168.1.5:/etc/
scp -r /etc/etcd 192.168.1.6:/etc/
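
#Optional: inspect the generated server certificate and confirm its SAN list covers
#all three etcd IPs (cfssl-certinfo was installed alongside cfssl above)
cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem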
  • Install and start etcd on the etcd1 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.4:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd1"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.4:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.4:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Install and start etcd on the etcd2 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.5:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.5:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd2"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.5:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.5:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Install and start etcd on the etcd3 host
yum install -y etcd 

cat << EOF > /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="https://192.168.1.6:2380"
ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="etcd3"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379,https://192.168.1.6:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.4:2380,etcd2=https://192.168.1.5:2380,etcd3=https://192.168.1.6:2380"
ETCD_INITIAL_CLUSTER_TOKEN="BigBoss"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_CLIENT_CERT_AUTH="false"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_AUTO_TLS="false"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
#ETCD_PEER_CLIENT_CERT_AUTH="false"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"
EOF

chown -R etcd.etcd /etc/etcd
systemctl enable etcd
systemctl start etcd
  • Check the cluster status
#Run on the etcd1 node

etcdctl --endpoints "https://127.0.0.1:2379"   --ca-file=/etc/etcd/ssl/etcd-ca.pem  \
--cert-file=/etc/etcd/ssl/etcd.pem   --key-file=/etc/etcd/ssl/etcd-key.pem   cluster-health
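
#You can also list the members; all three should report as started
etcdctl --endpoints "https://127.0.0.1:2379"   --ca-file=/etc/etcd/ssl/etcd-ca.pem  \
--cert-file=/etc/etcd/ssl/etcd.pem   --key-file=/etc/etcd/ssl/etcd-key.pem   member list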

3. Preparing the Kubernetes Certificates

Run on the master node

  • Create the working directory
mkdir -pv $HOME/ssl && cd $HOME/ssl
  • Configure the root CA
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shenzhen",
      "L": "Shenzhen",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
     "expiry": "87600h"
  }
}
EOF
  • Generate the root CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*.pem
  • Configure the kube-apiserver certificate
#10.96.0.1 is the first IP in the service-cluster-ip-range passed to kube-apiserver

cat > kube-apiserver-csr.json << EOF
{
    "CN": "kube-apiserver",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3",
      "10.96.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
ls kube-apiserver*.pem
  • Configure the kube-controller-manager certificate
cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-controller-manager",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-controller-manager certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
ls kube-controller-manager*.pem
  • Configure the kube-scheduler certificate
cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.1.1",
      "192.168.1.2",
      "192.168.1.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-scheduler",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-scheduler certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
ls kube-scheduler*.pem
  • Configure the kube-proxy certificate
cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:kube-proxy",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the kube-proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*.pem
  • Configure the admin certificate
cat > admin-csr.json << EOF
{
    "CN": "admin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shenzhen",
            "L": "Shenzhen",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF
  • Generate the admin certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*.pem
  • Copy the generated certificates and distribute them to the other nodes
mkdir -pv /etc/kubernetes/pki
cp ca*.pem admin*.pem kube-proxy*.pem kube-scheduler*.pem kube-controller-manager*.pem kube-apiserver*.pem /etc/kubernetes/pki
scp -r /etc/kubernetes 192.168.1.2:/etc/
scp -r /etc/kubernetes 192.168.1.3:/etc/

4. Installing the Master

  • Download and unpack the server tarball and set up environment variables
cd /root
wget https://dl.k8s.io/v1.11.1/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz -C /usr/local
mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
ln -s kubernetes-v1.11 /usr/local/kubernetes

cat > /etc/profile.d/kubernetes.sh << EOF
k8s_home=/usr/local/kubernetes
export PATH=\$k8s_home/server/bin:\$PATH
source <(kubectl completion bash)
EOF

source /etc/profile.d/kubernetes.sh
kubectl version
  • Generate the kubeconfig files

    • Use TLS Bootstrapping
    export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    
    cat > /etc/kubernetes/token.csv << EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
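    # Format: token,user,uid,"group"; the kubelet presents this token during TLS bootstrap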
    • Create the kubelet bootstrapping kubeconfig
    cd /etc/kubernetes
    
    export KUBE_APISERVER="https://192.168.1.1:6443"
    
    kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kubelet-bootstrap.conf
    
    kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap.conf
    
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap.conf
    
    kubectl config use-context default --kubeconfig=kubelet-bootstrap.conf
    • Create the kube-controller-manager kubeconfig
    export KUBE_APISERVER="https://192.168.1.1:6443"
    
    kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-controller-manager.conf
    
    kubectl config set-credentials kube-controller-manager \
    --client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem \
    --client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.conf
    
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-controller-manager \
    --kubeconfig=kube-controller-manager.conf
    
    kubectl config use-context default --kubeconfig=kube-controller-manager.conf
    • Create the kube-scheduler kubeconfig
    export KUBE_APISERVER="https://192.168.1.1:6443"
    
    kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=${KUBE_APISERVER} \
     --kubeconfig=kube-scheduler.conf
    
    kubectl config set-credentials kube-scheduler \
    --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \
    --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.conf
    
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-scheduler \
    --kubeconfig=kube-scheduler.conf
    
    kubectl config use-context default --kubeconfig=kube-scheduler.conf
    • Create the kube-proxy kubeconfig
    export KUBE_APISERVER="https://192.168.1.1:6443"
    kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=kube-proxy.conf
    
    kubectl config set-credentials kube-proxy \
    --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
    --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.conf
    
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.conf
    
    kubectl config use-context default --kubeconfig=kube-proxy.conf
    • Create the admin kubeconfig
    export KUBE_APISERVER="https://192.168.1.1:6443"
    
    kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=${KUBE_APISERVER} \
    --kubeconfig=admin.conf
    
    kubectl config set-credentials admin \
    --client-certificate=/etc/kubernetes/pki/admin.pem \
    --client-key=/etc/kubernetes/pki/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.conf
    
    kubectl config set-context default \
    --cluster=kubernetes \
    --user=admin \
    --kubeconfig=admin.conf
    
    kubectl config use-context default --kubeconfig=admin.conf
    • Copy kubelet-bootstrap.conf and kube-proxy.conf to the other nodes
    scp kubelet-bootstrap.conf kube-proxy.conf 192.168.1.2:/etc/kubernetes
    scp kubelet-bootstrap.conf kube-proxy.conf 192.168.1.3:/etc/kubernetes
    cd $HOME
  • Configure and start kube-apiserver

    • Copy the etcd CA and certificates
    mkdir -pv /etc/kubernetes/pki/etcd
    cd /etc/etcd/ssl
    cp etcd-ca.pem etcd-key.pem etcd.pem /etc/kubernetes/pki/etcd
    • Generate the service account keypair
    openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
    openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
    ls /etc/kubernetes/pki/sa.*
    cd $HOME
    • Unit file
    cat > /etc/systemd/system/kube-apiserver.service << EOF
    [Unit]
    Description=Kubernetes API Service
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    ExecStart=/usr/local/kubernetes/server/bin/kube-apiserver \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBE_ETCD_ARGS \\
          \$KUBE_API_ADDRESS \\
          \$KUBE_SERVICE_ADDRESSES \\
          \$KUBE_ADMISSION_CONTROL \\
          \$KUBE_APISERVER_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    • The following config file is shared by kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy
    cat > /etc/kubernetes/config << EOF
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=2"
    EOF
    
    cat > /etc/kubernetes/apiserver << EOF
    KUBE_API_ADDRESS="--advertise-address=192.168.1.1"
    KUBE_ETCD_ARGS="--etcd-servers=https://192.168.1.4:2379,https://192.168.1.5:2379,https://192.168.1.6:2379 --etcd-cafile=/etc/kubernetes/pki/etcd/etcd-ca.pem --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem"
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.96.0.0/12"
    KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
    KUBE_APISERVER_ARGS="--allow-privileged=true --authorization-mode=Node,RBAC --enable-bootstrap-token-auth=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --service-account-key-file=/etc/kubernetes/pki/sa.pub --enable-swagger-ui=true --secure-port=6443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --kubelet-client-certificate=/etc/kubernetes/pki/admin.pem --kubelet-client-key=/etc/kubernetes/pki/admin-key.pem"
    EOF
    • Start
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    systemctl status kube-apiserver
    • Access test
    curl -k https://192.168.1.1:6443/
    
    If you see the following response, the apiserver is up (the 401 is expected because the request is unauthenticated):
    {
      "kind": "Status",
      "apiVersion": "v1",
      "metadata": {},
      "status": "Failure",
      "message": "Unauthorized",
      "reason": "Unauthorized",
      "code": 401
    }
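
    #With the admin client certificate the same endpoint responds normally;
    #an optional authenticated smoke test:
    curl --cacert /etc/kubernetes/pki/ca.pem \
    --cert /etc/kubernetes/pki/admin.pem \
    --key /etc/kubernetes/pki/admin-key.pem \
    https://192.168.1.1:6443/version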
  • Configure and start kube-controller-manager

    • Unit file
    cat > /etc/systemd/system/kube-controller-manager.service << EOF
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    ExecStart=/usr/local/kubernetes/server/bin/kube-controller-manager \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBECONFIG \\
          \$KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    • Config file
    cat >/etc/kubernetes/controller-manager<<EOF
    KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-controller-manager.conf"
    KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --cluster-cidr=10.0.0.0/8 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --service-account-private-key-file=/etc/kubernetes/pki/sa.key --root-ca-file=/etc/kubernetes/pki/ca.pem --leader-elect=true --use-service-account-credentials=true --node-monitor-grace-period=10s --pod-eviction-timeout=10s --allocate-node-cidrs=true --controllers=*,bootstrapsigner,tokencleaner"
    EOF
    • Start
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    systemctl status kube-controller-manager
  • Configure and start kube-scheduler

    • systemd unit file
    cat > /etc/systemd/system/kube-scheduler.service << EOF
    [Unit]
    Description=Kubernetes Scheduler Plugin
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/scheduler
    ExecStart=/usr/local/kubernetes/server/bin/kube-scheduler \\
              \$KUBE_LOGTOSTDERR \\
              \$KUBE_LOG_LEVEL \\
              \$KUBECONFIG \\
              \$KUBE_SCHEDULER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    • Config file
    cat > /etc/kubernetes/scheduler << EOF
    KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-scheduler.conf"
    KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"
    EOF
    • Start
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl start kube-scheduler
    systemctl status kube-scheduler
  • Configure kubectl
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node
  • Check the status of each component
kubectl get componentstatuses   

[root@master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
  • Configure kubelet to use bootstrap tokens
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
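
#Optional check: the binding should grant system:node-bootstrapper to the kubelet-bootstrap user
kubectl describe clusterrolebinding kubelet-bootstrap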

5. Configuring CNI and kubelet

  • On the master

    • Download the CNI plugins
    cd /root
    wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
    mkdir /opt/cni/bin -p
    tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure and start kubelet
    #Unit file
    
    cat > /etc/systemd/system/kubelet.service << EOF
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/local/kubernetes/server/bin/kubelet \\
              \$KUBE_LOGTOSTDERR \\
              \$KUBE_LOG_LEVEL \\
              \$KUBELET_CONFIG \\
              \$KUBELET_HOSTNAME \\
              \$KUBELET_POD_INFRA_CONTAINER \\
              \$KUBELET_ARGS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat > /etc/kubernetes/config << EOF
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=2"
    EOF
    
    cat > /etc/kubernetes/kubelet << EOF
    KUBELET_HOSTNAME="--hostname-override=192.168.1.1"
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
    KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
    KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
    EOF
    
    cat > /etc/kubernetes/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.1.1
    port: 10250
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local.
    hairpinMode: promiscuous-bridge
    serializeImagePulls: false
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
      anonymous:
        enabled: false
      webhook:
        enabled: false
    EOF
    • Start kubelet
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    systemctl status kubelet
  • On node1

    • Download the node tarball and CNI plugins
    cd /root
    wget https://dl.k8s.io/v1.11.1/kubernetes-node-linux-amd64.tar.gz
    wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
    
    tar -xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
    mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
    ln -s kubernetes-v1.11 /usr/local/kubernetes
    mkdir /opt/cni/bin -p
    tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure and start kubelet
    #systemd unit file
    cat > /etc/systemd/system/kubelet.service << EOF
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/local/kubernetes/node/bin/kubelet \\
              \$KUBE_LOGTOSTDERR \\
              \$KUBE_LOG_LEVEL \\
              \$KUBELET_CONFIG \\
              \$KUBELET_HOSTNAME \\
              \$KUBELET_POD_INFRA_CONTAINER \\
              \$KUBELET_ARGS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat > /etc/kubernetes/config << EOF
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=2"
    EOF
    
    cat > /etc/kubernetes/kubelet << EOF
    KUBELET_HOSTNAME="--hostname-override=192.168.1.2"
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
    KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
    KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
    EOF
    
    cat > /etc/kubernetes/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.1.2
    port: 10250
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local.
    hairpinMode: promiscuous-bridge
    serializeImagePulls: false
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
      anonymous:
        enabled: false
      webhook:
        enabled: false
    EOF
    • Start kubelet
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    systemctl status kubelet  
  • On node2

    • Download the node tarball and CNI plugins
    cd /root
    wget https://dl.k8s.io/v1.11.1/kubernetes-node-linux-amd64.tar.gz
    wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
    
    tar -xf kubernetes-node-linux-amd64.tar.gz -C /usr/local/
    mv /usr/local/kubernetes /usr/local/kubernetes-v1.11
    ln -s kubernetes-v1.11 /usr/local/kubernetes
    mkdir /opt/cni/bin -p
    tar -xf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
    • Configure and start kubelet
    #systemd unit file
    
    cat > /etc/systemd/system/kubelet.service << EOF
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/local/kubernetes/node/bin/kubelet \\
              \$KUBE_LOGTOSTDERR \\
              \$KUBE_LOG_LEVEL \\
              \$KUBELET_CONFIG \\
              \$KUBELET_HOSTNAME \\
              \$KUBELET_POD_INFRA_CONTAINER \\
              \$KUBELET_ARGS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    cat > /etc/kubernetes/config << EOF
    KUBE_LOGTOSTDERR="--logtostderr=true"
    KUBE_LOG_LEVEL="--v=2"
    EOF
    
    cat > /etc/kubernetes/kubelet << EOF
    KUBELET_HOSTNAME="--hostname-override=192.168.1.3"
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
    KUBELET_CONFIG="--config=/etc/kubernetes/kubelet-config.yml"
    KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cert-dir=/etc/kubernetes/pki --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d"
    EOF
    
    cat > /etc/kubernetes/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.1.3
    port: 10250
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local.
    hairpinMode: promiscuous-bridge
    serializeImagePulls: false
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.pem
      anonymous:
        enabled: false
      webhook:
        enabled: false
    EOF
    • Start kubelet
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    systemctl status kubelet  
  • Approve the node CSRs to join each node to the cluster
#Run on the master node

kubectl get csr

#Approve the pending CSRs and admit the nodes into the cluster

kubectl get csr | awk '/node/{print $1}' | xargs kubectl certificate approve

###Example of approving a single CSR:
    kubectl certificate approve node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

#List the nodes
#They are NotReady at this point because the network has not been configured yet

kubectl get nodes

[root@master ~]#kubectl get nodes   
NAME          STATUS     ROLES     AGE       VERSION
192.168.1.1   NotReady   <none>    6s        v1.11.1
192.168.1.2   NotReady   <none>    7s        v1.11.1
192.168.1.3   NotReady   <none>    7s        v1.11.1

# On the nodes, check the files generated by the bootstrap process

ls -l /etc/kubernetes/kubelet.conf
ls -l /etc/kubernetes/pki/kubelet*

6. Configuring kube-proxy

- kube-proxy must be configured on every node!!!

  • On the master node

    • Install conntrack-tools
    yum install -y conntrack-tools
    • Unit file
    cat > /etc/systemd/system/kube-proxy.service << EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/local/kubernetes/server/bin/kube-proxy \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBECONFIG \\
          \$KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    #Enabling IPVS essentially means setting kube-proxy's --proxy-mode option to ipvs
    #--masquerade-all must also be enabled so that iptables assists IPVS
    
    cat > /etc/kubernetes/proxy << EOF
    KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-proxy.conf"
    KUBE_PROXY_ARGS="--proxy-mode=ipvs  --masquerade-all=true --cluster-cidr=10.0.0.0/8"
    EOF
    • Start
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
    systemctl status kube-proxy
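
    #Optional: confirm kube-proxy really started in IPVS mode; if the IPVS kernel modules
    #are missing it silently falls back to iptables (exact log wording varies by version)
    journalctl -u kube-proxy | grep -i proxier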
  • On all the node machines

    • Install conntrack-tools
    yum install -y conntrack-tools
    • Unit file
    cat > /etc/systemd/system/kube-proxy.service << EOF
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/local/kubernetes/node/bin/kube-proxy \\
          \$KUBE_LOGTOSTDERR \\
          \$KUBE_LOG_LEVEL \\
          \$KUBECONFIG \\
          \$KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    #Enabling IPVS essentially means setting kube-proxy's --proxy-mode option to ipvs
    #--masquerade-all must also be enabled so that iptables assists IPVS
    
    cat > /etc/kubernetes/proxy << EOF
    KUBECONFIG="--kubeconfig=/etc/kubernetes/kube-proxy.conf"
    KUBE_PROXY_ARGS="--proxy-mode=ipvs --masquerade-all=true --cluster-cidr=10.0.0.0/8"
    EOF
    • Start
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
    systemctl status kube-proxy

7. Setting Cluster Roles

Run on the master node

  • Label 192.168.1.1 as master
kubectl label nodes 192.168.1.1 node-role.kubernetes.io/master=
  • Label 192.168.1.2 and 192.168.1.3 as nodes
kubectl label nodes 192.168.1.2 node-role.kubernetes.io/node=
kubectl label nodes 192.168.1.3 node-role.kubernetes.io/node=
  • Taint the master so it normally does not accept workloads
kubectl taint nodes 192.168.1.1 node-role.kubernetes.io/master=true:NoSchedule
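
#Optional check: the NoSchedule taint should now appear on the master
kubectl describe node 192.168.1.1 | grep -i taint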
  • List the nodes
#They are still NotReady
#The ROLES column now shows master and node

kubectl get node

NAME          STATUS     ROLES     AGE       VERSION
192.168.1.1   NotReady   master    1m        v1.11.1
192.168.1.2   NotReady   node      1m        v1.11.1
192.168.1.3   NotReady   node      1m        v1.11.1

8. Configuring the Network

  • Pick one of the two networks below:

    • Option A: flannel
    cd /root/
    mkdir flannel
    cd flannel
    wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
    
    sed -ri 's#("Network": ")10.244.0.0/16#\110.0.0.0/8#' kube-flannel.yml
    #Change the network CIDR in kube-flannel.yml to the one we need
    
    kubectl apply -f .
    • Option B: canal
    cd /root/
    mkdir canal
    cd canal
    
    wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
    wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml
    
    sed -ri 's#("Network": ")10.244.0.0/16#\110.0.0.0/8#' canal.yaml
    #Change the network CIDR in canal.yaml to the one we need
    
    kubectl apply -f .
  • Check that the network pods are in the Running state
kubectl get -n kube-system pod -o wide  

[root@master ~]# kubectl get -n kube-system pod -o wide     
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
canal-74zhp   3/3       Running   0          7m        192.168.1.3   192.168.1.3
canal-cmz2p   3/3       Running   0          7m        192.168.1.1   192.168.1.1
canal-mkcg2   3/3       Running   0          7m        192.168.1.2   192.168.1.2
  • Check that every node is Ready
kubectl get node 

[root@master ~]# 
NAME          STATUS    ROLES     AGE       VERSION
192.168.1.1   Ready     master    5h        v1.11.1
192.168.1.2   Ready     node      5h        v1.11.1
192.168.1.3   Ready     node      5h        v1.11.1

9. Configuring CoreDNS

#10.96.0.10 is the cluster DNS address configured in the kubelet
#Install CoreDNS

cd /root && mkdir coredns && cd coredns
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
chmod +x deploy.sh
./deploy.sh -i 10.96.0.10 > coredns.yml
kubectl apply -f coredns.yml

#Check

kubectl get svc,pods -n kube-system

[root@master coredns]# kubectl get svc,pods -n kube-system
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   2m

NAME                           READY     STATUS    RESTARTS   AGE
pod/canal-5wkkd                3/3       Running   0          17h
pod/canal-6mhhz                3/3       Running   0          17h
pod/canal-k7ccs                3/3       Running   2          17h
pod/coredns-6975654877-jpqg4   1/1       Running   0          2m
pod/coredns-6975654877-lgz9n   1/1       Running   0          2m

10. Testing

  • Create an nginx application to verify that workloads and DNS work

cd /root && mkdir nginx && cd nginx

cat << EOF > nginx.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - port: 80
    nodePort: 31000
    name: nginx-port
    targetPort: 80
    protocol: TCP

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF     
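
#Apply the manifest and confirm the Service and Deployment come up
kubectl apply -f nginx.yaml
kubectl get pods,svc -o wide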
  • Launch a pod to test DNS
kubectl run curl --image=radial/busyboxplus:curl -i --tty
nslookup kubernetes
nslookup nginx
curl nginx
exit

[ root@curl-87b54756-qf7l9:/ ]$ nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-87b54756-qf7l9:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.105.93.85 nginx.default.svc.cluster.local
[ root@curl-87b54756-qf7l9:/ ]$ curl nginx
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
...

[ root@curl-87b54756-qf7l9:/ ]$ exit
Session ended, resume using 'kubectl attach curl-87b54756-qf7l9 -c curl -i -t' command when the pod is running
  • On an etcd node, run curl nodeIp:31000 to verify that nginx is reachable from outside the cluster
curl 192.168.1.2:31000

[root@node5 ~]# curl 192.168.1.2:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • Install ipvsadm and view the IPVS rules
yum install -y ipvsadm

ipvsadm

[root@master ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  master:https rr
  -> master:sun-sr-https          Masq    1      2          0
TCP  master:domain rr
  -> 10.0.0.3:domain              Masq    1      0          0
  -> 10.0.1.3:domain              Masq    1      0          0
TCP  master:http rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
TCP  localhost:31000 rr
  -> 10.0.0.8:http                Masq    1      0          0
  -> 10.0.1.9:http                Masq    1      0          0
UDP  master:domain rr
  -> 10.0.0.3:domain              Masq    1      0          0
  -> 10.0.1.3:domain              Masq    1      0          0
