Installing k8s from binaries - 0.7 Installing kubelet, kube-proxy, and CNI plugins on the node



Create the node-related directories

mkdir -p /data/k8s/{kubelet,kube-proxy,cni,bin,cert}
mkdir -p /data/k8s/cni/net.d/

Download the kubelet and kube-proxy binaries and the basic CNI plugins

[root@node bin]# ls
bridge  host-local  kubelet  kube-proxy  loopback
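
For reference, a sketch of how these binaries can be fetched and unpacked. The Kubernetes version v1.15.6 matches the kubectl get node output at the end of this article, while the CNI plugins release v0.8.5 is only an assumed version; adjust both to whatever you actually use:

cd /tmp
wget https://dl.k8s.io/v1.15.6/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/{kubelet,kube-proxy} /data/k8s/bin/

mkdir -p /tmp/cni-plugins
wget https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz
tar xf cni-plugins-linux-amd64-v0.8.5.tgz -C /tmp/cni-plugins
cp /tmp/cni-plugins/{bridge,host-local,loopback} /data/k8s/bin/
chmod +x /data/k8s/bin/*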

Pull the CA files from the master.

[root@node cert]# scp 192.168.100.59:/data/k8s/cert/{ca.pem,ca-key.pem,ca-config.json} /data/k8s/cert/
[root@node cert]# ls
ca-config.json  ca-key.pem  ca.pem




Prepare the CNI configuration file

vim /data/k8s/cni/net.d/10-default.conf

{
    "name": "mynet",
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "mynet0",
    "isDefaultGateway": true,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "{{ pod_cni }}"
    }
}

Note: {{ pod_cni }} is the subnet available to pods; here I set it to 10.244.0.0/16.
The address can also be set per host to 10.244.(last octet of the host IP).0/24, which in my case is 10.244.60.0/24, so that a pod's IP immediately tells you which host the pod is running on.
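
With that per-host scheme, the ipam block on this node (192.168.100.60) would therefore read:

    "ipam": {
        "type": "host-local",
        "subnet": "10.244.60.0/24"
    }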



Enable IPVS

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
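
Note that on kernel 4.19 and later the nf_conntrack_ipv4 module was merged into nf_conntrack, so on newer kernels the last modprobe line (and the matching lsmod filter) would change to:

modprobe -- nf_conntrack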




kubelet configuration

Below, the kubelet.kubeconfig file is generated directly on the master and then copied to the corresponding node.

Prepare the kubelet certificate signing request

Run the following on the master:
mkdir -p /data/k8s/node/100.60
vim /data/k8s/node/100.60/kubelet-csr.json

{
  "CN": "system:node:192.168.100.60",
  "hosts": [
    "127.0.0.1",
    "192.168.100.60"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SiChuan",
      "L": "ChengDu",
      "O": "system:nodes",
      "OU": "Lswzw"
    }
  ]
}

Note:

  • The IPs above must be replaced with the node host's IP.
Create the kubelet certificate and private key

cd /data/k8s/node/100.60

cfssl gencert \
  -ca=/data/k8s/cert/ca.pem \
  -ca-key=/data/k8s/cert/ca-key.pem \
  -config=/data/k8s/cert/ca-config.json \
  -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet
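
Optionally confirm that the certificate carries the expected identity; the apiserver identifies the kubelet by the certificate subject, with the CN becoming the username (system:node:...) and the O the group (system:nodes):

openssl x509 -in kubelet.pem -noout -subject
# the subject should contain O=system:nodes and CN=system:node:192.168.100.60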
Create kubelet.kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/data/k8s/cert/ca.pem \
  --embed-certs=true \
  --server={{ KUBE_APISERVER }} \
  --kubeconfig=kubelet.kubeconfig

Note: {{ KUBE_APISERVER }} here is https://192.168.100.59:6443

Set the client authentication parameters
kubectl config set-credentials system:node:{{ node_ip }} \
  --client-certificate=/data/k8s/node/100.60/kubelet.pem \
  --embed-certs=true \
  --client-key=/data/k8s/node/100.60/kubelet-key.pem \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being set up; here it is 192.168.100.60.

Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:node:{{ node_ip }} \
  --kubeconfig=kubelet.kubeconfig

Note: {{ node_ip }} is the IP of the node being set up; here it is 192.168.100.60.

Select the default context
kubectl config use-context default \
  --kubeconfig=kubelet.kubeconfig
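
As a quick check, the generated kubeconfig can be inspected (certificate data appears redacted in the output):

kubectl config view --kubeconfig=kubelet.kubeconfig
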
Copy the kubelet certificate, key, and kubelet.kubeconfig to the corresponding node
scp kubelet.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet-key.pem 192.168.100.60:/data/k8s/kubelet/
scp kubelet.kubeconfig 192.168.100.60:/data/k8s/kubelet/

Create the user permissions for the corresponding node

This step is important; without it the node cannot create pods.
The name at the end must match the user set in "Set the client authentication parameters" above.
vim node60.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: basic-auth-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:node:192.168.100.60
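
Apply the binding on the master; the manifest has no effect until it is applied:

kubectl apply -f node60.yaml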

... The steps below are performed on the node ...

Create the kubelet configuration file

vim /data/k8s/kubelet/kubelet-config.yaml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: {{ node_ip }}
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /data/k8s/cert/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.44.0.2
clusterDomain: cluster.local.
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 3 
containerLogMaxSize: 10Mi
enforceNodeAllocatable:
- pods
- kube-reserved
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 200Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 40s
hairpinMode: hairpin-veth 
healthzBindAddress: {{ node_ip }}
healthzPort: 10248
httpCheckFrequency: 40s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
kubeReservedCgroup: /system.slice/kubelet.service
kubeReserved: {'cpu':'200m','memory':'500Mi','ephemeral-storage':'1Gi'}
kubeAPIBurst: 100
kubeAPIQPS: 50
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 1m0s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
# disable readOnlyPort 
readOnlyPort: 0
resolvConf: /etc/resolv.conf
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
tlsCertFile: /data/k8s/kubelet/kubelet.pem
tlsPrivateKeyFile: /data/k8s/kubelet/kubelet-key.pem

Note: {{ node_ip }} is the IP of the current node; here it is 192.168.100.60.
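
Before starting the kubelet, it is worth confirming that every file referenced by this configuration is in place (ca.pem was copied to /data/k8s/cert/ earlier):

ls /data/k8s/kubelet/
# kubelet-config.yaml  kubelet.kubeconfig  kubelet-key.pem  kubelet.pem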

Create the kubelet systemd unit file

vim /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
WorkingDirectory=/data/k8s/kubelet
ExecStartPre=/bin/mount -o remount,rw '/sys/fs/cgroup'
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
ExecStartPre=/bin/mkdir -p /sys/fs/cgroup/pids/system.slice/kubelet.service
ExecStart=/data/k8s/bin/kubelet \
  --config=/data/k8s/kubelet/kubelet-config.yaml \
  --cni-bin-dir=/data/k8s/bin \
  --cni-conf-dir=/data/k8s/cni/net.d \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kubelet/kubelet.kubeconfig \
  --network-plugin=cni \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 \
  --root-dir=/data/k8s/kubelet \
  --v=2
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Note: {{ node_name }} is the name that will be shown by kubectl get node; here it is node01.

Start and enable the kubelet service
systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
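
If the kubelet came up correctly, its health endpoint (healthzBindAddress/healthzPort from the configuration above) should answer, and the logs can be followed while troubleshooting:

curl http://192.168.100.60:10248/healthz
journalctl -u kubelet -f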




kube-proxy configuration

Pull the kube-proxy.kubeconfig file from the master

This file was already generated in part 03.

scp 192.168.100.59:/data/k8s/conf/kube-proxy.kubeconfig /data/k8s/kube-proxy/
Create the kube-proxy systemd unit file

vim /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/kube-proxy
ExecStart=/data/k8s/bin/kube-proxy \
  --bind-address={{ node_ip }} \
  --cluster-cidr=10.244.0.0/16 \
  --hostname-override={{ node_name }} \
  --kubeconfig=/data/k8s/kube-proxy/kube-proxy.kubeconfig \
  --logtostderr=true \
  --proxy-mode=ipvs
Restart=always
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note:

  • kube-proxy uses --cluster-cidr to distinguish in-cluster traffic from external traffic; when --cluster-cidr or --masquerade-all is specified, kube-proxy SNATs requests to Service IPs.
  • {{ node_ip }} is the node host IP; here it is 192.168.100.60.
  • {{ node_name }} is the node name to display; here it is node01.
Start and enable the kube-proxy service
systemctl daemon-reload
systemctl start kube-proxy
systemctl status kube-proxy
systemctl enable kube-proxy
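
Because --proxy-mode=ipvs is used, the virtual servers programmed by kube-proxy can be listed to confirm it is working; ipvsadm needs to be installed first (for example with yum install -y ipvsadm):

ipvsadm -Ln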




Verify the services on the master

[root@master conf]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
node01   Ready    <none>   24m   v1.15.6
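
As an optional extra check that the CNI setup works end to end, a throwaway pod can be scheduled and its IP compared against the node's pod subnet; the pod name and image below are only examples:

kubectl run test-nginx --image=nginx --restart=Never
kubectl get pod test-nginx -o wide    # the pod IP should fall inside 10.244.60.0/24
kubectl delete pod test-nginx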