k8s cluster deployment v1.15 practice 12: deploying kube-proxy on the worker nodes

Deploying kube-proxy on the worker nodes

Note: the kube-proxy binary has already been downloaded and distributed to all nodes in an earlier part of this series.

1. Create the kube-proxy certificate and key

Create the certificate signing request

[root@k8s-node1 kube-proxy]# cat kube-proxy-csr.json 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "SZ",
      "L": "SZ",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
[root@k8s-node1 kube-proxy]#

CN: sets the certificate's User to system:kube-proxy. The predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call the proxy-related APIs of the kube-apiserver. The certificate is only used by kube-proxy as a client certificate, so the hosts field is left empty.
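
As an optional check that is not part of the original steps, you can inspect this predefined binding from any host with admin kubectl access; in the cluster it exists as a ClusterRoleBinding of the same name:

# Optional: show the predefined binding that grants the system:kube-proxy
# user the node-proxier permissions.
kubectl describe clusterrolebinding system:node-proxier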

Generate the certificate and key

[root@k8s-node1 kube-proxy]#  cfssl gencert -ca=/etc/kubernetes/cert/ca.pem -ca-key=/etc/kubernetes/cert/ca-key.pem -config=/etc/kubernetes/cert/ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/11/05 21:03:03 [INFO] generate received request
2019/11/05 21:03:03 [INFO] received CSR
2019/11/05 21:03:03 [INFO] generating key: rsa-2048
2019/11/05 21:03:04 [INFO] encoded CSR
2019/11/05 21:03:04 [INFO] signed certificate with serial number 257083627823849004077905552203274968448941860993
2019/11/05 21:03:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node1 kube-proxy]# ls
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
[root@k8s-node1 kube-proxy]#
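
Optionally, verify that the subject of the generated certificate matches the signing request (a quick check, not in the original article):

# The CN should be system:kube-proxy and O should be k8s.
openssl x509 -noout -subject -dates -in kube-proxy.pem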

2. Create and distribute the kubeconfig file

Create the kubeconfig file

[root@k8s-node1 kube-proxy]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/cert/ca.pem --embed-certs=true --server=https://192.168.174.127:8443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-node1 kube-proxy]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-node1 kube-proxy]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-node1 kube-proxy]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
[root@k8s-node1 kube-proxy]# ls |grep config
kube-proxy.kubeconfig
[root@k8s-node1 kube-proxy]#
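
Before distributing the file, you can optionally confirm the embedded cluster, user and context (not part of the original steps):

# The embedded certificate data is redacted in the output.
kubectl config view --kubeconfig=kube-proxy.kubeconfig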

Distribute the kubeconfig file

[root@k8s-node1 kube-proxy]# cp kube-proxy.kubeconfig /etc/kubernetes/
[root@k8s-node1 kube-proxy]# scp kube-proxy.kubeconfig root@k8s-node2:/etc/kubernetes/
kube-proxy.kubeconfig                                                                                        100% 6219     5.4MB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.kubeconfig root@k8s-node3:/etc/kubernetes/
kube-proxy.kubeconfig 

3. Create the kube-proxy config file

Template

[root@k8s-node1 kube-proxy]# cat kube-proxy.config.yaml.template 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: ${CLUSTER_CIDR}
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
[root@k8s-node1 kube-proxy]#

bindAddress: the address kube-proxy listens on.

clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver.

clusterCIDR: kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is specified will kube-proxy SNAT requests that access Service IPs.

hostnameOverride: must match the value used by the kubelet, otherwise kube-proxy will not find this Node after it starts and will not create any ipvs rules.

mode: use the ipvs mode.
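
Because mode is set to ipvs, the ip_vs kernel modules and the ipset/ipvsadm tools need to be available on every node. The article assumes this prerequisite (it may have been handled in an earlier part); a minimal check looks like this:

# Load the ipvs modules (on kernels >= 4.19 use nf_conntrack instead of
# nf_conntrack_ipv4) and confirm they are present.
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe $mod
done
lsmod | grep -e ip_vs -e nf_conntrack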

Fill in the variables

[root@k8s-node1 kube-proxy]# echo ${CLUSTER_CIDR}
172.30.0.0/16
[root@k8s-node1 kube-proxy]# cat kube-proxy.config.yaml.template 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"
[root@k8s-node1 kube-proxy]#

Distribute the file

[root@k8s-node1 kube-proxy]# cp kube-proxy.config.yaml.template /etc/kubernetes/kube-proxy.config.yaml
[root@k8s-node1 kube-proxy]# scp kube-proxy.config.yaml.template root@k8s-node2:/etc/kubernetes/kube-proxy.config.yaml
kube-proxy.config.yaml.template                                                                              100%  315   283.0KB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.config.yaml.template root@k8s-node3:/etc/kubernetes/kube-proxy.config.yaml
kube-proxy.config.yaml.template                                                                              100%  315   326.6KB/s   00:00    
[root@k8s-node1 kube-proxy]#

Replace ##NODE_IP## and ##NODE_NAME## on every node with that node's own IP address and hostname, for example on k8s-node1:

sed -i -e 's/##NODE_IP##/192\.168\.174\.128/g'  -e 's/##NODE_NAME##/k8s\-node1/g' /etc/kubernetes/kube-proxy.config.yaml
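
The same substitution has to be done on k8s-node2 and k8s-node3. The IPs below are placeholders, not values from the article; substitute each node's real address:

# Hypothetical IPs -- replace with the actual node addresses.
ssh root@k8s-node2 "sed -i -e 's/##NODE_IP##/192.168.174.129/g' -e 's/##NODE_NAME##/k8s-node2/g' /etc/kubernetes/kube-proxy.config.yaml"
ssh root@k8s-node3 "sed -i -e 's/##NODE_IP##/192.168.174.130/g' -e 's/##NODE_NAME##/k8s-node3/g' /etc/kubernetes/kube-proxy.config.yaml"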

Create and distribute the kube-proxy systemd unit file

[root@k8s-node1 kube-proxy]# cat kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Manually create the directory referenced by WorkingDirectory=/var/lib/kube-proxy on every node

[root@k8s-node1 kube-proxy]# mkdir -p /var/lib/kube-proxy
[root@k8s-node1 kube-proxy]# ssh root@k8s-node2 "mkdir -p /var/lib/kube-proxy"
[root@k8s-node1 kube-proxy]# ssh root@k8s-node3 "mkdir -p /var/lib/kube-proxy"
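
The unit file also points --log-dir at /var/log/kubernetes; if that directory was not already created in an earlier part of this series, create it on every node as well (a sketch):

mkdir -p /var/log/kubernetes
ssh root@k8s-node2 "mkdir -p /var/log/kubernetes"
ssh root@k8s-node3 "mkdir -p /var/log/kubernetes"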

Distribute the unit file

[root@k8s-node1 kube-proxy]# cp kube-proxy.service /etc/systemd/system
[root@k8s-node1 kube-proxy]# scp kube-proxy.service root@k8s-node2:/etc/systemd/system
kube-proxy.service                                                                                           100%  450   525.1KB/s   00:00    
[root@k8s-node1 kube-proxy]# scp kube-proxy.service root@k8s-node3:/etc/systemd/system
kube-proxy.service 

Add execute permission

chmod +x -R /etc/systemd/system

4. Start the service

systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
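
The same start sequence has to be run on the other nodes as well; a sketch, run from k8s-node1:

# Start kube-proxy on the remaining nodes.
for node in k8s-node2 k8s-node3; do
  ssh root@$node "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
done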

The service fails to start with an error

[root@k8s-node1 kubernetes]# cat kube-proxy.ERROR 
Log file created at: 2019/11/05 21:56:48
Running on machine: k8s-node1
Binary: Built with gc go1.12.10 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F1105 21:56:48.913044   30996 server.go:449] unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined

The cause is a formatting problem in the config file; compare with the reference format below.

[root@k8s-master1 kubernetes]# cat /etc/kubernetes/kube-proxy.config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.211.128
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig  
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.211.128:10256
hostnameOverride: k8s-master1
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.211.128:10249
mode: "ipvs"
[root@k8s-master1 kubernetes]#
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig

Note the leading spaces before this kubeconfig line: it must be indented under clientConnection. Without the indentation kube-proxy reads an empty kubeconfig path, falls back to in-cluster configuration, and fails with the error above.
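
Since the same template was distributed to every node, it is worth checking the indentation everywhere (a quick sketch using the node names from above):

# The kubeconfig line should be printed with two leading spaces.
for node in k8s-node1 k8s-node2 k8s-node3; do
  ssh root@$node "grep -n '^  kubeconfig:' /etc/kubernetes/kube-proxy.config.yaml || echo 'missing indentation'"
done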

Start the service again and it comes up:

[root@k8s-node1 kubernetes]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-11-05 21:59:54 EST; 8s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 32608 (kube-proxy)
    Tasks: 0
   Memory: 10.6M
   CGroup: /system.slice/kube-proxy.service
           ‣ 32608 /opt/k8s/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.config.yaml --alsologtostderr=true --logtostderr=false --log-...

Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931228   32608 config.go:187] Starting service config controller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931248   32608 controller_utils.go:1029] Waiting for caches to sync for s...troller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931422   32608 config.go:96] Starting endpoints config controller
Nov 05 21:59:54 k8s-node1 kube-proxy[32608]: I1105 21:59:54.931431   32608 controller_utils.go:1029] Waiting for caches to sync for e...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032212   32608 controller_utils.go:1036] Caches are synced for endpoints ...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032320   32608 proxier.go:748] Not syncing ipvs rules until Services and ... master
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032338   32608 controller_utils.go:1036] Caches are synced for service co...troller
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032376   32608 service.go:332] Adding new service port "default/httpd-svc...:80/TCP
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.032393   32608 service.go:332] Adding new service port "default/kubernete...443/TCP
Nov 05 21:59:55 k8s-node1 kube-proxy[32608]: I1105 21:59:55.075261   32608 proxier.go:1797] Opened local port "nodePort for default/h...36/tcp)
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 kubernetes]#
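
As a final check (not shown in the original article), you can confirm on each node the ports kube-proxy listens on and the ipvs rules it has created:

# 10249 is the metrics port and 10256 the healthz port from the config file.
ss -lntp | grep kube-proxy
# List the ipvs virtual servers created for the Service IPs.
ipvsadm -ln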