Installing a Highly Available k8s Master Cluster

Host          Role        Components
172.18.6.101  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.102  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.103  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.104  K8S Worker  kubelet, cni
172.18.6.105  K8S Worker  kubelet, cni
172.18.6.106  K8S Worker  kubelet, cni

etcd installation

To keep the k8s masters highly available, running the etcd cluster in containers is not recommended: a container may die at any moment, and each etcd node's service is stateful. We therefore deploy etcd from binaries here. In production, deploy an etcd cluster of at least 3 nodes. For the detailed installation steps, refer to "Setting up an etcd Cluster as Local Services".
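Once the cluster is up, it is worth confirming that every member is healthy before continuing. A minimal check, assuming the etcdctl v2 client is installed and etcd listens on the default client port 2379:

# Query cluster health across all three members
etcdctl --endpoints=http://172.18.6.101:2379,http://172.18.6.102:2379,http://172.18.6.103:2379 cluster-health

# List members and their peer/client URLs
etcdctl --endpoints=http://172.18.6.101:2379 member list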


Installing required components and certificates

CA certificate

Create the CA certificate as described in "Certificate Generation in Kubernetes", and place ca-key.pem and ca.pem under /etc/kubernetes/ssl on every node in the k8s cluster.
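A minimal sketch with openssl, assuming a self-signed CA named kube-ca (adjust to match the referenced guide):

# Generate the CA key and a long-lived self-signed CA certificate
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Copy both files to /etc/kubernetes/ssl on every node, e.g.:
scp ca.pem ca-key.pem root@172.18.6.101:/etc/kubernetes/ssl/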


Worker certificate generation

Generate the worker node certificates as described in the worker certificate section of "Certificate Generation in Kubernetes", and place each IP's certificate under /etc/kubernetes/ssl on the corresponding worker node.
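A minimal sketch for a single worker, assuming worker-openssl.cnf takes the node IP from a WORKER_IP environment variable (a common convention in such guides; this is an assumption, adjust to the referenced document):

# Key and CSR for worker 172.18.6.104
openssl genrsa -out worker-key.pem 2048
WORKER_IP=172.18.6.104 openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=worker" -config worker-openssl.cnf

# Sign the CSR with the cluster CA
WORKER_IP=172.18.6.104 openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf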


Installing the kubelet.conf configuration

Create /etc/kubernetes/kubelet.conf with the following content:


apiVersion: v1

kind: Config

clusters:

- name: local

  cluster:

    server: https://[load balancer IP]:[apiserver port]

    certificate-authority: /etc/kubernetes/ssl/ca.pem

users:

- name: kubelet

  user:

    client-certificate: /etc/kubernetes/ssl/worker.pem

    client-key: /etc/kubernetes/ssl/worker-key.pem

contexts:

- context:

    cluster: local

    user: kubelet

  name: kubelet-context

current-context: kubelet-context
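A quick sanity check of this kubeconfig once the apiserver and load balancer are up (assuming kubectl is installed on the node):

# Should list the cluster nodes if certificates and the load balancer address are correct
kubectl --kubeconfig=/etc/kubernetes/kubelet.conf get nodes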


Installing the CNI plugins

Download the required CNI binaries from the containernetworking/cni project and place them under /opt/cni/bin on every node in the k8s cluster; a sketch follows.
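A minimal fetch-and-unpack sketch; the release version and tarball name are assumptions, pick whatever matches your cluster:

# Install the CNI plugin binaries on each node
mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/cni/releases/download/v0.5.2/cni-amd64-v0.5.2.tgz \
  | tar -xz -C /opt/cni/bin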


An rpm package for one-step installation will be provided later.


Deploying the kubelet service

Note: an rpm package for one-step installation will be provided later.

Place the kubelet binary of the corresponding version under /usr/bin on every node in the k8s cluster.

Create /etc/systemd/system/kubelet.service with the following content:



# /etc/systemd/system/kubelet.service


[Unit]

Description=kubelet: The Kubernetes Node Agent

Documentation=http://kubernetes.io/docs/


[Service]

Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"

Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

Environment="KUBELET_DNS_ARGS=--cluster-dns=10.100.0.10 --cluster-domain=cluster.local"

Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/shenshouer/pause-amd64:3.0"

ExecStart=

ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS

Restart=always

StartLimitInterval=0

RestartSec=10


[Install]

WantedBy=multi-user.target
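Then reload systemd and bring the service up (standard systemd workflow; the node will stay NotReady until the network plugin is installed):

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

# Follow the logs if the node does not register
journalctl -fu kubelet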


Create the following directory layout:


/etc/kubernetes/

|-- kubelet.conf

|-- manifests

`-- ssl

   |-- ca-key.pem

   |-- ca.pem

   |-- worker.csr

   |-- worker-key.pem

   |-- worker-openssl.cnf

   `-- worker.pem
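The skeleton can be created up front before dropping the files in:

mkdir -p /etc/kubernetes/manifests /etc/kubernetes/ssl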


Installing the master components

Configuring load balancing

Configure LVS with the VIP 172.18.6.254 pointing at the backends 172.18.6.101, 172.18.6.102 and 172.18.6.103. If you want something simpler, nginx can provide TCP layer-4 load balancing instead.
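A minimal LVS sketch using ipvsadm in NAT mode, assuming the apiservers listen on port 6443 (adjust the scheduler and forwarding mode to your network):

# Virtual service on the VIP, round-robin scheduling
ipvsadm -A -t 172.18.6.254:6443 -s rr

# Real servers: the three masters
ipvsadm -a -t 172.18.6.254:6443 -r 172.18.6.101:6443 -m
ipvsadm -a -t 172.18.6.254:6443 -r 172.18.6.102:6443 -m
ipvsadm -a -t 172.18.6.254:6443 -r 172.18.6.103:6443 -m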


Certificate generation

openssl.cnf content:


[req]

req_extensions = v3_req

distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]

basicConstraints = CA:FALSE

keyUsage = nonRepudiation, digitalSignature, keyEncipherment

subjectAltName = @alt_names

[alt_names]

DNS.1 = kubernetes

DNS.2 = kubernetes.default

DNS.3 = kubernetes.default.svc

DNS.4 = kubernetes.default.svc.cluster.local

DNS.5 = test.example.com.cn

IP.1 = 10.96.0.1

IP.2 = 172.18.6.101

IP.3 = 172.18.6.102

IP.4 = 172.18.6.103

IP.5 = 172.18.6.254


# IPs of the three masters

IP.2 = 172.18.6.101

IP.3 = 172.18.6.102

IP.4 = 172.18.6.103

# VIP of the LVS load balancer

IP.5 = 172.18.6.254

# Load-balancer domain that may be used

DNS.5 = test.example.com.cn


For the detailed steps, see the "master certificate generation" and "worker certificate generation" sections of "Certificate Generation in Kubernetes". Place the generated certificates at the corresponding paths on all three master nodes.
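A minimal signing sketch against the openssl.cnf above, using the file names the manifests below expect (apiserver.pem, apiserver-key.pem); adjust if the referenced guide differs:

# Key, CSR and CA-signed certificate for the apiserver
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf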


Installing the control-plane components

Place the following three files under /etc/kubernetes/manifests on each master node:


kube-apiserver.manifest:

# /etc/kubernetes/manifests/kube-apiserver.manifest

{

  "kind": "Pod",

  "apiVersion": "v1",

  "metadata": {

    "name": "kube-apiserver",

    "namespace": "kube-system",

    "creationTimestamp": null,

    "labels": {

      "component": "kube-apiserver",

      "tier": "control-plane"

    }

  },

  "spec": {

    "volumes": [

      {

        "name": "k8s",

        "hostPath": {

          "path": "/etc/kubernetes"

        }

      },

      {

        "name": "certs",

        "hostPath": {

          "path": "/etc/ssl/certs"

        }

      }

    ],

    "containers": [

      {

        "name": "kube-apiserver",

        "image": "registry.aliyuncs.com.cn/shenshouer/kube-apiserver:v1.5.2",

        "command": [

          "kube-apiserver",

          "--insecure-bind-address=127.0.0.1",

          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",

          "--service-cluster-ip-range=10.96.0.0/12",

          "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem",

          "--client-ca-file=/etc/kubernetes/ssl/ca.pem",

          "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem",

          "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",

          "--secure-port=6443",

          "--allow-privileged",

          "--advertise-address=[當前Master節點IP]",

          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",

          "--anonymous-auth=false",

          "--etcd-servers=http://127.0.0.1:2379"

        ],

        "resources": {

          "requests": {

            "cpu": "250m"

          }

        },

        "volumeMounts": [

          {

            "name": "k8s",

            "readOnly": true,

            "mountPath": "/etc/kubernetes/"

          },

          {

            "name": "certs",

            "mountPath": "/etc/ssl/certs"

          }

        ],

        "livenessProbe": {

          "httpGet": {

            "path": "/healthz",

            "port": 8080,

            "host": "127.0.0.1"

          },

          "initialDelaySeconds": 15,

          "timeoutSeconds": 15,

          "failureThreshold": 8

        }

      }

    ],

    "hostNetwork": true

  },

  "status": {}

}


kube-controller-manager.manifest:


{

 "kind": "Pod",

 "apiVersion": "v1",

 "metadata": {

   "name": "kube-controller-manager",

   "namespace": "kube-system",

   "creationTimestamp": null,

   "labels": {

     "component": "kube-controller-manager",

     "tier": "control-plane"

   }

 },

 "spec": {

   "volumes": [

     {

       "name": "k8s",

       "hostPath": {

         "path": "/etc/kubernetes"

       }

     },

     {

       "name": "certs",

       "hostPath": {

         "path": "/etc/ssl/certs"

       }

     }

   ],

   "containers": [

     {

       "name": "kube-controller-manager",

       "image": "registry.aliyuncs.com/shenshouer/kube-controller-manager:v1.5.2",

       "command": [

         "kube-controller-manager",

         "--address=127.0.0.1",

         "--leader-elect",

         "--master=127.0.0.1:8080",

         "--cluster-name=kubernetes",

         "--root-ca-file=/etc/kubernetes/ssl/ca.pem",

         "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",

         "--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem",

         "--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem",

         "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap",

         "--allocate-node-cidrs=true",

         "--cluster-cidr=10.244.0.0/16"

       ],

       "resources": {

         "requests": {

           "cpu": "200m"

         }

       },

       "volumeMounts": [

         {

           "name": "k8s",

           "readOnly": true,

           "mountPath": "/etc/kubernetes/"

         },

         {

           "name": "certs",

           "mountPath": "/etc/ssl/certs"

         }

       ],

       "livenessProbe": {

         "httpGet": {

           "path": "/healthz",

           "port": 10252,

           "host": "127.0.0.1"

         },

         "initialDelaySeconds": 15,

         "timeoutSeconds": 15,

         "failureThreshold": 8

       }

     }

   ],

   "hostNetwork": true

 },

 "status": {}

}



kube-scheduler.manifest:


{

 "kind": "Pod",

 "apiVersion": "v1",

 "metadata": {

   "name": "kube-scheduler",

   "namespace": "kube-system",

   "creationTimestamp": null,

   "labels": {

     "component": "kube-scheduler",

     "tier": "control-plane"

   }

 },

 "spec": {

   "containers": [

     {

       "name": "kube-scheduler",

       "image": "registry.aliyuncs.com/shenshouer/kube-scheduler:v1.5.2",

       "command": [

         "kube-scheduler",

         "--address=127.0.0.1",

         "--leader-elect",

         "--master=127.0.0.1:8080"

       ],

       "resources": {

         "requests": {

           "cpu": "100m"

         }

       },

       "livenessProbe": {

         "httpGet": {

           "path": "/healthz",

           "port": 10251,

           "host": "127.0.0.1"

         },

         "initialDelaySeconds": 15,

         "timeoutSeconds": 15,

         "failureThreshold": 8

       }

     }

   ],

   "hostNetwork": true

 },

 "status": {}


Installing other components

Installing kube-proxy

On any master, run kubectl create -f kube-proxy-ds.yaml, where kube-proxy-ds.yaml contains:


apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  labels:

    component: kube-proxy

    k8s-app: kube-proxy

    kubernetes.io/cluster-service: "true"

    name: kube-proxy

    tier: node

  name: kube-proxy

  namespace: kube-system

spec:

  selector:

    matchLabels:

      component: kube-proxy

      k8s-app: kube-proxy

      kubernetes.io/cluster-service: "true"

      name: kube-proxy

      tier: node

  template:

    metadata:

      labels:

        component: kube-proxy

        k8s-app: kube-proxy

        kubernetes.io/cluster-service: "true"

        name: kube-proxy

        tier: node

    spec:

      containers:

      - command:

        - kube-proxy

        - --kubeconfig=/run/kubeconfig

        - --cluster-cidr=10.244.0.0/16

        image: registry.aliyuncs.com/shenshouer/kube-proxy:v1.5.2

        imagePullPolicy: IfNotPresent

        name: kube-proxy

        resources: {}

        securityContext:

          privileged: true

        terminationMessagePath: /dev/termination-log

        volumeMounts:

        - mountPath: /var/run/dbus

          name: dbus

        - mountPath: /run/kubeconfig

          name: kubeconfig

        - mountPath: /etc/kubernetes/ssl

          name: ssl

      dnsPolicy: ClusterFirst

      hostNetwork: true

      restartPolicy: Always

      securityContext: {}

      terminationGracePeriodSeconds: 30

      volumes:

      - hostPath:

          path: /etc/kubernetes/kubelet.conf

        name: kubeconfig

      - hostPath:

          path: /var/run/dbus

        name: dbus

      - hostPath:

          path: /etc/kubernetes/ssl

        name: ssl
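A quick check that the DaemonSet has landed on every node:

kubectl get ds kube-proxy -n kube-system
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide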


Installing the network component

On any master, run kubectl apply -f kube-flannel.yaml, where kube-flannel.yaml is given below. Note: if you are running in virtual machines started by vagrant, change the flanneld start-up arguments so that --iface points at the actual communication NIC.


---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: flannel

  namespace: kube-system

---

kind: ConfigMap

apiVersion: v1

metadata:

  namespace: kube-system

  name: kube-flannel-cfg

  labels:

    tier: node

    app: flannel

data:

  cni-conf.json: |

    {

      "name": "cbr0",

      "type": "flannel",

      "delegate": {

        "ipMasq": true,

        "bridge": "cbr0",

        "hairpinMode": true,

        "forceAddress": true,

        "isDefaultGateway": true

      }

    }

  net-conf.json: |

    {

      "Network": "10.244.0.0/16",

      "Backend": {

        "Type": "vxlan"

      }

    }

---

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  namespace: kube-system

  name: kube-flannel-ds

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true

      nodeSelector:

        beta.kubernetes.io/arch: amd64

      serviceAccountName: flannel

      containers:

      - name: kube-flannel

        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0

        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth0" ]

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      - name: install-cni

        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0

        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg
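After applying, a flannel pod should run on every node, and the install-cni container writes the delegate config out to the host:

kubectl get pods -n kube-system -l app=flannel -o wide

# Written by the install-cni container
cat /etc/cni/net.d/10-flannel.conf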


Deploying DNS

On any master, run kubectl create -f skydns.yaml, where skydns.yaml contains:


apiVersion: v1

kind: Service

metadata:

  name: kube-dns

  namespace: kube-system

  labels:

    k8s-app: kube-dns

    kubernetes.io/cluster-service: "true"

    kubernetes.io/name: "KubeDNS"

spec:

  selector:

    k8s-app: kube-dns

  clusterIP: 10.100.0.10

  ports:

  - name: dns

    port: 53

    protocol: UDP

  - name: dns-tcp

    port: 53

    protocol: TCP


---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: kube-dns

  namespace: kube-system

  labels:

    k8s-app: kube-dns

    kubernetes.io/cluster-service: "true"

spec:

  # replicas: not specified here:

  # 1. In order to make Addon Manager do not reconcile this replicas parameter.

  # 2. Default is 1.

  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.

  strategy:

    rollingUpdate:

      maxSurge: 10%

      maxUnavailable: 0

  selector:

    matchLabels:

      k8s-app: kube-dns

  template:

    metadata:

      labels:

        k8s-app: kube-dns

      annotations:

        scheduler.alpha.kubernetes.io/critical-pod: ''

        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'

    spec:

      containers:

      - name: kubedns

        image: registry.aliyuncs.com/shenshouer/kubedns-amd64:1.9

        resources:

          # TODO: Set memory limits when we've profiled the container for large

          # clusters, then set request = limit to keep this container in

          # guaranteed class. Currently, this container falls into the

          # "burstable" category so the kubelet doesn't backoff from restarting it.

          limits:

            memory: 170Mi

          requests:

            cpu: 100m

            memory: 70Mi

        livenessProbe:

          httpGet:

            path: /healthz-kubedns

            port: 8080

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        readinessProbe:

          httpGet:

            path: /readiness

            port: 8081

            scheme: HTTP

          # we poll on pod startup for the Kubernetes master service and

          # only setup the /readiness HTTP server once that's available.

          initialDelaySeconds: 3

          timeoutSeconds: 5

        args:

        - --domain=cluster.local.

        - --dns-port=10053

        - --config-map=kube-dns

        # This should be set to v=2 only after the new image (cut from 1.5) has

        # been released, otherwise we will flood the logs.

        - --v=0

        - --federations=myfederation=federation.test

        env:

        - name: PROMETHEUS_PORT

          value: "10055"

        ports:

        - containerPort: 10053

          name: dns-local

          protocol: UDP

        - containerPort: 10053

          name: dns-tcp-local

          protocol: TCP

        - containerPort: 10055

          name: metrics

          protocol: TCP

      - name: dnsmasq

        image: registry.aliyuncs.com/shenshouer/kube-dnsmasq-amd64:1.4

        livenessProbe:

          httpGet:

            path: /healthz-dnsmasq

            port: 8080

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        args:

        - --cache-size=1000

        - --no-resolv

        - --server=127.0.0.1#10053

        - --log-facility=-

        ports:

        - containerPort: 53

          name: dns

          protocol: UDP

        - containerPort: 53

          name: dns-tcp

          protocol: TCP

        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details

        resources:

          requests:

            cpu: 150m

            memory: 10Mi

      - name: dnsmasq-metrics

        image: registry.aliyuncs.com/shenshouer/dnsmasq-metrics-amd64:1.0

        livenessProbe:

          httpGet:

            path: /metrics

            port: 10054

            scheme: HTTP

          initialDelaySeconds: 60

          timeoutSeconds: 5

          successThreshold: 1

          failureThreshold: 5

        args:

        - --v=2

        - --logtostderr

        ports:

        - containerPort: 10054

          name: metrics

          protocol: TCP

        resources:

          requests:

            memory: 10Mi

      - name: healthz

        image: registry.aliyuncs.com/shenshouer/exechealthz-amd64:1.2

        resources:

          limits:

            memory: 50Mi

          requests:

            cpu: 10m

            # Note that this container shouldn't really need 50Mi of memory. The

            # limits are set higher than expected pending investigation on #29688.

            # The extra memory was stolen from the kubedns container to keep the

            # net memory requested by the pod constant.

            memory: 50Mi

        args:

        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null

        - --url=/healthz-dnsmasq

        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null

        - --url=/healthz-kubedns

        - --port=8080

        - --quiet

        ports:

        - containerPort: 8080

          protocol: TCP

      dnsPolicy: Default  # Don't use cluster DNS.
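A simple in-cluster resolution test using a throwaway busybox pod (a sketch):

# Should resolve kubernetes.default to the service IP 10.96.0.1
kubectl run -i -t dns-test --image=busybox --restart=Never -- nslookup kubernetes.default
kubectl delete pod dns-test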


Installing the nodes

Install Docker.

Create the /etc/kubernetes/ directory:


|-- kubelet.conf

|-- manifests

`-- ssl

  |-- ca-key.pem

  |-- ca.pem

  |-- ca.srl

  |-- worker.csr

  |-- worker-key.pem

  |-- worker-openssl.cnf

  `-- worker.pem


Create the /etc/kubernetes/kubelet.conf configuration; see the kubelet.conf configuration section above.

Create /etc/kubernetes/ssl; for certificate generation see the worker certificate section above.

Create /etc/kubernetes/manifests.

Create /opt/cni/bin; for CNI installation see the CNI installation steps above.

Install kubelet as described in the kubelet deployment section above, then run:

systemctl enable kubelet && systemctl restart kubelet && journalctl -fu kubelet
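Once kubelet is running on the workers, they should register with the cluster through the load balancer. A quick check from any master:

# All six nodes should eventually report Ready
kubectl get nodes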

This article was reposted from CSDN: Installing a Highly Available k8s Master Cluster.
