Building a ZooKeeper Cluster and a Kafka Cluster on k8s

I. ZooKeeper Cluster Setup

1. The original plan was to run three replicas under a single Deployment, but that does not work: with one Deployment the YAML for all three replicas is identical, while every node in a ZooKeeper ensemble needs a distinct myid. So three separate workloads are needed (below, one single-replica StatefulSet per node).
2. The cluster nodes need to talk to each other, which requires a Service, but traffic must reach a specific pod directly instead of going through the Service's load balancing. There are two ways to achieve this:

  • 1) Make the Service headless, so the pod can be found directly via the Service name; configuring server.1=zookeeper1:2888:3888;2181 reaches the pod, where zookeeper1 is the Service name.

  • 2) Use the pod's fully qualified DNS name, server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181, to reach the pod (note that Kubernetes only creates this per-pod DNS record when the StatefulSet's governing Service is headless, which is the case for the manifests below).

3. For simplicity no NFS is used; add the volume mounts yourself if you need persistence.
4. For my own reference, a list of what ZooKeeper is typically used for:

  1. Data publish/subscribe (configuration center)
  2. Load balancing
  3. Naming service
  4. Distributed notification/coordination
  5. Cluster management and master election
  6. Distributed locks
  7. Distributed queues
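Once the manifests below are applied, both addressing schemes from point 2 can be verified from a throwaway pod (busybox is an arbitrary image choice):

```shell
# The headless-service name resolves straight to the single pod behind it
kubectl -n kafka run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup zookeeper1

# The per-pod stable DNS name resolves as well
kubectl -n kafka run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup zookeeper1-0.zookeeper1.kafka.svc.cluster.local
```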

Prerequisite: create the namespace kafka; everything we build below lives in it.
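Assuming kubectl is already pointed at the target cluster, the prerequisite is:

```shell
kubectl create namespace kafka
kubectl get namespace kafka   # should show STATUS Active
```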

1.node1


---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  name: zookeeper1
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper1
  serviceName: zookeeper1
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper1
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            # Either form works: the pod's FQDN (the commented server lines
            # below) or the headless service name; each service here fronts
            # exactly one pod
            value: >-
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '1'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  name: zookeeper1
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper1
  type: ClusterIP



2.node2


---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  name: zookeeper2
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper2
  serviceName: zookeeper2
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper2
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            # Either form works: the pod's FQDN (the commented server lines
            # below) or the headless service name; each service here fronts
            # exactly one pod
            value: >-
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '2'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  name: zookeeper2
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper2
  type: ClusterIP



3.node3

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: None
    k8s.eip.work/workload: zookeeper3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  name: zookeeper3
  namespace: kafka
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: zookeeper3
  serviceName: zookeeper3
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: zookeeper3
    spec:
      containers:
        - env:
          - name: ZOO_SERVERS
            # Either form works: the pod's FQDN (the commented server lines
            # below) or the headless service name; each service here fronts
            # exactly one pod
            value: >-
              #server.1=zookeeper1-0.zookeeper1.kafka.svc.cluster.local:2888:3888;2181
              #server.2=zookeeper2-0.zookeeper2.kafka.svc.cluster.local:2888:3888;2181
              #server.3=zookeeper3-0.zookeeper3.kafka.svc.cluster.local:2888:3888;2181
              server.1=zookeeper1:2888:3888;2181
              server.2=zookeeper2:2888:3888;2181
              server.3=zookeeper3:2888:3888;2181
          - name: ZOO_MY_ID
            value: '3'
          image: zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
              name: client
              protocol: TCP
            - containerPort: 2888
              name: server
              protocol: TCP
            - containerPort: 3888
              name: leader-election
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: zookeeper3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  name: zookeeper3
  namespace: kafka
spec:
  clusterIP: None
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: zookeeper3
  type: ClusterIP
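Assuming the three manifests above are saved as zookeeper1.yml, zookeeper2.yml and zookeeper3.yml (file names are illustrative), they can be applied and watched with:

```shell
kubectl apply -f zookeeper1.yml -f zookeeper2.yml -f zookeeper3.yml
# wait until zookeeper1-0, zookeeper2-0 and zookeeper3-0 are all Running
kubectl -n kafka get pods -w
```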

4. Verify the deployment

Mode: follower or Mode: leader means the cluster came up successfully:

root@zookeeper1-0:/apache-zookeeper-3.6.1-bin# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
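To check all three nodes in one pass (exactly one should report leader, the other two follower):

```shell
for i in 1 2 3; do
  kubectl -n kafka exec zookeeper${i}-0 -- zkServer.sh status
done
```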

II. Kafka Cluster Setup

1. broker.id must be set explicitly; otherwise a new one is generated on every start, which causes real trouble: the partition leader may not be found.
2. Note: using the name kafka triggers the error below. It can be worked around by adding a KAFKA_PORT environment variable, or by not using the name kafka:

org.apache.kafka.common.config.ConfigException: Invalid value tcp://10.0.35.234:9092 for configuration port: Not a number of type INT
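The cause is Kubernetes service environment-variable injection: once a Service named kafka exists in the namespace, pods receive KAFKA_PORT=tcp://&lt;clusterIP&gt;:9092, and the wurstmeister image maps every KAFKA_* variable onto a broker config key. A sketch of the override, added alongside the other env entries (the value must match the listener port):

```yaml
# Pin KAFKA_PORT so the Docker-link-style value injected by the
# kafka Service is not interpreted as the broker `port` setting
- name: KAFKA_PORT
  value: '9092'
```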

1.node1

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  name: kafka1
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka1
  serviceName: kafka1
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka1
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31367'
            - name: KAFKA_BROKER_ID
              value: '1'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka1
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  name: kafka1
  namespace: kafka
spec:
  ports:
    - name: zhnz8q
      nodePort: 31367
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka1
  sessionAffinity: None
  type: NodePort




2.node2

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  name: kafka2
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka2
  serviceName: kafka2
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka2
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31368'
            - name: KAFKA_BROKER_ID
              value: '2'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka2
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  name: kafka2
  namespace: kafka
spec:
  ports:
    - nodePort: 31368
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka2
  sessionAffinity: None
  type: NodePort

3.node3

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    k8s.eip.work/ingress: 'false'
    k8s.eip.work/service: NodePort
    k8s.eip.work/workload: kafka3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  name: kafka3
  namespace: kafka
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s.eip.work/layer: cloud
      k8s.eip.work/name: kafka3
  serviceName: kafka3
  template:
    metadata:
      labels:
        k8s.eip.work/layer: cloud
        k8s.eip.work/name: kafka3
    spec:
      containers:
        - env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: 'zookeeper1:2181,zookeeper2:2181,zookeeper3:2181'
            - name: KAFKA_LISTENERS
              value: 'PLAINTEXT://:9092'
            - name: KAFKA_ADVERTISED_LISTENERS
              value: 'PLAINTEXT://192.168.100.16:31369'
            - name: KAFKA_BROKER_ID
              value: '3'
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka
          ports:
            - containerPort: 9092
              protocol: TCP
            - containerPort: 1099
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    k8s.eip.work/workload: kafka3
  labels:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  name: kafka3
  namespace: kafka
spec:
  ports:
    - nodePort: 31369
      port: 9092
      protocol: TCP
      targetPort: 9092
  selector:
    k8s.eip.work/layer: cloud
    k8s.eip.work/name: kafka3
  sessionAffinity: None
  type: NodePort
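Once all three brokers are up, a quick end-to-end check can be run from inside a broker pod; a sketch assuming Kafka 2.x tooling (where kafka-topics.sh still accepts --zookeeper) and an arbitrary topic name test:

```shell
# create a topic replicated across all three brokers
kubectl -n kafka exec kafka1-0 -- kafka-topics.sh --zookeeper zookeeper1:2181 \
  --create --topic test --partitions 3 --replication-factor 3

# describe it: every partition should list a leader and three replicas
kubectl -n kafka exec kafka1-0 -- kafka-topics.sh --zookeeper zookeeper1:2181 \
  --describe --topic test
```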

III. Troubleshooting

1. With Kafka already connected to ZooKeeper, after a ZooKeeper restart ZooKeeper keeps printing the log below while Kafka keeps logging that it cannot connect. Several ZooKeeper addresses are configured, so why does that not seem to help?

2020-06-02 11:29:07,541 [myid:1] - INFO  [NIOWorkerThread-2:ZooKeeperServer@1375] - Refusing session request for client /10.100.15.190:52678 as it has seen zxid 0x10000002e our last zxid is 0x0 client must try another server
2020-06-02 11:29:09,925 [myid:1] - INFO  [NIOWorkerThread-1:ZooKeeperServer@1375] - Refusing session request for client /10.100.5.193:42016 as it has seen zxid 0x10000003f our last zxid is 0x0 client must try another server
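The "our last zxid is 0x0" in the refusal means the restarted server came back with an empty data directory: without persistent storage (deliberately skipped in note 3 of section I) the ensemble state is lost on restart, so clients holding a newer zxid are told to try another server. A hedged sketch of what persistence could look like on each ZooKeeper StatefulSet (storage size and access mode are placeholders):

```yaml
# Inside the container spec of each zookeeper StatefulSet:
          volumeMounts:
            - name: data
              mountPath: /data          # default dataDir of the official image
# And at the same level as `template` in the StatefulSet spec:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 1Gi
```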

2. The client prints the following warning, but only for a brand-new topic. What is going on?

bash-4.4# kafka-console-producer.sh --broker-list kafka1:9092 --topic zipkin>
[2020-06-02 12:09:34,009] WARN [Producer clientId=console-producer] 1 partitions have leader brokers without a matching listener, including [zipkin-0] (org.apache.kafka.clients.NetworkClient)
>[2020-06-02 12:09:34,108] WARN [Producer clientId=console-producer] 1 partitions have leader brokers without a matching listener, including [zipkin-0] (org.apache.kafka.clients.NetworkClient)

Cause: Kafka's broker.id must be pinned; if it changes across restarts, the partition leader cannot be found and this error appears.
