Hyperledger Fabric in Practice (10): Kafka Cluster Deployment

Kafka cluster deployment

1. Preparation

Name       IP address        Hostname                Organization
zk1        192.168.247.101   zookeeper1
zk2        192.168.247.102   zookeeper2
zk3        192.168.247.103   zookeeper3
kafka1     192.168.247.201   kafka1
kafka2     192.168.247.202   kafka2
kafka3     192.168.247.203   kafka3
kafka4     192.168.247.204   kafka4
orderer0   192.168.247.91    orderer0.test.com
orderer1   192.168.247.92    orderer1.test.com
orderer2   192.168.247.93    orderer2.test.com
peer0      192.168.247.81    peer0.orggo.test.com    OrgGo
peer0      192.168.247.82    peer0.orgcpp.test.com   OrgCpp

To keep the whole cluster working smoothly, set up a working directory on each node, and make sure the path of that working directory is the same on every node.

# Create the working directory under the home directory on each of the nodes above:
$ mkdir ~/kafka
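
If you have SSH access to all of the machines, a small loop like the one below can create the directory everywhere in one pass. This is only a convenience sketch: the itcast user is taken from the scp examples later in this article, and the host list mirrors the table above, so adjust both to your environment.

# Hypothetical helper: create the same ~/kafka working directory on every node over SSH
for host in 192.168.247.101 192.168.247.102 192.168.247.103 \
            192.168.247.201 192.168.247.202 192.168.247.203 192.168.247.204 \
            192.168.247.91  192.168.247.92  192.168.247.93 \
            192.168.247.81  192.168.247.82; do
    ssh itcast@"$host" 'mkdir -p ~/kafka'
done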

2. Generating the certificate files

2.1 Writing the configuration file

# crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: test.com
    Specs:
      - Hostname: orderer0  # 1st orderer node: orderer0.test.com
      - Hostname: orderer1  # 2nd orderer node: orderer1.test.com
      - Hostname: orderer2  # 3rd orderer node: orderer2.test.com

PeerOrgs:
  - Name: OrgGo
    Domain: orggo.test.com
    Template:
      Count: 2  # the Go organization currently has two peer nodes
    Users:
      Count: 1

  - Name: OrgCpp
    Domain: orgcpp.test.com
    Template:
      Count: 2  # the Cpp organization currently has two peer nodes
    Users:
      Count: 1

2.2 Generating the certificates

$ cryptogen generate --config=crypto-config.yaml
$ tree ./ -L 1
./
├── crypto-config         -> directory containing the generated certificates
└── crypto-config.yaml
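
Before moving on, it is worth checking that cryptogen actually produced material for every node declared in crypto-config.yaml. The paths below simply follow the Domain and Hostname values above, so the listing should look roughly like this:

# Sanity check: one directory per orderer and per peer organization
$ ls crypto-config/ordererOrganizations/test.com/orderers/
orderer0.test.com  orderer1.test.com  orderer2.test.com
$ ls crypto-config/peerOrganizations/
orgcpp.test.com  orggo.test.com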

3. Generating the genesis block and channel files

3.1 Writing the configuration file

The configuration file must be named configtx.yaml; this name is fixed and cannot be changed.


---
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/test.com/msp

    - &go_org
        Name: OrgGoMSP
        ID: OrgGoMSP
        MSPDir: crypto-config/peerOrganizations/orggo.test.com/msp
        AnchorPeers:
            - Host: peer0.orggo.test.com
              Port: 7051

    - &cpp_org
        Name: OrgCppMSP
        ID: OrgCppMSP
        MSPDir: crypto-config/peerOrganizations/orgcpp.test.com/msp
        AnchorPeers:
            - Host: peer0.orgcpp.test.com
              Port: 7051

################################################################################
#
#   SECTION: Capabilities
#
################################################################################
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_2: true

################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:

################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    # Available types are "solo" and "kafka"
    OrdererType: kafka
    Addresses:
        # orderer node addresses
        - orderer0.test.com:7050
        - orderer1.test.com:7050
        - orderer2.test.com:7050

    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers: 
            # Kafka broker addresses
            - 192.168.247.201:9092
            - 192.168.247.202:9092
            - 192.168.247.203:9092
            - 192.168.247.204:9092
    Organizations:

################################################################################
#
#   Profile
#
################################################################################
Profiles:
    OrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *go_org
                    - *cpp_org
    OrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *go_org
                - *cpp_org
            Capabilities:
                <<: *ApplicationCapabilities

3.2 Generating the genesis block and channel files

  • Generate the genesis block

    # First create a channel-artifacts directory to hold the generated files, so the paths match the settings used in the configuration templates later on
    $ mkdir channel-artifacts
    # Generate the genesis block
    $ configtxgen -profile OrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
    
  • Generate the channel file (a quick way to inspect both artifacts is sketched just after this list)

    # Generate the channel transaction file
    $ configtxgen -profile OrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID testchannel
    

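Both artifacts can be dumped back as JSON with configtxgen's inspect flags, which is a quick way to confirm the right profile was applied before copying the files to the other hosts:

# Optional: print the generated artifacts as JSON for verification
$ configtxgen -inspectBlock ./channel-artifacts/genesis.block
$ configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx
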
4. Zookeeper setup

4.1 Basic concepts

Zookeeper is a clustered service widely used in distributed systems for distributed state management, coordination, configuration management, and distributed locking.

  • How Zookeeper operates

    Before configuring it, let's look at Zookeeper's basic workflow:

    • Electing a leader
      • Many different election algorithms exist, but the criteria the elected leader must meet are the same
      • The leader must hold the highest transaction ID, comparable to root authority.
      • A majority of the machines in the cluster must respond to and follow the elected leader.
    • Data synchronization
  • Zookeeper cluster size

    A Zookeeper cluster typically has 3, 5, or 7 members. The count should be odd to avoid split-brain situations, and greater than 1 to avoid a single point of failure; going beyond 7 Zookeeper servers is generally considered overkill.

4.2 Zookeeper configuration file template

  • Configuration file template

    Let's look at an example configuration file to see how Zookeeper is configured:

    version: '2'
    services:
      zookeeper1: # service name, chosen by you
        container_name: zookeeper1 # container name, chosen by you
        hostname: zookeeper1  # hostname used to reach this node; chosen by you, must map to the node's IP
        image: hyperledger/fabric-zookeeper:latest
        restart: always # set to always
        environment:
          # The ID must be unique within the ensemble and must be a value between 1 and 255.
          - ZOO_MY_ID=1
          # server.x=hostname:port1:port2
          - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
        ports:
          - 2181:2181
          - 2888:2888
          - 3888:3888
        extra_hosts:
          - "zookeeper1:192.168.24.201"
          - "zookeeper2:192.168.24.202"
          - "zookeeper3:192.168.24.203"
          - "kafka1:192.168.24.204"
          - "kafka2:192.168.24.205"
          - "kafka3:192.168.24.206"
          - "kafka4:192.168.24.207"
    
  • Explanation of the relevant settings:

    1. Docker restart policies

      • no – do not restart the container automatically when it exits (the default).
      • on-failure[:max-retries] – restart only when the container exits with a non-zero status code, e.g. on-failure:10
      • always – always restart the container, regardless of the exit status code
      • unless-stopped – always restart the container regardless of the exit status code, except that when the daemon starts, containers that were already stopped are not started.
    2. Environment variables

      • ZOO_MY_ID

        The ID of this Zookeeper server within the cluster; the value must be unique in the cluster, range: 1-255

      • ZOO_SERVERS

        • The list of servers that make up the Zookeeper ensemble
        • Each server entry in the list carries two port numbers
          • The first: used by followers to connect to the leader
          • The second: used for leader election
    3. The three important ports of a Zookeeper server:

      • Client access port: 2181
      • Port followers use to connect to the leader within the ensemble: 2888
      • Leader election port within the ensemble: 3888
    4. extra_hosts

      • Maps server names to the IP addresses they should resolve to
      • zookeeper1:192.168.24.201
        • Whenever the name zookeeper1 is seen, it is resolved to the IP address 192.168.24.201
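
Once a concrete file such as zookeeper1.yaml (section 4.3) has been written, docker-compose itself can validate it and print the fully resolved configuration; this is a generic docker-compose feature and a cheap way to catch indentation or extra_hosts mistakes before starting anything:

# Validate the compose file and show the resolved configuration
$ docker-compose -f zookeeper1.yaml config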

4.3 Configuration for each Zookeeper node

zookeeper1 configuration

# zookeeper1.yaml
version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and must be a value between 1 and 255.
      - ZOO_MY_ID=1
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

zookeeper2 configuration

# zookeeper2.yaml
version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and must be a value between 1 and 255.
      - ZOO_MY_ID=2
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

zookeeper3 configuration

# zookeeper3.yaml
version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper:latest
    restart: always
    environment:
      # The ID must be unique within the ensemble and must be a value between 1 and 255.
      - ZOO_MY_ID=3
      # server.x=[hostname]:nnnnn[:nnnnn]
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

5. Kafka setup

5.1 Basic concepts

Kafka is a distributed messaging system written in Scala at LinkedIn, where it serves as the foundation for LinkedIn's activity stream and operational data pipeline. It scales horizontally and offers high throughput.

In a Fabric network, transactions are submitted by peer nodes to the orderer service. Relative to Kafka, the orderer is the upstream module: it orders the data and cuts blocks that conform to the configured policies and requirements. A distributed messaging system such as Kafka can then be used underneath that upstream module to help coordinate the business flow.

Some people describe Kafka as a consensus mode of equal trust, in which all Hyperledger Fabric participants are trusted parties because messages are always distributed evenly among them. In actual production use, however, rights are established through endorsement; strictly speaking, Kafka is better seen as just one mode, or type, of running a Fabric ordering service.

Zookeeper is a clustered service widely used in distributed systems for state management, coordination, configuration management, and distributed locks. Adding or removing Kafka brokers triggers corresponding events on the Zookeeper nodes; Kafka captures these events and performs a new round of load balancing, and clients also capture them to adjust their own processing.

The orderer service is the most important link in the Fabric transaction flow and the point through which all requests pass. It does not respond to a request immediately: first, block creation is gated by the batching conditions; second, it depends on message processing in the downstream Kafka cluster and must wait for its results.

5.2 Kafka configuration file template

  • Kafka configuration file template

    version: '2'
    
    services:
      kafka1: 
        container_name: kafka1
        hostname: kafka1
        image: hyperledger/fabric-kafka:latest
        restart: always
        environment:
          # broker.id
          - KAFKA_BROKER_ID=1
          - KAFKA_MIN_INSYNC_REPLICAS=2
          - KAFKA_DEFAULT_REPLICATION_FACTOR=3
          - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
          # 99 * 1024 * 1024 B
          - KAFKA_MESSAGE_MAX_BYTES=103809024 
          - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
          - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
          - KAFKA_LOG_RETENTION_MS=-1
          - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
        ports:
          - 9092:9092
        extra_hosts:
          - "zookeeper1:192.168.24.201"
          - "zookeeper2:192.168.24.202"
          - "zookeeper3:192.168.24.203"
          - "kafka1:192.168.24.204"
          - "kafka2:192.168.24.205"
          - "kafka3:192.168.24.206"
          - "kafka4:192.168.24.207"
    
  • Explanation of the settings

    1. Kafka's default port is 9092
    2. Environment variables:
    • KAFKA_BROKER_ID
      • A unique non-negative integer that serves as the broker's identifier
    • KAFKA_MIN_INSYNC_REPLICAS
      • Minimum number of in-sync replicas
      • This value must be smaller than the value of KAFKA_DEFAULT_REPLICATION_FACTOR
    • KAFKA_DEFAULT_REPLICATION_FACTOR
      • Default replication factor; this value must be smaller than the number of Kafka brokers
    • KAFKA_ZOOKEEPER_CONNECT
      • The set of Zookeeper nodes to connect to
    • KAFKA_MESSAGE_MAX_BYTES
      • The maximum size of a message in bytes
      • Corresponds to Orderer.BatchSize.AbsoluteMaxBytes in configtx.yaml
      • Because every message carries header information, this value should be slightly larger than the computed size; adding an extra 1 MB is enough
    • KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      • Maximum replica fetch size: the number of bytes of messages to try to fetch for each channel
      • AbsoluteMaxBytes < KAFKA_REPLICA_FETCH_MAX_BYTES <= KAFKA_MESSAGE_MAX_BYTES
    • KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      • Unclean leader election
        • Enabled: true
        • Disabled: false
    • KAFKA_LOG_RETENTION_MS=-1
      • The maximum time log data is retained before being pruned
      • Setting it to -1 disables time-based log pruning, which is what the ordering service relies on
    • KAFKA_HEAP_OPTS
      • Sets the JVM heap size; Kafka's default is 1 GB
        • -Xmx256M -> the maximum heap that may be allocated
        • -Xms128M -> the heap allocated initially
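
The byte values above are easy to mistype, so it helps to reproduce the arithmetic in the shell and, once a broker is running, to confirm the environment the container actually received. The container name kafka1 matches the configurations in the next section; both commands are plain shell/docker and involve nothing Fabric-specific:

# 99 MB and 100 MB expressed in bytes, matching the values used in these configs
$ echo $((99 * 1024 * 1024)) $((100 * 1024 * 1024))
103809024 104857600
# After startup, list the KAFKA_* variables the container actually received
$ docker exec kafka1 env | grep KAFKA_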

5.3 Configuration for each Kafka node

kafka1 configuration

# kafka1.yaml
version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

kafka2 configuration

# kafka2.yaml
version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

kafka3 configuration

# kafka3.yaml
version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

kafka4 configuration

# kafka4.yaml
version: '2'
services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka:latest
    restart: always
    environment:
      # broker.id
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      # 100 * 1024 * 1024 B
      - KAFKA_MESSAGE_MAX_BYTES=104857600 
      - KAFKA_REPLICA_FETCH_MAX_BYTES=104857600
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_HEAP_OPTS=-Xmx512M -Xms256M
    ports:
      - 9092:9092
    extra_hosts:
      - "zookeeper1:192.168.247.101"
      - "zookeeper2:192.168.247.102"
      - "zookeeper3:192.168.247.103"
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

6. Orderer node setup

6.1 Orderer node configuration file template

  • Orderer node configuration file template

    version: '2'
    
    services:
    
      orderer0.example.com:
        container_name: orderer0.example.com
        image: hyperledger/fabric-orderer:latest
        environment:
          - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
          - ORDERER_GENERAL_LOGLEVEL=debug
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
          - ORDERER_GENERAL_LISTENPORT=7050
          - ORDERER_GENERAL_GENESISMETHOD=file
          - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
          # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=true
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
          
          - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
          - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
          - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
          - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
          - ORDERER_KAFKA_VERBOSE=true
          - ORDERER_KAFKA_BROKERS=[192.168.24.204:9092,192.168.24.205:9092,192.168.24.206:9092,192.168.24.207:9092]
        working_dir: /opt/gopath/src/github.com/hyperledger/fabric
        command: orderer
        volumes:
          - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
          - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
          - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
        networks:
          default:
            aliases:
              - aberic
        ports:
          - 7050:7050
        extra_hosts:
          - "kafka1:192.168.24.204"
          - "kafka2:192.168.24.205"
          - "kafka3:192.168.24.206"
          - "kafka4:192.168.24.207"
    
  • Details

    1. Environment variables
      • ORDERER_KAFKA_RETRY_LONGINTERVAL
        • How long to wait between retries, in seconds
      • ORDERER_KAFKA_RETRY_LONGTOTAL
        • The total amount of time to keep retrying, in seconds
      • ORDERER_KAFKA_RETRY_SHORTINTERVAL
        • How long to wait between retries, in seconds
      • ORDERER_KAFKA_RETRY_SHORTTOTAL
        • The total amount of time to keep retrying, in seconds
      • ORDERER_KAFKA_VERBOSE
        • Whether to log the interaction with Kafka; enabled: true, disabled: false
      • ORDERER_KAFKA_BROKERS
        • The set of Kafka brokers to connect to
    2. How the retry durations combine
      • The orderer first retries at ORDERER_KAFKA_RETRY_SHORTINTERVAL intervals, for a total duration of ORDERER_KAFKA_RETRY_SHORTTOTAL
      • If it still has not reconnected, it then retries at ORDERER_KAFKA_RETRY_LONGINTERVAL intervals, for a total duration of ORDERER_KAFKA_RETRY_LONGTOTAL
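
With the values used in this article (1s/30s and 10s/100s), the orderer therefore polls Kafka roughly every second for the first 30 seconds and then every 10 seconds for a further 100 seconds. Whether the connection eventually succeeded is easiest to see in the container log; the container name below comes from the configurations in the next section:

# Inspect the orderer's Kafka-related log lines (retries, broker connections)
$ docker logs orderer0.test.com 2>&1 | grep -i kafka | tail -n 20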

6.2 Configuration for each orderer node

orderer0 configuration

# orderer0.yaml
version: '2'

services:

  orderer0.test.com:
    container_name: orderer0.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer0.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

orderer1 configuration

# orderer1.yaml
version: '2'

services:

  orderer1.test.com:
    container_name: orderer1.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer1.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

orderer2 configuration

# orderer2.yaml
version: '2'

services:

  orderer2.test.com:
    container_name: orderer2.test.com
    image: hyperledger/fabric-orderer:latest
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=kafka_default
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_KAFKA_BROKERS=[192.168.247.201:9092,192.168.247.202:9092,192.168.247.203:9092,192.168.247.204:9092]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer2.test.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/test.com/orderers/orderer2.test.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - kafka
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.247.201"
      - "kafka2:192.168.247.202"
      - "kafka3:192.168.247.203"
      - "kafka4:192.168.247.204"

7. Starting the cluster

The Kafka-based cluster is started in this order: first the Zookeeper cluster, then the Kafka cluster, and finally the orderer cluster. Peer nodes only ever communicate with the orderer nodes, so whether the ordering service runs in solo or Kafka mode makes no difference to the peers; once the Kafka-backed cluster is up, the corresponding peer nodes can be started. A sketch of the full startup sequence over SSH follows below, and the individual steps are then described host by host.
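
Assuming the per-node compose files from sections 4 to 6 are already sitting in ~/kafka on their respective hosts, the whole sequence can be driven from one machine over SSH. This is only a sketch of the order described above: the itcast user comes from the scp examples below, and each host is assumed to hold exactly one matching compose file.

# Start zookeeper, then kafka, then the orderers, host by host
for host in 192.168.247.101 192.168.247.102 192.168.247.103; do
    ssh itcast@"$host" 'cd ~/kafka && docker-compose -f zookeeper*.yaml up -d'
done
for host in 192.168.247.201 192.168.247.202 192.168.247.203 192.168.247.204; do
    ssh itcast@"$host" 'cd ~/kafka && docker-compose -f kafka*.yaml up -d'
done
for host in 192.168.247.91 192.168.247.92 192.168.247.93; do
    ssh itcast@"$host" 'cd ~/kafka && docker-compose -f orderer*.yaml up -d'
done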

7.1 Starting the Zookeeper cluster

  • zookeeper1:192.168.247.101

    $ cd ~/kafka
    # Place the prepared zookeeper1.yaml in this directory, then start the container with docker-compose
    # The -d flag is omitted here so you can watch this zookeeper server start up
    $ docker-compose -f zookeeper1.yaml up
    
  • zookeeper2:192.168.247.102

    $ cd ~/kafka
    # Place the prepared zookeeper2.yaml in this directory, then start the container with docker-compose
    # The -d flag is omitted here so you can watch this zookeeper server start up
    $ docker-compose -f zookeeper2.yaml up
    
  • zookeeper3:192.168.247.103

    $ cd ~/kafka
    # Place the prepared zookeeper3.yaml in this directory, then start the container with docker-compose
    # The -d flag is omitted here so you can watch this zookeeper server start up
    $ docker-compose -f zookeeper3.yaml up
    

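Once all three containers are up, a lightweight way to check the ensemble is ZooKeeper's "four letter word" commands over the client port (they require nc/netcat, and on newer ZooKeeper releases they may need to be whitelisted first):

# srvr reports the version and whether the node is a leader or follower
$ echo srvr | nc 192.168.247.101 2181
# ruok should answer "imok" if the server is running and healthy
$ echo ruok | nc 192.168.247.102 2181
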
7.2 Starting the Kafka cluster

  • kafka1:192.168.247.201

    $ cd ~/kafka
    # Place the prepared kafka1.yaml in this directory, then start the container with docker-compose
    # The -d flag is omitted here so you can watch this kafka server start up
    $ docker-compose -f kafka1.yaml up
    
  • kafka2:192.168.247.202

    $ cd ~/kafka
    # Place the prepared kafka2.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f kafka2.yaml up -d
    
  • kafka3:192.168.247.203

    $ cd ~/kafka
    # Place the prepared kafka3.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f kafka3.yaml up -d
    
  • kafka4:192.168.247.204

    $ cd ~/kafka
    # Place the prepared kafka4.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f kafka4.yaml up
    

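Each broker registers an ephemeral znode under /brokers/ids in ZooKeeper when it joins, so a crude cross-check after starting all four brokers is to look for those entries (again via the four-letter-word interface) and to glance at each broker's log:

# The dump output lists ephemeral nodes; expect /brokers/ids/1 through /brokers/ids/4
$ echo dump | nc 192.168.247.101 2181 | grep /brokers/ids
# Each broker should log that it started successfully
$ docker logs kafka1 2>&1 | grep -i started | tail -n 5
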
7.3 Starting the orderer cluster

  • orderer0:192.168.247.91

    $ cd ~/kafka
    # Assuming the certificates and the channel/genesis artifacts were generated on this orderer0 host, the kafka working directory should already contain:
    $ tree ./ -L 1
    ./
    ├── channel-artifacts
    ├── configtx.yaml
    ├── crypto-config
    └── crypto-config.yaml
    # Place the prepared orderer0.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f orderer0.yaml up -d
    
  • orderer1:192.168.247.92

    # Copy the generated certificate directory and the channel/genesis artifacts to ~/kafka on this host
    $ cd ~/kafka
    # Create the crypto-config subdirectory
    $ mkdir crypto-config
    # Copy from the remote host (-r copies the directories recursively)
    $ scp -r [email protected]:/home/itcast/kafka/crypto-config/ordererOrganizations ./crypto-config
    # The genesis.block mounted in orderer1.yaml is also needed on this host
    $ scp -r [email protected]:/home/itcast/kafka/channel-artifacts ./
    # Place the prepared orderer1.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f orderer1.yaml up -d
    
  • orderer2:192.168.247.93

    # Copy the generated certificate directory and the channel/genesis artifacts to ~/kafka on this host
    $ cd ~/kafka
    # Create the crypto-config subdirectory
    $ mkdir crypto-config
    # Copy from the remote host (-r copies the directories recursively)
    $ scp -r [email protected]:/home/itcast/kafka/crypto-config/ordererOrganizations ./crypto-config
    # The genesis.block mounted in orderer2.yaml is also needed on this host
    $ scp -r [email protected]:/home/itcast/kafka/channel-artifacts ./
    # Place the prepared orderer2.yaml in this directory, then start the container with docker-compose
    $ docker-compose -f orderer2.yaml up -d
    

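Before moving on to the peers, it is worth confirming that each orderer container stayed up and that its listen port is reachable from outside. These are generic docker/netcat checks; the container name and IP below are for orderer0:

# The container should be running (restart loops usually indicate missing volumes)
$ docker ps --filter name=orderer0.test.com
# From another machine: probe the orderer port without sending data
$ nc -z 192.168.247.91 7050 && echo "orderer0 reachable"
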
7.4 Starting the peer nodes

Deploying and operating the peer nodes is exactly the same as in the multi-host, multi-node Solo deployment, so it is not repeated here; please refer to the relevant documentation.
