A detailed walkthrough of setting up a fabric multi-orderer environment

## Introduction

I wrote this article almost two years ago and had only shared it inside my company; I'm putting it here today as a record.

The topology used in this article is 3 Kafka brokers, 2 orderers, and 4 peers.

The Kafka machines need a JDK installed,

and the peers need Go and Docker.

Because the production environment requires it, everything here is installed offline; see the attachments for the installers.

OS: Ubuntu 16.04.3

## hosts

Copy these hosts entries to every machine:

```shell
vim /etc/hosts

192.168.3.181  node1  orderer1.local
192.168.3.183  node2  orderer2.local
192.168.3.185  node3  peer0.org1.local


192.168.3.184  kafka1 
192.168.3.186  kafka2
192.168.3.119  kafka3

# distribute to the other machines
scp /etc/hosts [email protected]:/etc/
```
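Rather than running one `scp` per machine, the distribution step can be sketched as a loop. The host list below simply mirrors the /etc/hosts entries above; adjust it to your own nodes:

```shell
# Build one scp command per host and echo them first, so the list can be
# reviewed before anything is copied; drop the echo (or pipe to sh) to run.
hosts="node1 node2 node3 kafka1 kafka2 kafka3"
for h in $hosts; do
  echo "scp /etc/hosts root@${h}:/etc/hosts"
done
```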

## Kafka / ZooKeeper cluster configuration

On kafka1, kafka2, and kafka3:

### Install the JDK

```shell
mkdir /usr/local/software 
# Upload the installers to /usr/local/software/. For ease of distribution I keep
# every installer here, and the later steps assume this path.

# Distribute the software directory to the other servers
scp -r /usr/local/software [email protected]:/usr/local/software
# Extract the JDK into /usr/local/
tar zxf /usr/local/software/jdk-8u144-linux-x64.tar.gz -C /usr/local/
# Set the environment variables. As a shortcut this also sets the Kafka and Go
# variables needed by the other servers, so they won't be repeated later; to
# avoid omissions, it is best to run this on every server.
echo "export JAVA_HOME=/usr/local/jdk1.8.0_144" >> ~/.profile
echo 'export JRE_HOME=${JAVA_HOME}/jre' >>  ~/.profile
echo 'export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib' >> ~/.profile
echo 'export PATH=${JAVA_HOME}/bin:$PATH' >> ~/.profile
echo 'export KAFKA_HOME=/usr/local/kafka_2.10-0.10.2.0' >> ~/.profile
echo 'export PATH=$KAFKA_HOME/bin:$PATH' >> ~/.profile
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
echo 'export GOPATH=/root/go' >> ~/.profile

# apply the environment variables
source ~/.profile
```
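Before moving on, it is worth checking that the appended lines expand the way you expect. This sketch sources the two key exports from a throwaway profile, so the real ~/.profile stays untouched:

```shell
# Write the two key exports to a temp profile and source it in place
TMP_PROFILE=$(mktemp)
echo 'export JAVA_HOME=/usr/local/jdk1.8.0_144' >> "$TMP_PROFILE"
echo 'export PATH=${JAVA_HOME}/bin:$PATH' >> "$TMP_PROFILE"
. "$TMP_PROFILE"
# JAVA_HOME should now resolve, and its bin directory should be on PATH
echo "$JAVA_HOME"
case ":$PATH:" in
  *":${JAVA_HOME}/bin:"*) echo "PATH ok" ;;
  *) echo "PATH missing ${JAVA_HOME}/bin" ;;
esac
rm -f "$TMP_PROFILE"
```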

### Install Kafka and ZooKeeper

```shell
# Extract ZooKeeper into /usr/local/
tar zxf /usr/local/software/zookeeper-3.4.10.tar.gz -C /usr/local/
# Extract Kafka into /usr/local/
tar zxf /usr/local/software/kafka_2.10-0.10.2.0.tgz -C /usr/local/
# Edit server.properties: change the keys below if present, append them if not.
# The commands are collected here; run the matching block on each machine.
```

kafka1 server.properties

```shell
sed -i 's/broker.id=0/broker.id=1/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka1:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# keys modified/added by the commands above ################
## broker.id=1
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka1:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```
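Note that `sed -i 's/tmp/data/g'` rewrites every occurrence of `tmp` in the file (in the stock server.properties that only affects `log.dirs`), so it is worth dry-running the edits against a scratch copy first. A minimal sketch, where the scratch file contains only the three keys the commands target:

```shell
# Apply the same substitutions to a scratch file and inspect the result
P=$(mktemp)
printf 'broker.id=0\nlog.dirs=/tmp/kafka-logs\nzookeeper.connect=localhost:2181\n' > "$P"
sed -i 's/broker.id=0/broker.id=1/g' "$P"
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' "$P"
sed -i 's/tmp/data/g' "$P"
cat "$P"
```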

kafka2 server.properties

```shell
sed -i 's/broker.id=0/broker.id=2/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka2:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# keys modified/added by the commands above ################
## broker.id=2
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka2:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

kafka3 server.properties

```shell
sed -i 's/broker.id=0/broker.id=3/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/zookeeper.connect=localhost:2181/zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'listeners=PLAINTEXT://kafka3:9092' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'unclean.leader.election.enable=false' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'min.insync.replicas=1' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties
echo 'default.replication.factor=2' >> /usr/local/kafka_2.10-0.10.2.0/config/server.properties

################# keys modified/added by the commands above #################
## broker.id=3
## log.dirs=/data/kafka-logs
## zookeeper.connect=kafka1:2181,kafka2:2181,kafka3:2181

## listeners=PLAINTEXT://kafka3:9092
## unclean.leader.election.enable=false
## min.insync.replicas=1
## default.replication.factor=2
```

Edit the ZooKeeper config bundled with Kafka and create the myid file.

Again, the commands are collected below; run the matching block on each machine.

```shell
# Create the Kafka and ZooKeeper data directories; any path works as long as it matches log.dirs in the properties files
mkdir -p /data/kafka-logs
mkdir -p /data/zookeeper
```

zookeeper 1

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 1 > /data/zookeeper/myid

################# keys modified/added by the commands above #################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## myid created and set to 1
```

zookeeper 2

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 2 > /data/zookeeper/myid

################# keys modified/added by the commands above #################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## myid created and set to 2
```

zookeeper 3

```shell
sed -i 's/tmp/data/g' /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'initLimit=5' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'syncLimit=2' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.1=kafka1:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.2=kafka2:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 'server.3=kafka3:2888:3888' >>  /usr/local/kafka_2.10-0.10.2.0/config/zookeeper.properties
echo 3 > /data/zookeeper/myid

################# keys modified/added by the commands above #################
## zookeeper.properties

### clientPort=2181
### initLimit=5
### syncLimit=2
### server.1=kafka1:2888:3888
### server.2=kafka2:2888:3888
### server.3=kafka3:2888:3888

## myid created and set to 3
```

Appendix: optional server.properties parameters

```shell
broker.id=0  # unique ID of this broker in the cluster, same idea as ZooKeeper's myid
port=19092  # port Kafka serves on; the default is 9092
host.name=192.168.7.100  # off by default; 0.8.1 had a bug here with DNS resolution and failure rates
num.network.threads=3  # number of threads the broker uses for network processing
num.io.threads=8  # number of threads the broker uses for disk I/O
log.dirs=/opt/kafka/kafkalogs/  # where messages are stored; may be a comma-separated list of directories. num.io.threads should not be smaller than the number of directories; with multiple directories, a newly created topic persists its messages to whichever listed directory holds the fewest partitions
socket.send.buffer.bytes=102400  # send buffer size; data is buffered until it reaches this size before being sent, which improves performance
socket.receive.buffer.bytes=102400  # receive buffer size; data is flushed to disk once it reaches this size
socket.request.max.bytes=104857600  # maximum size of a request sent to or received from Kafka; must not exceed the JVM heap size
num.partitions=1  # default number of partitions per topic
log.retention.hours=168  # default maximum message retention: 168 hours (7 days)
message.max.byte=5242880  # maximum stored message size: 5 MB
default.replication.factor=2  # number of replicas Kafka keeps per message; if one replica fails, another can keep serving
replica.fetch.max.bytes=5242880  # maximum number of bytes fetched per replica request
log.segment.bytes=1073741824  # Kafka appends messages to segment files; once a segment exceeds this size, a new file is started
log.retention.check.interval.ms=300000  # every 300000 ms, check against the retention configured above (log.retention.hours=168) and delete any expired messages
log.cleaner.enable=false  # whether to enable log compaction; usually left off, though enabling it can improve performance
zookeeper.connect=192.168.7.100:12181,192.168.7.101:12181,192.168.7.107:12181  # ZooKeeper connection string
```

Startup

```shell
# start ZooKeeper
nohup zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties >> ~/zookeeper.log 2>&1 &
# start Kafka
nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties  >> ~/kafka.log 2>&1 &

jps
# check that the Kafka and ZooKeeper processes are running
1462 Jps
1193 Kafka
937 QuorumPeerMain

# Kafka test:
# (1) on one machine, create a topic and a producer:
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper kafka1:2181 --replication-factor 2 --partitions 1 --topic test

$KAFKA_HOME/bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic test

# (2) consume the messages on the other two machines
$KAFKA_HOME/bin/kafka-console-consumer.sh --zookeeper kafka1:2181 --topic test --from-beginning
```

Kafka deployment troubleshooting

```shell
# check that listeners is set in server.properties
# check that the log.dirs directory exists and is readable/writable
# check that the initial value was written to the myid file
# check that there is enough free memory

# other useful Kafka and ZooKeeper commands

# list all topics
$KAFKA_HOME/bin/kafka-topics.sh --zookeeper kafka1:2181 --list
# describe a single topic (test)
$KAFKA_HOME/bin/kafka-topics.sh --describe --zookeeper kafka1:2181 --topic test

# stop Kafka
kafka-server-stop.sh
# stop ZooKeeper
zookeeper-server-stop.sh
```

On node3, node4, node5, and node6:

## Install Go

```shell
# extract the Go tarball into /usr/local/
tar zxf /usr/local/software/go1.9.linux-amd64.tar.gz -C /usr/local/
```

## Install Docker

```shell
# install Docker from the deb packages (Ubuntu deb installers)
dpkg -i /usr/local/software/*.deb

# start Docker
service docker start
# check the installed version
docker version

Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64
 Experimental: false
 
# load the Fabric images
docker load -i /usr/local/software/fabric-javaenv.tar.gz
docker load -i /usr/local/software/fabric-ccenv.tar.gz
docker load -i /usr/local/software/fabric-baseimage.tar.gz
docker load -i /usr/local/software/fabric-baseos.tar.gz
```

## Fabric configuration

### Configuration file overview

#### orderer.yaml

    The configuration file the orderer reads at startup

```yaml
---
 ################################################################################
 #
 #   Orderer Configuration
 #
 #   - This controls the type and configuration of the orderer.
 #
 ################################################################################
General:

     # Ledger Type: The ledger type to provide to the orderer.
     # Two non-production ledger types are provided for test purposes only:
     #  - ram: An in-memory ledger whose contents are lost on restart.
     #  - json: A simple file ledger that writes blocks to disk in JSON format.
     # Only one production ledger type is provided:
     #  - file: A production file-based ledger.
    LedgerType: file

     # Listen address: The IP on which to bind to listen.
    ListenAddress: 127.0.0.1

     # Listen port: The port on which to bind to listen.
    ListenPort: 7050

     # TLS: TLS settings for the GRPC server.
    TLS:
        Enabled: false
        PrivateKey: tls/server.key
        Certificate: tls/server.crt
        RootCAs:
          - tls/ca.crt
        ClientAuthEnabled: false
        ClientRootCAs:

     # Log Level: The level at which to log. This accepts logging specifications
     # per: fabric/docs/Setup/logging-control.md
    LogLevel: info

     # Genesis method: The method by which the genesis block for the orderer
     # system channel is specified. Available options are "provisional", "file":
     #  - provisional: Utilizes a genesis profile, specified by GenesisProfile,
     #                 to dynamically generate a new genesis block.
     #  - file: Uses the file provided by GenesisFile as the genesis block.
    GenesisMethod: provisional

     # Genesis profile: The profile to use to dynamically generate the genesis
     # block to use when initializing the orderer system channel and
     # GenesisMethod is set to "provisional". See the configtx.yaml file for the
     # descriptions of the available profiles. Ignored if GenesisMethod is set to
     # "file".
    GenesisProfile: SampleInsecureSolo

     # Genesis file: The file containing the genesis block to use when
     # initializing the orderer system channel and GenesisMethod is set to
     # "file". Ignored if GenesisMethod is set to "provisional".
    GenesisFile: genesisblock

     # LocalMSPDir is where to find the private crypto material needed by the
     # orderer. It is set relative here as a default for dev environments but
     # should be changed to the real location in production.
    LocalMSPDir: msp

     # LocalMSPID is the identity to register the local MSP material with the MSP
     # manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
     # ID of one of the organizations defined in the orderer system channel's
     # /Channel/Orderer configuration. The sample organization defined in the
     # sample configuration provided has an MSP ID of "DEFAULT".
    LocalMSPID: DEFAULT

     # Enable an HTTP service for Go "pprof" profiling as documented at:
     # https://golang.org/pkg/net/http/pprof
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060

     # BCCSP configures the blockchain crypto service providers.
    BCCSP:
         # Default specifies the preferred blockchain crypto service provider
         # to use. If the preferred provider is not available, the software
         # based provider ("SW") will be used.
         # Valid providers are:
         #  - SW: a software based crypto provider
         #  - PKCS11: a CA hardware security module crypto provider.
        Default: SW

         # SW configures the software based blockchain crypto provider.
        SW:
             # TODO: The default Hash and Security level needs refactoring to be
             # fully configurable. Changing these defaults requires coordination
             # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
             # Location of key store. If this is unset, a location will be
             # chosen using: 'LocalMSPDir'/keystore
            FileKeyStore:
                KeyStore:

 ################################################################################
 #
 #   SECTION: File Ledger
 #
 #   - This section applies to the configuration of the file or json ledgers.
 #
 ################################################################################
FileLedger:

     # Location: The directory to store the blocks in.
     # NOTE: If this is unset, a new temporary location will be chosen every time
     # the orderer is restarted, using the prefix specified by Prefix.
    Location: /var/hyperledger/production/orderer

     # The prefix to use when generating a ledger directory in temporary space.
     # Otherwise, this value is ignored.
    Prefix: hyperledger-fabric-ordererledger

 ################################################################################
 #
 #   SECTION: RAM Ledger
 #
 #   - This section applies to the configuration of the RAM ledger.
 #
 ################################################################################
RAMLedger:

     # History Size: The number of blocks that the RAM ledger is set to retain.
     # WARNING: Appending a block to the ledger might cause the oldest block in
     # the ledger to be dropped in order to limit the number total number blocks
     # to HistorySize. For example, if history size is 10, when appending block
     # 10, block 0 (the genesis block!) will be dropped to make room for block 10.
    HistorySize: 1000

 ################################################################################
 #
 #   SECTION: Kafka
 #
 #   - This section applies to the configuration of the Kafka-based orderer, and
 #     its interaction with the Kafka cluster.
 #
 ################################################################################
Kafka:

     # Retry: What do if a connection to the Kafka cluster cannot be established,
     # or if a metadata request to the Kafka cluster needs to be repeated.
    Retry:
         # When a new channel is created, or when an existing channel is reloaded
         # (in case of a just-restarted orderer), the orderer interacts with the
         # Kafka cluster in the following ways:
         # 1. It creates a Kafka producer (writer) for the Kafka partition that
         # corresponds to the channel.
         # 2. It uses that producer to post a no-op CONNECT message to that
         # partition
         # 3. It creates a Kafka consumer (reader) for that partition.
         # If any of these steps fail, they will be re-attempted every
         # <ShortInterval> for a total of <ShortTotal>, and then every
         # <LongInterval> for a total of <LongTotal> until they succeed.
         # Note that the orderer will be unable to write to or read from a
         # channel until all of the steps above have been completed successfully.
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
         # Affects the socket timeouts when waiting for an initial connection, a
         # response, or a transmission. See Config.Net for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
         # Affects the metadata requests when the Kafka cluster is in the middle
         # of a leader election.See Config.Metadata for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
         # What to do if posting a message to the Kafka cluster fails. See
         # Config.Producer for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        # What to do if reading from the Kafka cluster fails. See
         # Config.Consumer for more info:
         # https://godoc.org/github.com/Shopify/sarama#Config
        Consumer:
            RetryBackoff: 2s

     # Verbose: Enable logging for interactions with the Kafka cluster.
    Verbose: false

     # TLS: TLS settings for the orderer's connection to the Kafka cluster.
    TLS:

       # Enabled: Use TLS when connecting to the Kafka cluster.
      Enabled: false

       # PrivateKey: PEM-encoded private key the orderer will use for
       # authentication.
      PrivateKey:
         # As an alternative to specifying the PrivateKey here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of PrivateKey.
         #File: path/to/PrivateKey

       # Certificate: PEM-encoded signed public key certificate the orderer will
       # use for authentication.
      Certificate:
         # As an alternative to specifying the Certificate here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of Certificate.
         #File: path/to/Certificate

       # RootCAs: PEM-encoded trusted root certificates used to validate
       # certificates from the Kafka cluster.
      RootCAs:
         # As an alternative to specifying the RootCAs here, uncomment the
         # following "File" key and specify the file name from which to load the
         # value of RootCAs.
         #File: path/to/RootCAs

     # Kafka version of the Kafka cluster brokers (defaults to 0.9.0.1)
    Version:
```
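In practice you rarely edit orderer.yaml per host: the orderer loads its configuration through viper, so keys can typically be overridden with `ORDERER_`-prefixed environment variables whose nested names are joined by underscores. A sketch, where the two keys correspond to `General.ListenAddress` and `General.LocalMSPID` above:

```shell
# Override orderer.yaml values via ORDERER_-prefixed environment variables
export ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
export ORDERER_GENERAL_LOCALMSPID=DEFAULT
echo "$ORDERER_GENERAL_LISTENADDRESS $ORDERER_GENERAL_LOCALMSPID"
```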

#### crypto-config.yaml

    Generates the network topology and certificates

    This file generates a certificate library for each organization and each of its members. Every organization is assigned a root certificate (ca-cert) that binds a set of peers and orderers to that organization. In fabric, transactions and communications are signed with a participant's private key (keystore) and verified with the corresponding public key (signcerts). Finally, Users Count=1 means each Template gets one ordinary user (note: Admin is Admin and is not included in this count); with 1 configured here, we get a single ordinary user, [email protected]. Adjust this file to your actual needs, adding or removing Orgs, Users, and so on.

```yaml
# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: orderer.local
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer1.local
        CommonName: orderer1.local
      - Hostname: orderer2.local
        CommonName: orderer2.local
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.local
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration.  Most users will want to use Template, below
    #
    # Specs is an array of Spec entries.  Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN.  By default, this is the template:
    #
    #                              "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    Specs:
      - Hostname: peer0.org1.local
        CommonName: peer0.org1.local
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive.  You may define both
    # sections and the aggregate nodes will be created for you.  Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.local
    Specs:
      - Hostname: peer0.org2.local
        CommonName: peer0.org2.local
    Template:
      Count: 2
    Users:
      Count: 1
```
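With crypto-config.yaml in place, the certificate material is generated with the cryptogen tool that ships with Fabric. A typical invocation (the output directory name here is an assumption; pick your own):

```shell
# Generate MSP material for every org/peer/orderer defined in crypto-config.yaml
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config
# The result is one folder per organization under
# ./crypto-config/ordererOrganizations and ./crypto-config/peerOrganizations
```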

#### core.yaml

    The configuration file the peer reads at startup

```yaml
###############################################################################
 #
 #    LOGGING section
 #
 ###############################################################################
logging:

     # Default logging levels are specified here.

     # Valid logging levels are case-insensitive strings chosen from

     #     CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG

     # The overall default logging level can be specified in various ways,
     # listed below from strongest to weakest:
     #
     # 1. The --logging-level=<level> command line option overrides all other
     #    default specifications.
     #
     # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to
     #    all peer commands if defined as a non-empty string.
     #
     # 3. The value of peer that directly follows in this file. It can also
     #    be set via the environment variable CORE_LOGGING_PEER.
     #
     # If no overall default level is provided via any of the above methods,
     # the peer will default to INFO (the value of defaultLevel in
     # common/flogging/logging.go)

     # Default for all modules running within the scope of a peer.
     # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL
     #       are not set
    peer:       info

     # The overall default values mentioned above can be overridden for the
     # specific components listed in the override section below.

     # Override levels for various peer modules. These levels will be
     # applied once the peer has completely started. They are applied at this
     # time in order to be sure every logger has been registered with the
     # logging package.
     # Note: the modules listed below are the only acceptable modules at this
     #       time.
    cauthdsl:   warning
    gossip:     warning
    ledger:     info
    msp:        warning
    policies:   warning
    grpc:       error

     # Message format for the peer logs
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

 ###############################################################################
 #
 #    Peer section
 #
 ###############################################################################
peer:

     # The Peer id is used for identifying this Peer instance.
    id: jdoe

     # The networkId allows for logical seperation of networks
    networkId: dev

     # The Address at local network interface this Peer will listen on.
     # By default, it will listen on all network interfaces
    listenAddress: 0.0.0.0:7051

     # The endpoint this peer uses to listen for inbound chaincode connections.
     #
     # The chaincode connection does not support TLS-mutual auth. Having a
     # separate listener for the chaincode helps isolate the chaincode
     # environment for enhanced security, so it is strongly recommended to
     # uncomment chaincodeListenAddress and specify a protected endpoint.
     #
     # If chaincodeListenAddress is not configured or equals to the listenAddress,
     # listenAddress will be used for chaincode connections. This is not
     # recommended for production.
     #
     # chaincodeListenAddress: 127.0.0.1:7052

     # When used as peer config, this represents the endpoint to other peers
     # in the same organization for peers in other organization, see
     # gossip.externalEndpoint for more info.
     # When used as CLI config, this means the peer's endpoint to interact with
    address: 0.0.0.0:7051

     # Whether the Peer should programmatically determine its address
     # This case is useful for docker containers.
    addressAutoDetect: false

     # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the
     # current setting
    gomaxprocs: -1

     # Gossip related configuration
    gossip:
         # Bootstrap set to initialize gossip with.
         # This is a list of other peers that this peer reaches out to at startup.
         # Important: The endpoints here have to be endpoints of peers in the same
         # organization, because the peer would refuse connecting to these endpoints
         # unless they are in the same organization as the peer.
        bootstrap: 127.0.0.1:7051

         # NOTE: orgLeader and useLeaderElection parameters are mutual exclusive.
         # Setting both to true would result in the termination of the peer
         # since this is undefined state. If the peers are configured with
         # useLeaderElection=false, make sure there is at least 1 peer in the
         # organization that its orgLeader is set to true.

         # Defines whenever peer will initialize dynamic algorithm for
         # "leader" selection, where leader is the peer to establish
         # connection with ordering service and use delivery protocol
         # to pull ledger blocks from ordering service. It is recommended to
         # use leader election for large networks of peers.
        useLeaderElection: false
         # Statically defines peer to be an organization "leader",
         # where this means that current peer will maintain connection
         # with ordering service and disseminate block across peers in
         # its own organization
        orgLeader: true

         # Overrides the endpoint that the peer publishes to peers
         # in its organization. For peers in foreign organizations
```

####configtx.yaml 

Defines the network topology and is used to generate the channel artifacts.

This file contains the definition of the network, which has two members (Org1 and Org2), each managing and maintaining two peers. At the top of the file, in the "Profiles" section, there are two headers: one for the orderer genesis block (TwoOrgsOrdererGenesis) and one for the channel (TwoOrgsChannel). These two headers are important because we pass them in as parameters when creating the artifacts. The file also records the MSP directory location of each member; the crypto directory contains each entity's admin certificate, CA certificate, signing certificate, and private key.
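The MSP paths referenced in this file (under crypto-config/) are produced beforehand by the cryptogen tool from crypto-config.yaml. A minimal sketch, assuming cryptogen is on the PATH and crypto-config.yaml sits in the current directory:

```shell
# Generate all org certificates and keys described in crypto-config.yaml.
# Output is written to ./crypto-config/, which the MSPDir entries below point into.
cryptogen generate --config=./crypto-config.yaml
```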

```yaml
---
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

 TwoOrgsOrdererGenesis:
     Orderer:
         <<: *OrdererDefaults
         Organizations:
             - *OrdererOrg
     Consortiums:
         SampleConsortium:
             Organizations:
                 - *Org1
                 - *Org2
 TwoOrgsChannel:
     Consortium: SampleConsortium
     Application:
         <<: *ApplicationDefaults
         Organizations:
             - *Org1
             - *Org2

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

 # SampleOrg defines an MSP using the sampleconfig.  It should never be used
 # in production but may be used as a template for other definitions
 - &OrdererOrg
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: OrdererOrg

     # ID to load the MSP definition as
     ID: OrdererMSP

     # MSPDir is the filesystem path which contains the MSP configuration
     MSPDir: crypto-config/ordererOrganizations/orderer.local/msp


 - &Org1
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: Org1MSP

     # ID to load the MSP definition as
     ID: Org1MSP

     MSPDir: crypto-config/peerOrganizations/org1.local/msp

     AnchorPeers:
         # AnchorPeers defines the location of peers which can be used
         # for cross org gossip communication.  Note, this value is only
         # encoded in the genesis block in the Application section context
#         - Host: peer0.org1.local
#           Port: 7051
#         - Host: peer1.org1.local
#           Port: 7051

 - &Org2
     # DefaultOrg defines the organization which is used in the sampleconfig
     # of the fabric.git development environment
     Name: Org2MSP

     # ID to load the MSP definition as
     ID: Org2MSP

     MSPDir: crypto-config/peerOrganizations/org2.local/msp

     AnchorPeers:
         # AnchorPeers defines the location of peers which can be used
         # for cross org gossip communication.  Note, this value is only
         # encoded in the genesis block in the Application section context
#         - Host: peer0.org2.local
#           Port: 7051
#         - Host: peer1.org2.local
#           Port: 7051
################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

 # Orderer Type: The orderer implementation to start
 # Available types are "solo" and "kafka"
 OrdererType: kafka

 Addresses:
     - orderer1.local:7050
     - orderer2.local:7050
     #- orderer3.local:7050
 # Batch Timeout: The amount of time to wait before creating a batch
 BatchTimeout: 2s

 # Batch Size: Controls the number of messages batched into a block
 BatchSize:

     # Max Message Count: The maximum number of messages to permit in a batch
     MaxMessageCount: 10

     # Absolute Max Bytes: The absolute maximum number of bytes allowed for
     # the serialized messages in a batch.
     AbsoluteMaxBytes: 99 MB

     # Preferred Max Bytes: The preferred maximum number of bytes allowed for
     # the serialized messages in a batch. A message larger than the preferred
     # max bytes will result in a batch larger than preferred max bytes.
     PreferredMaxBytes: 512 KB

 Kafka:
     # Brokers: A list of Kafka brokers to which the orderer connects
     # NOTE: Use IP:port notation
     Brokers:
         - kafka1:9092
         - kafka2:9092
         - kafka3:9092

 # Organizations is the list of orgs which are defined as participants on
 # the orderer side of the network
 Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
```
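The two profile names defined above are what we pass to configtxgen when creating the artifacts. A sketch, where the output paths and the channel name `mychannel` are illustrative choices, not values fixed by this article:

```shell
# Create the orderer genesis block from the TwoOrgsOrdererGenesis profile
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

# Create the channel creation transaction from the TwoOrgsChannel profile
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
```

configtxgen locates configtx.yaml via the directory given by FABRIC_CFG_PATH, so export that variable first if the file is not in the default config path.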

####core.yaml 

The configuration file that the peer reads at startup.
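Every value in core.yaml can also be overridden at startup through an environment variable: prefix `CORE_`, then the uppercased key path joined with underscores (the file itself mentions CORE_LOGGING_LEVEL and CORE_LOGGING_PEER). The values below are illustrative, matching the peer0.org1.local host from this setup:

```shell
# Environment variables override the corresponding core.yaml keys
# (peer.id, peer.address, peer.localMspId, logging.peer).
export CORE_PEER_ID=peer0.org1.local
export CORE_PEER_ADDRESS=peer0.org1.local:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_LOGGING_PEER=debug
```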

```yaml
###############################################################################
#
#    LOGGING section
#
###############################################################################
logging:

    # Default logging levels are specified here.

    # Valid logging levels are case-insensitive strings chosen from

    #     CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG

    # The overall default logging level can be specified in various ways,
    # listed below from strongest to weakest:
    #
    # 1. The --logging-level=<level> command line option overrides all other
    #    default specifications.
    #
    # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to
    #    all peer commands if defined as a non-empty string.
    #
    # 3. The value of peer that directly follows in this file. It can also
    #    be set via the environment variable CORE_LOGGING_PEER.
    #
    # If no overall default level is provided via any of the above methods,
    # the peer will default to INFO (the value of defaultLevel in
    # common/flogging/logging.go)

    # Default for all modules running within the scope of a peer.
    # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL
    #       are not set
    peer:       info

    # The overall default values mentioned above can be overridden for the
    # specific components listed in the override section below.

    # Override levels for various peer modules. These levels will be
    # applied once the peer has completely started. They are applied at this
    # time in order to be sure every logger has been registered with the
    # logging package.
    # Note: the modules listed below are the only acceptable modules at this
    #       time.
    cauthdsl:   warning
    gossip:     warning
    ledger:     info
    msp:        warning
    policies:   warning
    grpc:       error

    # Message format for the peer logs
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
#    Peer section
#
###############################################################################
peer:

    # The Peer id is used for identifying this Peer instance.
    id: jdoe

    # The networkId allows for logical separation of networks
    networkId: dev

    # The Address at local network interface this Peer will listen on.
    # By default, it will listen on all network interfaces
    listenAddress: 0.0.0.0:7051

    # The endpoint this peer uses to listen for inbound chaincode connections.
    #
    # The chaincode connection does not support TLS-mutual auth. Having a
    # separate listener for the chaincode helps isolate the chaincode
    # environment for enhanced security, so it is strongly recommended to
    # uncomment chaincodeListenAddress and specify a protected endpoint.
    #
    # If chaincodeListenAddress is not configured or equals to the listenAddress,
    # listenAddress will be used for chaincode connections. This is not
    # recommended for production.
    #
    # chaincodeListenAddress: 127.0.0.1:7052

    # When used as peer config, this represents the endpoint to other peers
    # in the same organization; for peers in other organizations, see
    # gossip.externalEndpoint for more info.
    # When used as CLI config, this means the peer's endpoint to interact with
    address: 0.0.0.0:7051

    # Whether the Peer should programmatically determine its address
    # This case is useful for docker containers.
    addressAutoDetect: false

    # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the
    # current setting
    gomaxprocs: -1

    # Gossip related configuration
    gossip:
        # Bootstrap set to initialize gossip with.
        # This is a list of other peers that this peer reaches out to at startup.
        # Important: The endpoints here have to be endpoints of peers in the same
        # organization, because the peer would refuse connecting to these endpoints
        # unless they are in the same organization as the peer.
        bootstrap: 127.0.0.1:7051

        # NOTE: orgLeader and useLeaderElection parameters are mutual exclusive.
        # Setting both to true would result in the termination of the peer
        # since this is undefined state. If the peers are configured with
        # useLeaderElection=false, make sure there is at least 1 peer in the
        # organization that its orgLeader is set to true.

        # Defines whenever peer will initialize dynamic algorithm for
        # "leader" selection, where leader is the peer to establish
        # connection with ordering service and use delivery protocol
        # to pull ledger blocks from ordering service. It is recommended to
        # use leader election for large networks of peers.
        useLeaderElection: false
        # Statically defines peer to be an organization "leader",
        # where this means that current peer will maintain connection
        # with ordering service and disseminate block across peers in
        # its own organization
        orgLeader: true

        # Overrides the endpoint that the peer publishes to peers
        # in its organization. For peers in foreign organizations
        # see 'externalEndpoint'
        endpoint:
        # Maximum count of blocks stored in memory
        maxBlockCountToStore: 100
        # Max time between consecutive message pushes(unit: millisecond)
        maxPropagationBurstLatency: 10ms
        # Max number of messages stored until a push is triggered to remote peers
        maxPropagationBurstSize: 10
        # Number of times a message is pushed to remote peers
        propagateIterations: 1
        # Number of peers selected to push messages to
        propagatePeerNum: 3
        # Determines frequency of pull phases(unit: second)
        pullInterval: 4s
        # Number of peers to pull from
        pullPeerNum: 3
        # Determines frequency of pulling state info messages from peers(unit: second)
        requestStateInfoInterval: 4s
        # Determines frequency of pushing state info messages to peers(unit: second)
        publishStateInfoInterval: 4s
        # Maximum time a stateInfo message is kept until expired
        stateInfoRetentionInterval:
        # Time from startup certificates are included in Alive messages(unit: second)
        publishCertPeriod: 10s
        # Should we skip verifying block messages or not (currently not in use)
        skipBlockVerification: false
        # Dial timeout(unit: second)
        dialTimeout: 3s
        # Connection timeout(unit: second)
        connTimeout: 2s
        # Buffer size of received messages
        recvBuffSize: 20
        # Buffer size of sending messages
        sendBuffSize: 200
        # Time to wait before pull engine processes incoming digests (unit: second)
        digestWaitTime: 1s
        # Time to wait before pull engine removes incoming nonce (unit: second)
        requestWaitTime: 1s
        # Time to wait before pull engine ends pull (unit: second)
        responseWaitTime: 2s
        # Alive check interval(unit: second)
        aliveTimeInterval: 5s
        # Alive expiration timeout(unit: second)
        aliveExpirationTimeout: 25s
        # Reconnect interval(unit: second)
        reconnectInterval: 25s
        # This is an endpoint that is published to peers outside of the organization.
        # If this isn't set, the peer will not be known to other organizations.
        externalEndpoint:
        # Leader election service configuration
        election:
            # Longest time peer waits for stable membership during leader election startup (unit: second)
            startupGracePeriod: 15s
            # Interval gossip membership samples to check its stability (unit: second)
            membershipSampleInterval: 1s
            # Time passes since last declaration message before peer decides to perform leader election (unit: second)
            leaderAliveThreshold: 10s
            # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second)
            leaderElectionDuration: 5s

    # EventHub related configuration
    events:
        # The address that the Event service will be enabled on the peer
        address: 0.0.0.0:7053

        # total number of events that could be buffered without blocking send
        buffersize: 100

        # timeout duration for producer to send an event.
        # if < 0, if buffer full, unblocks immediately and not send
        # if 0, if buffer full, will block and guarantee the event will be sent out
        # if > 0, if buffer full, blocks till timeout
        timeout: 10ms

    # TLS Settings
    # Note that peer-chaincode connections through chaincodeListenAddress is
    # not mutual TLS auth. See comments on chaincodeListenAddress for more info
    tls:
        enabled:  false
        cert:
            file: tls/server.crt
        key:
            file: tls/server.key
        rootcert:
            file: tls/ca.crt

        # The server name used to verify the hostname returned by TLS handshake
        serverhostoverride:

    # Path on the file system where peer will store data (eg ledger). This
    # location must be access control protected to prevent unintended
    # modification that might corrupt the peer operations.
    fileSystemPath: /var/hyperledger/production

    # BCCSP (Blockchain crypto provider): Select which crypto implementation or
    # library to use
    BCCSP:
        Default: SW
        SW:
            # TODO: The default Hash and Security level needs refactoring to be
            # fully configurable. Changing these defaults requires coordination
            # SHA2 is hardcoded in several places, not only BCCSP
            Hash: SHA2
            Security: 256
            # Location of Key Store
            FileKeyStore:
                # If "", defaults to 'mspConfigPath'/keystore
                # TODO: Ensure this is read with fabric/core/config.GetPath() once ready
                KeyStore:

    # Path on the file system where peer will find MSP local configurations
    mspConfigPath: msp

    # Identifier of the local MSP
    # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!----
    # Deployers need to change the value of the localMspId string.
    # In particular, the name of the local MSP ID of a peer needs
    # to match the name of one of the MSPs in each of the channel
    # that this peer is a member of. Otherwise this peer's messages
    # will not be identified as valid by other nodes.
    localMspId: DEFAULT

    # Used with Go profiling tools only in non-production environments. In
    # production, it should be disabled (eg enabled: false)
    profile:
        enabled:     false
        listenAddress: 0.0.0.0:6060

###############################################################################
#
#    VM section
#
###############################################################################
vm:

    # Endpoint of the vm management system.  For docker can be one of the following in general
    # unix:///var/run/docker.sock
    # http://localhost:2375
    # https://localhost:2376
    endpoint: unix:///var/run/docker.sock

    # settings for docker vms
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key

        # Enables/disables the standard out/err from chaincode containers for
        # debugging purposes
        attachStdout: false

        # Parameters on creating docker container.
        # Container may be efficiently created using ipam & dns-server for cluster
        # NetworkMode - sets the networking mode for the container. Supported
        # standard values are: `host`(default),`bridge`,`ipvlan`,`none`.
        # Dns - a list of DNS servers for the container to use.
        # Note:  `Privileged` `Binds` `Links` and `PortBindings` properties of
        # Docker Host Config are not supported and will not be used if set.
        # LogConfig - sets the logging driver (Type) and related options
        # (Config) for Docker. For more info,
        # https://docs.docker.com/engine/admin/logging/overview/
        # Note: Set LogConfig using Environment Variables is not supported.
        hostConfig:
            NetworkMode: host
            Dns:
               # - 192.168.0.1
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648

###############################################################################
#
#    Chaincode section
#
###############################################################################
chaincode:
    # This is used if chaincode endpoint resolution fails with the
    # chaincodeListenAddress property
    peerAddress:

    # The id is used by the Chaincode stub to register the executing Chaincode
    # ID with the Peer and is generally supplied through ENV variables
    # the `path` form of ID is provided when installing the chaincode.
    # The `name` is used for all other requests and can be any string.
    id:
        path:
        name:

    # Generic builder environment, suitable for most chaincode types
    builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION)

    golang:
        # golang will never need more than baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

    car:
        # car may need more facilities (JVM, etc) in the future as the catalog
        # of platforms are expanded.  For now, we can just use baseos
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)

    java:
        # This is an image based on java:openjdk-8 with additional compiler
        # tools added for java shim layer packaging.
        # This image is packed with shim layer libraries that are necessary
        # for Java chaincode runtime.
        Dockerfile:  |
            from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)

    # Timeout duration for starting up a container and waiting for Register
    # to come through. 1sec should be plenty for chaincode unit tests
    startuptimeout: 300s

    # Timeout duration for Invoke and Init calls to prevent runaway.
    # This timeout is used by all chaincodes in all the channels, including
    # system chaincodes.
    # Note that during Invoke, if the image is not available (e.g. being
    # cleaned up when in development environment), the peer will automatically
    # build the image, which might take more time. In production environment,
    # the chaincode image is unlikely to be deleted, so the timeout could be
    # reduced accordingly.
    executetimeout: 30s

    # There are 2 modes: "dev" and "net".
    # In dev mode, user runs the chaincode after starting peer from
    # command line on local machine.
    # In net mode, peer will run chaincode in a docker container.
    mode: net

    # keepalive in seconds. In situations where the communication goes through a
    # proxy that does not support keep-alive, this parameter will maintain connection
    # between peer and chaincode.
    # A value <= 0 turns keepalive off
    keepalive: 0

    # system chaincodes whitelist. To add system chaincode "myscc" to the
    # whitelist, add "myscc: enable" to the list below, and register in
    # chaincode/importsysccs.go
    system:
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable

    # Logging section for the chaincode container
    logging:
      # Default level for all loggers within the chaincode container
      level:  info
      # Override default level for the 'shim' module
      shim:   warning
      # Format for the chaincode container logs
      format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'

###############################################################################
#
#    Ledger section - ledger configuration encompasses both the blockchain
#    and the state
#
###############################################################################
ledger:

  blockchain:

  state:
    # stateDatabase - options are "goleveldb", "CouchDB"
    # goleveldb - default state database stored in goleveldb.
    # CouchDB - store state database in CouchDB
    stateDatabase: goleveldb
    couchDBConfig:
       # It is recommended to run CouchDB on the same server as the peer, and
       # not map the CouchDB container port to a server port in docker-compose.
       # Otherwise proper security must be provided on the connection between
       # CouchDB client (on the peer) and server.
       couchDBAddress: 127.0.0.1:5984
       # This username must have read and write authority on CouchDB
       username:
       # The password is recommended to pass as an environment variable
       # during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD).
       # If it is stored here, the file must be access control protected
       # to prevent unintended users from discovering the password.
       password:
       # Number of retries for CouchDB errors
       maxRetries: 3
       # Number of retries for CouchDB errors during peer startup
       maxRetriesOnStartup: 10
       # CouchDB request timeout (unit: duration, e.g. 20s)
       requestTimeout: 35s
       # Limit on the number of records to return per query
       queryLimit: 10000


  history:
    # enableHistoryDatabase - options are true or false
    # Indicates if the history of key updates should be stored.
    # All history 'index' will be stored in goleveldb, regardless if using
    # CouchDB or alternate database for the state.
    enableHistoryDatabase: true
```

### Generating the configuration

The files above are the four core Fabric configuration files; for the most part you only need to adjust the relevant parameters in them.

Next we generate the configuration artifacts with a script.

```shell
# Create the fabric directory
mkdir -p /opt/fabric
# Extract the relevant files
tar zxf /usr/local/software/hyperledger-fabric-linux-amd64-1.0.3.tar.gz -C /opt/fabric/

tar zxf /usr/local/software/fabric.tar.gz -C /opt/

######### The certificates and keys must be identical everywhere, so run the
######### following steps on a single machine, then distribute the results. begin

# Enter the fabric configs directory
cd /opt/fabric/configs/
# Run the generateArtifacts.sh script
../scripts/generateArtifacts.sh
# The script does two things:
# 1. Generates the keys and certificates based on crypto-config.yaml and stores
#    them in the crypto-config folder created in the current directory
# 2. Generates the genesis block and channel artifacts and stores them in the
#    channel-artifacts folder created in the current directory
# If you are curious, have a look at the generated directory structure; it is not listed here

# Distribute the whole fabric directory to all orderer and peer servers
scp -r /opt/fabric root@node"X":/opt/

######### end #########

# Files needed for the chaincode
mkdir -p ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

cp /usr/local/software/chaincode_example02.go ~/go/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02/
```
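As a quick sanity check (not part of the original write-up, just a hypothetical aid), you can confirm the chaincode source now sits exactly where `peer chaincode install -p` resolves its path, namely under `$GOPATH/src`:

```shell
# Assumption: GOPATH=/root/go (or ~/go) as exported in ~/.profile earlier
GOPATH=${GOPATH:-$HOME/go}
cc_path=github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02
# The install command later passes exactly $cc_path as its -p argument
[ -f "$GOPATH/src/$cc_path/chaincode_example02.go" ] \
    && echo "chaincode path ok" \
    || echo "chaincode source missing"
```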

The peer start script defaults to peer0.org1.local, so we need to modify it on the other three nodes.

node4  peer1.org1.local

```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
```

node5  peer0.org2.local

```shell
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```

node6  peer1.org2.local

```shell
sed -i 's/peer0/peer1/g' /opt/fabric/start-peer.sh
sed -i 's/org1/org2/g' /opt/fabric/start-peer.sh
sed -i 's/Org1/Org2/g' /opt/fabric/start-peer.sh
```
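The three substitutions for node6 can also be combined into a single `sed` call. A minimal demonstration on a throwaway file (the sample line below is hypothetical, for illustration only):

```shell
# Demonstration on a temp file: the same substitutions node6 applies to start-peer.sh
tmp=$(mktemp)
echo 'CORE_PEER_ID=peer0.org1.local CORE_PEER_LOCALMSPID=Org1MSP' > "$tmp"
sed -i 's/peer0/peer1/g; s/org1/org2/g; s/Org1/Org2/g' "$tmp"
cat "$tmp"   # -> CORE_PEER_ID=peer1.org2.local CORE_PEER_LOCALMSPID=Org2MSP
rm -f "$tmp"
```

Note that the lowercase `org1` pattern only rewrites the domain names, while the capitalized `Org1` pattern rewrites the MSP ID, so the order of the three substitutions does not matter here.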

At this point the peer environment is ready. We will not start the peers yet; they are started after the orderers are up and running.

## Installing the orderer

node1, node2

```shell
# The file defaults to the node1 (orderer1) start script, so the node2 (orderer2)
# start script needs to be modified first. On node2, run:
# sed -i 's/orderer1/orderer2/g' /opt/fabric/start-orderer.sh
# Start the orderer
/opt/fabric/start-orderer.sh
# Watch the log
tail -99f ~/orderer.log
```

Since kafka and the orderers are already running, we can now start the peers.

```shell
# Start all four peers
/opt/fabric/start-peer.sh
# Watch the log
tail -99f ~/peer.log
```

## Installing and running the chaincode

With that, the whole Fabric network is ready. Next we create a channel and install and run the chaincode. The example implements two accounts, a and b, which can transfer money to each other.

First, run the following commands on one of the nodes (peer0.org1.local):

```shell
cd ~

# Set the CLI environment variables
export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org1.local
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/users/[email protected]/msp
export CORE_PEER_ADDRESS=peer0.org1.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org1.local/peers/peer0.org1.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer1.local/msp/tlscacerts/tlsca.orderer.local-cert.pem


# Create the channel
# This writes a mychannel.block file into the current directory. The file matters:
# any other node that wants to join this channel must use it, so distribute it to
# the other peers.
$FABRIC_ROOT/bin/peer channel create -o orderer1.local:7050 -f $FABRIC_CFG_PATH/channel-artifacts/channel.tx -c mychannel -t 30 --tls true --cafile $ordererCa

# Join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Update the anchor peer. I still have not fully understood this step; even without
# setting an anchor peer the network keeps working.
$FABRIC_ROOT/bin/peer channel update -o orderer1.local:7050 -c mychannel -f $FABRIC_CFG_PATH/channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls true --cafile $ordererCa

# Install the chaincode
# The chaincode has to be installed on every peer that uses it. In this network,
# if all 4 peers want to operate on Example02, it must be installed 4 times.
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# Instantiate the chaincode
# Instantiation wraps the installed chaincode on the peer's machine, building the
# Docker image and container for the channel. The endorsement policy can also be
# specified here. Note the version must match the installed version, hence -v 1.0.
$FABRIC_ROOT/bin/peer chaincode instantiate -o orderer1.local:7050 --tls true --cafile $ordererCa -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "OR  ('Org1MSP.member','Org2MSP.member')"

# The chaincode instance now exists, with account a initialized to 100 and b to 200.
# Verify this by calling the chaincode's query function
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# Returns: Query Result: 100

# Transfer 10 from account a to account b
$FABRIC_ROOT/bin/peer chaincode invoke -o orderer1.local:7050  --tls true --cafile $ordererCa -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'
```
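The anchor-update step above builds the transaction filename from the MSP ID through plain shell variable expansion, so each org automatically picks up its own artifact when the environment variables change. A quick illustration (no Fabric binaries involved):

```shell
# How the anchor tx path is assembled from the CLI environment variables
FABRIC_CFG_PATH=/opt/fabric/configs
CORE_PEER_LOCALMSPID=Org1MSP
echo "${FABRIC_CFG_PATH}/channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx"
# -> /opt/fabric/configs/channel-artifacts/Org1MSPanchors.tx
```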

Querying from another node (peer0.org2.local)

Everything so far was done under org1. Will org2, which joins the same chain (mychannel), see org1's changes? Let's join peer0.org2.local to mychannel and install the chaincode there.

```shell
# Set the CLI environment variables

export FABRIC_ROOT=/opt/fabric
export FABRIC_CFG_PATH=/opt/fabric/configs

export CORE_LOGGING_LEVEL=DEBUG
export CORE_LOGGING_PEER=debug
export CORE_LOGGING_FORMAT='%{color}%{time:15:04:05.000} [%{module}] [%{longfile}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
export CORE_CHAINCODE_LOGGING_LEVEL=DEBUG

export CORE_PEER_ID=peer0.org2.local
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_MSPCONFIGPATH=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/users/[email protected]/msp
export CORE_PEER_ADDRESS=peer0.org2.local:7051

export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/ca.crt
export CORE_PEER_TLS_KEY_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.key
export CORE_PEER_TLS_CERT_FILE=$FABRIC_CFG_PATH/crypto-config/peerOrganizations/org2.local/peers/peer0.org2.local/tls/server.crt

export ordererCa=$FABRIC_CFG_PATH/crypto-config/ordererOrganizations/orderer.local/orderers/orderer2.local/msp/tlscacerts/tlsca.orderer.local-cert.pem

# Enter the directory holding the distributed mychannel.block and join the channel
$FABRIC_ROOT/bin/peer channel join -b mychannel.block

# Install the chaincode
$FABRIC_ROOT/bin/peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# The chaincode was already instantiated under org1, i.e. the corresponding block
# already exists, so it must not be instantiated again under org2. Query directly:
$FABRIC_ROOT/bin/peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# The command takes quite a while before it returns:
# Query Result: 90
# That is because peer0.org2 also has to build the Docker image and create the
# corresponding container before the result can come back through the container.
# Run docker ps -a and you will see one more container.
```

The other nodes work the same way; just remember to change the environment variables.

## Cleanup

```shell
# Clean up peer data
# Remove the docker containers and images
docker rm 節點所生成的容器id    # find the container id with docker ps -a
docker rmi 節點生成的鏡像id     # find the image id with docker images

rm -rf /var/hyperledger/*

# Clean up orderer data
/opt/fabric/stop-orderer.sh
rm -rf /var/hyperledger/*

# Clean up kafka data
# Stop kafka
kafka-server-stop.sh
rm -rf /data/kafka-logs/*
# Clean up zookeeper data (zookeeper must be running):
/usr/local/zookeeper-3.4.10/bin/zkCli.sh

rmr /brokers
rmr /admin
rmr /config
rmr /isr_change_notification
quit

# Stop zookeeper
zookeeper-server-stop.sh
# kafka often fails to stop; find the process with jps and kill it
rm -rf /data/zookeeper/version*
```
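When `kafka-server-stop.sh` fails, the stray broker JVM can be found and killed by hand. A minimal sketch, assuming the broker's command line contains the main class `kafka.Kafka` (as it does for this kafka distribution):

```shell
# Find the Kafka broker JVM by its main class and kill it if present
pid=$(pgrep -f 'kafka\.Kafka' || true)
if [ -n "$pid" ]; then
    kill "$pid"
    echo "killed kafka pid $pid"
else
    echo "no kafka process found"
fi
```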
