Fabric 1.3.1, deployed fully by hand on 5 machines. Covers Kafka-mode consensus, CouchDB state storage, and the use of Fabric CA and Fabric Explorer.

參考文檔
https://hyperledger-fabric.readthedocs.io/en/release-1.3/
https://www.lijiaocn.com/項目/2018/04/26/hyperledger-fabric-deploy.html
https://hyperledgercn.github.io/hyperledgerDocs/

系統環境:centos 7 64位
docker
docker-compose

A. Fabric 1.3.1 的安裝

一. 安裝docker

sudo yum -y remove docker docker-common container-selinux
sudo yum -y remove docker-selinux

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update

yum install docker-engine

systemctl enable docker

systemctl restart docker

二. 安裝docker-compose

docker-compose is a tool for defining and starting multi-container Docker applications with a single command.
Binary releases:
https://github.com/docker/compose/releases
Installation guide:
https://docs.docker.com/compose/install/
Install it as follows:

curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose


docker-compose -v

三.準備環境。

IP host
192.168.188.110 cli.alcor.com
192.168.188.111 kafka.alcor.com
192.168.188.112 ca.alcor.com
192.168.188.113 explorer.alcor.com
192.168.188.120 orderer.alcor.com
192.168.188.221 peer0.org1.alcor.com
192.168.188.222 peer1.org1.alcor.com
192.168.188.223 peer0.org2.alcor.com
192.168.188.224 peer1.org2.alcor.com

Add host entries for these IPs to /etc/hosts on every machine:

vim /etc/hosts

192.168.188.110   cli.alcor.com
192.168.188.111   kafka.alcor.com
192.168.188.112   ca.alcor.com
192.168.188.113   explorer.alcor.com
192.168.188.120   orderer.alcor.com
192.168.188.221   peer0.org1.alcor.com
192.168.188.222   peer1.org1.alcor.com
192.168.188.223   peer0.org2.alcor.com
192.168.188.224   peer1.org2.alcor.com

工作目錄是 /root/fabric
在/root/fabric目錄下建立2個子目錄

  • /root/fabric/fabric-deploy 存放部署和配置內容
  • /root/fabric/fabric-images 存放自己製作的 docker images

四.安裝 kafka 和 zookeeper

我在這裏使用 docker-compose 安裝 zookeeper 和 kafka(3個 kafka 節點) 環境

配置文件存放在
/Users/roamer/Documents/Docker/本地虛擬機/kafka 目錄下

kafka 測試流程參考文檔:
kafka 的使用

五.下載 fabric 1.3.1

對應網站查看版本信息
https://nexus.hyperledger.org/#nexus-search;quick~fabric 1.3

1. 下載文件自己安裝

#登錄 cli 主機
mkdir -p /root/fabric/fabric-deploy 
cd  ~/fabric/fabric-deploy
wget https://nexus.hyperledger.org/service/local/repositories/releases/content/org/hyperledger/fabric/hyperledger-fabric-1.3.1-stable/linux-amd64.1.3.1-stable-ce1bd72/hyperledger-fabric-1.3.1-stable-linux-amd64.1.3.1-stable-ce1bd72.tar.gz

2. 用 md5sum 命令進行文件校驗
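Assuming the Nexus page lists a checksum for the artifact, a quick check looks like this (the filename matches the download from the previous step; compare the printed hash with the published value):

cd ~/fabric/fabric-deploy
# print the MD5 of the downloaded archive and compare it with the checksum shown on the download page
md5sum hyperledger-fabric-1.3.1-stable-linux-amd64.1.3.1-stable-ce1bd72.tar.gz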

3. 解壓fabric

tar -xvf hyperledger-fabric-1.3.1-stable-linux-amd64.1.3.1-stable-ce1bd72.tar.gz

4. 理解 bin 目錄和 config 目錄下的文件
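A quick way to see what was unpacked (the listings below reflect what the 1.3.x release archives normally contain and may differ slightly):

cd ~/fabric/fabric-deploy
ls bin/     # configtxgen  configtxlator  cryptogen  discover  idemixgen  orderer  peer
ls config/  # configtx.yaml  core.yaml  orderer.yaml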

六. hyperledger 的證書準備

證書的準備方式有兩種,一種用cryptogen命令生成,一種是通過fabric-ca服務生成。

1. 通過cryptogen 來生成

創建一個配置文件crypto-config.yaml,這裏配置了兩個組織,org1和 org2的Template 的 Count是2,表示各自兩個peer。

vim crypto-config.yaml
    
#文件內容如下:
OrdererOrgs:
  - Name: Orderer
    Domain: alcor.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.alcor.com
    Template:
      Count: 2
    Users:
      Count: 2
  - Name: Org2
    Domain: org2.alcor.com
    Template:
      Count: 2
    Users:
      Count: 2

生成證書, 所有的文件存放在 /root/fabric/fabric-deploy/certs 目錄下

cd /root/fabric/fabric-deploy
./bin/cryptogen generate --config=crypto-config.yaml --output ./certs
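cryptogen writes one subtree per organization. A rough sketch of the layout to expect (names follow the crypto-config.yaml above; the exact tree can vary slightly between versions):

ls certs/
# ordererOrganizations  peerOrganizations
ls certs/peerOrganizations/org1.alcor.com/
# ca  msp  peers  tlsca  users
ls certs/peerOrganizations/org1.alcor.com/peers/
# peer0.org1.alcor.com  peer1.org1.alcor.com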

2. 通過 ca 服務來生成

在後續章節進行介紹

七. hyperledger fabric 中的Orderer 配置和安裝文件的準備

1. 建立一個存放orderer 配置文件的目錄,用於以後複製到 orderer 主機上直接運行 orderer(支持 kafka)

cd /root/fabric/fabric-deploy
mkdir orderer.alcor.com
cd orderer.alcor.com

2. 先將bin/orderer以及證書複製到orderer.alcor.com目錄中。

cd /root/fabric/fabric-deploy
cp ./bin/orderer orderer.alcor.com
cp -rf ./certs/ordererOrganizations/alcor.com/orderers/orderer.alcor.com/* ./orderer.alcor.com/

3. 然後準備orderer的配置文件orderer.alcor.com/orderer.yaml

vi /root/fabric/fabric-deploy/orderer.alcor.com/orderer.yaml
#內容如下
General:
    LedgerType: file
    ListenAddress: 0.0.0.0
    ListenPort: 7050
    TLS:
        Enabled: true
        PrivateKey: ./tls/server.key
        Certificate: ./tls/server.crt
        RootCAs:
          - ./tls/ca.crt
#        ClientAuthEnabled: false
#        ClientRootCAs:
    LogLevel: debug
    LogFormat: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
#    GenesisMethod: provisional
    GenesisMethod: file
    GenesisProfile: SampleInsecureSolo
    GenesisFile: ./genesisblock
    LocalMSPDir: ./msp
    LocalMSPID: OrdererMSP
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
FileLedger:
    Location:  /opt/fabric/orderer/data
    Prefix: hyperledger-fabric-ordererledger
RAMLedger:
    HistorySize: 1000
Kafka:
    Retry:
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        Consumer:
            RetryBackoff: 2s
    Verbose: false
    TLS:
      Enabled: false
      PrivateKey:
        #File: path/to/PrivateKey
      Certificate:
        #File: path/to/Certificate
      RootCAs:
        #File: path/to/RootCAs
    Version:

注意,orderer將被部署在目標機器(orderer.alcor.com)的/opt/fabric/orderer目錄中,如果要部署在其它目錄中,需要修改配置文件中路徑。

4. 這裏需要用到一個data目錄,存放orderer的數據:

mkdir -p /root/fabric/fabric-deploy/orderer.alcor.com/data

5. 創建一個啓動 orderer 的批處理文件

vi  /root/fabric/fabric-deploy/orderer.alcor.com/startOrderer.sh

在startOrderer.sh 中輸入如下內容

#!/bin/bash
cd /opt/fabric/orderer
./orderer 2>&1 |tee log

修改成可以執行文件

chmod +x  /root/fabric/fabric-deploy/orderer.alcor.com/startOrderer.sh

八. hyperledger fabric 中的Peer 配置和安裝文件的準備

建立4個存放peer 配置信息的目錄

1. 先設置 peer0.org1.alcor.com

mkdir -p  /root/fabric/fabric-deploy/peer0.org1.alcor.com
a. 複製 peer 執行文件和證書文件
cd /root/fabric/fabric-deploy
cp bin/peer peer0.org1.alcor.com/
cp -rf certs/peerOrganizations/org1.alcor.com/peers/peer0.org1.alcor.com/* peer0.org1.alcor.com/
注意: 一定要複製對應的 peer 和 org 的目錄。否則會出現各種錯誤
b. 生成 peer0.org1.alcor.com 的core.yaml 文件
This core.yaml is adapted for Fabric 1.3.1 and is not compatible with Fabric 1.2 or earlier; it also uses CouchDB instead of the default LevelDB.
vi /root/fabric/fabric-deploy/peer0.org1.alcor.com/core.yaml
#內容如下:
logging:
    level:      info
    cauthdsl:   warning
    gossip:     warning
    grpc:       error
    ledger:     info
    msp:        warning
    policies:   warning
    peer:
        gossip: warning
    
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
    
peer:
    
    id: peer0.org1.alcor.com
    
    networkId: dev
    
    listenAddress: 0.0.0.0:7051
    
    address: 0.0.0.0:7051
    
    addressAutoDetect: false
    
    gomaxprocs: -1
    
    keepalive:
        minInterval: 60s
        client:
            interval: 60s
            timeout: 20s
        deliveryClient:
            interval: 60s
            timeout: 20s
    
    gossip:
        bootstrap: peer0.org1.alcor.com:7051
    
        useLeaderElection: true
        orgLeader: false
    
        endpoint:
        maxBlockCountToStore: 100
        maxPropagationBurstLatency: 10ms
        maxPropagationBurstSize: 10
        propagateIterations: 1
        propagatePeerNum: 3
        pullInterval: 4s
        pullPeerNum: 3
        requestStateInfoInterval: 4s
        publishStateInfoInterval: 4s
        stateInfoRetentionInterval:
        publishCertPeriod: 10s
        skipBlockVerification: false
        dialTimeout: 3s
        connTimeout: 2s
        recvBuffSize: 20
        sendBuffSize: 200
        digestWaitTime: 1s
        requestWaitTime: 1500ms
        responseWaitTime: 2s
        aliveTimeInterval: 5s
        aliveExpirationTimeout: 25s
        reconnectInterval: 25s
        externalEndpoint:
        election:
            startupGracePeriod: 15s
            membershipSampleInterval: 1s
            leaderAliveThreshold: 10s
            leaderElectionDuration: 5s
        pvtData:
            pullRetryThreshold: 60s
            transientstoreMaxBlockRetention: 1000
            pushAckTimeout: 3s
            btlPullMargin: 10
            reconcileBatchSize: 10
            reconcileSleepInterval: 5m
    
    tls:
        enabled:  true
        clientAuthRequired: false
        cert:
            file: tls/server.crt
        key:
            file: tls/server.key
        rootcert:
            file: tls/ca.crt
        clientRootCAs:
            files:
              - tls/ca.crt
        clientKey:
            file:
        clientCert:
            file:
    
    authentication:
        timewindow: 15m
    
    fileSystemPath: /var/hyperledger/production
    
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
        PKCS11:
            Library:
            Label:
            Pin:
            Hash:
            Security:
            FileKeyStore:
                KeyStore:
    
    mspConfigPath: msp
    
    localMspId: Org1MSP
    
    client:
        connTimeout: 3s
    
    deliveryclient:
        reconnectTotalTimeThreshold: 3600s
    
        connTimeout: 3s
    
        reConnectBackoffThreshold: 3600s
    
    localMspType: bccsp
    
    profile:
        enabled:     false
        listenAddress: 0.0.0.0:6060
    adminService:
    handlers:
        authFilters:
          -
            name: DefaultAuth
          -
            name: ExpirationCheck    # This filter checks identity x509 certificate expiration
        decorators:
          -
            name: DefaultDecorator
        endorsers:
          escc:
            name: DefaultEndorsement
            library:
        validators:
          vscc:
            name: DefaultValidation
            library:
    validatorPoolSize:
    discovery:
        enabled: true
        authCacheEnabled: true
        authCacheMaxSize: 1000
        authCachePurgeRetentionRatio: 0.75
        orgMembersAllowedAccess: false
    
vm:
    endpoint: unix:///var/run/docker.sock
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key
        attachStdout: false
        hostConfig:
            NetworkMode: host
            Dns:
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648
    
    
chaincode:
    id:
        path:
        name:
    
    builder: $(DOCKER_NS)/fabric-ccenv:latest
    pull: false
    
    golang:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
        dynamicLink: false
    
    car:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
    
    java:
        runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
    
    node:
        runtime: $(BASE_DOCKER_NS)/fabric-baseimage:$(ARCH)-$(BASE_VERSION)
    startuptimeout: 300s
    
    executetimeout: 30s
    mode: net
    keepalive: 0
    system:
        +lifecycle: enable
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable
    systemPlugins:
    logging:
      level:  info
      shim:   warning
      format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
    
    
ledger:
    
  blockchain:
    
  state:
    stateDatabase: CouchDB     #goleveldb
    totalQueryLimit: 100000
    couchDBConfig:
       couchDBAddress: 127.0.0.1:5984
       username:    admin
       password:    password
       maxRetries: 3
       maxRetriesOnStartup: 10
       requestTimeout: 35s
       internalQueryLimit: 1000
       maxBatchUpdateSize: 1000
       warmIndexesAfterNBlocks: 1
       createGlobalChangesDB: false
    
  history:
    enableHistoryDatabase: true
    
    
metrics:
    enabled: false
    reporter: statsd
    interval: 1s
    statsdReporter:
          address: 0.0.0.0:8125
          flushInterval: 2s
          flushBytes: 1432
    promReporter:
          listenAddress: 0.0.0.0:8080

c. 建立 data 目錄
mkdir -p /root/fabric/fabric-deploy/peer0.org1.alcor.com/data
d. 創建啓動的批處理文件
vi  /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh

在文件中輸入以下內容:

#!/bin/bash
cd /opt/fabric/peer
./peer node start 2>&1 |tee log

設置爲可執行文件

chmod +x /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh

2. 設置 peer1.org1.alcor.com

mkdir -p /root/fabric/fabric-deploy/peer1.org1.alcor.com
a.複製 peer 執行文件和證書文件
cd /root/fabric/fabric-deploy
cp bin/peer     peer1.org1.alcor.com/
cp -rf certs/peerOrganizations/org1.alcor.com/peers/peer1.org1.alcor.com/* peer1.org1.alcor.com/
b. Copy core.yaml from peer0.org1.alcor.com into peer1.org1.alcor.com and replace every peer0.org1.alcor.com inside it with peer1.org1.alcor.com, using sed:
cd /root/fabric/fabric-deploy
cp peer0.org1.alcor.com/core.yaml  peer1.org1.alcor.com
sed -i "s/peer0.org1.alcor.com/peer1.org1.alcor.com/g" peer1.org1.alcor.com/core.yaml
c.建立 data 目錄
mkdir -p /root/fabric/fabric-deploy/peer1.org1.alcor.com/data
d. Copy the startPeer.sh file
cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  peer1.org1.alcor.com/

3.設置 peer0.org2.alcor.com

    mkdir -p /root/fabric/fabric-deploy/peer0.org2.alcor.com
a. 複製 peer 執行文件和證書文件
cd /root/fabric/fabric-deploy
cp bin/peer     peer0.org2.alcor.com/
cp -rf certs/peerOrganizations/org2.alcor.com/peers/peer0.org2.alcor.com/* peer0.org2.alcor.com/
b. Copy core.yaml from peer0.org1.alcor.com into peer0.org2.alcor.com and replace every peer0.org1.alcor.com inside it with peer0.org2.alcor.com, using sed:
cd /root/fabric/fabric-deploy
cp peer0.org1.alcor.com/core.yaml  peer0.org2.alcor.com
sed -i "s/peer0.org1.alcor.com/peer0.org2.alcor.com/g" peer0.org2.alcor.com/core.yaml
c. 將配置文件中Org1MSP替換成Org2MSP:
sed -i "s/Org1MSP/Org2MSP/g" peer0.org2.alcor.com/core.yaml    
d.建立 data 目錄
mkdir -p /root/fabric/fabric-deploy/peer0.org2.alcor.com/data
e. Copy the startPeer.sh file
cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  peer0.org2.alcor.com/

4. 設置 peer1.org2.alcor.com

mkdir -p /root/fabric/fabric-deploy/peer1.org2.alcor.com
a. 複製 peer 執行文件和證書文件
cd /root/fabric/fabric-deploy
cp bin/peer     peer1.org2.alcor.com/
cp -rf certs/peerOrganizations/org2.alcor.com/peers/peer1.org2.alcor.com/* peer1.org2.alcor.com/
b. Copy core.yaml from peer0.org1.alcor.com into peer1.org2.alcor.com and replace every peer0.org1.alcor.com inside it with peer1.org2.alcor.com, using sed:
cd /root/fabric/fabric-deploy
cp peer0.org1.alcor.com/core.yaml  peer1.org2.alcor.com
sed -i "s/peer0.org1.alcor.com/peer1.org2.alcor.com/g" peer1.org2.alcor.com/core.yaml
c. 將配置文件中Org1MSP替換成Org2MSP:
sed -i "s/Org1MSP/Org2MSP/g" peer1.org2.alcor.com/core.yaml    
d. 建立 data 目錄
mkdir -p /root/fabric/fabric-deploy/peer1.org2.alcor.com/data
e. Copy the startPeer.sh file
cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  peer1.org2.alcor.com/

九. Deploying the prepared orderer and peer configuration to the target machines

Copy the prepared orderer and peer configuration to the target machines.
Everything was prepared on cli.alcor.com, so the following steps copy it to the corresponding hosts; as assumed by the configuration files, the target location is /opt/fabric on each machine.

1. 複製到 orderer.alcor.com

# 在 orderer.alcor.com 機器上建立 /opt/fabric/orderer 目錄
mkdir -p /opt/fabric/orderer
#回到 cli.alcor.com機器上,把 orderer的配置文件複製過去
cd /root/fabric/fabric-deploy
scp -r orderer.alcor.com/* [email protected]:/opt/fabric/orderer/

2. 複製到peer0.org1.alcor.com

# 在 peer0.org1.alcor.com 機器上建立 /opt/fabric/peer 目錄
mkdir -p /opt/fabric/peer
#回到 cli.alcor.com機器上,把 peer0.org1.alcor.com的配置文件複製過去
cd /root/fabric/fabric-deploy
scp -r peer0.org1.alcor.com/* [email protected]:/opt/fabric/peer/

3. 複製到peer1.org1.alcor.com

# 在 peer1.org1.alcor.com 機器上建立 /opt/fabric/peer 目錄
 mkdir -p /opt/fabric/peer
 #回到 cli.alcor.com機器上,把 peer1.org1.alcor.com 的配置文件複製過去
 cd /root/fabric/fabric-deploy
 scp -r peer1.org1.alcor.com/* [email protected]:/opt/fabric/peer/

4. 複製到peer0.org2.alcor.com

# 在 peer0.org2.alcor.com 機器上建立 /opt/fabric/peer 目錄
mkdir -p /opt/fabric/peer
#回到 cli.alcor.com機器上,把 peer0.org2.alcor.com的配置文件複製過去
cd /root/fabric/fabric-deploy
scp -r peer0.org2.alcor.com/* [email protected]:/opt/fabric/peer/

5. 複製到peer1.org2.alcor.com

# 在 peer1.org2.alcor.com 機器上建立 /opt/fabric/peer 目錄
mkdir -p /opt/fabric/peer
#回到 cli.alcor.com機器上,把 peer1.org2.alcor.com的配置文件複製過去
cd /root/fabric/fabric-deploy
scp -r peer1.org2.alcor.com/* [email protected]:/opt/fabric/peer/

十. 準備創世紀區塊 genesisblock(kafka 模式)

1. 在 cli 機器的 /root/fabric/fabric-deploy/目錄下,準備創世紀塊的生成配置文件 configtx.yaml

vi /root/fabric/fabric-deploy/configtx.yaml
    
#文件內容如下:
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: ./certs/ordererOrganizations/alcor.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: ./certs/peerOrganizations/org1.alcor.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('Org1MSP.admin')"
        AnchorPeers:
            - Host: peer0.org1.alcor.com
              Port: 7051
    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: ./certs/peerOrganizations/org2.alcor.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('Org2MSP.admin')"
        AnchorPeers:
            - Host: peer0.org2.alcor.com
              Port: 7051
    
Capabilities:
    Channel: &ChannelCapabilities
        V1_3: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_3: true
        V1_2: false
        V1_1: false
    
Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ApplicationCapabilities    
    
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer.alcor.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - kafka.alcor.com:9092       # 可以填入多個kafka節點的地址
            - kafka.alcor.com:9093
            - kafka.alcor.com:9094
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
    Capabilities:
        <<: *OrdererCapabilities
    
Channel: &ChannelDefaults
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ChannelCapabilities
    
Profiles:
    TwoOrgsOrdererGenesis:
        <<: *ChannelDefaults
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2

踩坑
此版本是 fabric 1.3.1版本下使用的配置文件。不向下兼容(不能用在1.2和之前的版本)。

2. 生成創世紀區塊

cd /root/fabric/fabric-deploy
./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./genesisblock -channelID genesis

生成創世紀區塊文件 genesisblock ,並且指定創世區塊的 channel id 是 genesis
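To sanity-check the result, configtxgen can decode the block back to JSON on stdout:

cd /root/fabric/fabric-deploy
# dump the decoded genesis block; the consortium, the two org MSPs and the kafka brokers should all appear
./bin/configtxgen -inspectBlock ./genesisblock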

3. Then copy the genesisblock file to the orderer.alcor.com machine

#登錄到 cli 主機
cd /root/fabric/fabric-deploy
scp ./genesisblock  [email protected]:/opt/fabric/orderer

十一. 啓動 orderer 和 peer

1. 啓動 orderer

# on the orderer.alcor.com host, go to /opt/fabric/orderer and start the orderer as a background process
cd /opt/fabric/orderer
nohup ./startOrderer.sh &

啓動成功後,可以去任意一臺 kafka 服務器上的控制檯查看 topic 列表,是否有一個 genesis 的 channel。

/opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --zookeeper 192.168.188.111:2181 --list

2. 在4個 peer 上安裝 couchDB

詳細介紹查看 :
fabric peer 節點使用 CouchDB 來替換 LevelDB.
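As one possible shortcut, assuming Docker is available on each peer host and using the hyperledger/fabric-couchdb image (the image name and tag here are assumptions, not part of the original deployment), a single container matching the couchDBConfig values in core.yaml could look like this:

# run on each of the 4 peer hosts; address and credentials must match couchDBConfig in core.yaml
docker run -d --name couchdb \
    -p 5984:5984 \
    -e COUCHDB_USER=admin \
    -e COUCHDB_PASSWORD=password \
    hyperledger/fabric-couchdb:latest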

3. 啓動4個 peer

#on each of the 4 peer hosts, go to /opt/fabric/peer
#and start the peer as a background process
cd /opt/fabric/peer
nohup ./startPeer.sh &

4. 把 peer 主機上的 peer 進程註冊成開機啓動

在/etc/init.d 目錄下建立一個 autoRunPeer.sh 文件。並且修改成可執行權限。
文件內容如下:

#!/bin/sh
#chkconfig: 2345 80 90 
#表示在2/3/4/5運行級別啓動,啓動序號(S80),關閉序號(K90); 
/usr/bin/nohup /opt/fabric/peer/startPeer.sh &

添加腳本到開機自動啓動項目中

chkconfig --add autoRunPeer.sh
chkconfig autoRunPeer.sh on

5. 把 orderer 主機上的 orderer 進程註冊成開機啓動

在/etc/init.d 目錄下建立一個 autoRunOrderer.sh 文件。並且修改成可執行權限。
文件內容如下:

#!/bin/sh
#chkconfig: 2345 80 90
#description: start the fabric orderer at boot (runlevels 2/3/4/5, start priority 80, stop priority 90)
/usr/bin/nohup /opt/fabric/orderer/startOrderer.sh &

添加腳本到開機自動啓動項目中

chkconfig --add autoRunOrderer.sh
chkconfig autoRunOrderer.sh on

十二. 用戶賬號創建

1. 在 cli 機器上建立存放用戶賬號信息的目錄

cd  /root/fabric/fabric-deploy
mkdir users 
cd users

2. 創立 org1的Admin 用戶信息(對應到 peer0.org1.alcor.com 的節點)

a. 創建用於保存 org1 的 Admin 用戶信息的目錄
cd /root/fabric/fabric-deploy/users
mkdir [email protected]
cd  [email protected]
b. 複製[email protected]用戶的證書
cp -rf  /root/fabric/fabric-deploy/certs/peerOrganizations/org1.alcor.com/users/[email protected]/* /root/fabric/fabric-deploy/users/[email protected]/
    
c. 複製peer0.org1.alcor.com的配置文件(對應到 peer0.org1.alcor.com 的節點)
cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/core.yaml  /root/fabric/fabric-deploy/users/[email protected]/
d. 創建測試腳本(peer.sh)
#!/bin/bash
cd "/root/fabric/fabric-deploy/users/[email protected]"
PATH=`pwd`/../../bin:$PATH
export FABRIC_CFG_PATH=`pwd`
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=./tls/client.crt
export CORE_PEER_TLS_KEY_FILE=./tls/client.key
export CORE_PEER_MSPCONFIGPATH=./msp
export CORE_PEER_ADDRESS=peer0.org1.alcor.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=./tls/ca.crt
export CORE_PEER_ID=peer0.org1.alcor.com
export CORE_LOGGING_LEVEL=DEBUG
peer $*

注意:
其中的 pwd 工作目錄 和 CORE_PEER_ADDRESS , CORE_PEER_LOCALMSPID 要和 peer0.org1.alcor.com 節點對應

e. Run peer.sh to check the status of node peer0.org1.alcor.com
./peer.sh node status


3. 創立 org1的 User1 用戶信息 (對應到 peer1.org1.alcor.com 的節點)

a. Create the directory for org1's User1 information (it maps to the peer1.org1.alcor.com node).
Note that the Admin user certificate is actually used here; with the User1 certificate, peer node status fails with: Error trying to connect to local peer: rpc error: code = Unknown desc = access denied
cd /root/fabric/fabric-deploy/users
mkdir [email protected]
cd  [email protected]
b. 複製[email protected]用戶的證書
cp -rf  /root/fabric/fabric-deploy/certs/peerOrganizations/org1.alcor.com/users/[email protected]/* /root/fabric/fabric-deploy/users/[email protected]/
    
c. Copy the core.yaml of peer1.org1.alcor.com (it maps to the peer1.org1.alcor.com node)
cp /root/fabric/fabric-deploy/peer1.org1.alcor.com/core.yaml  /root/fabric/fabric-deploy/users/[email protected]/
d. 創建測試腳本(peer.sh)
#!/bin/bash
cd "/root/fabric/fabric-deploy/users/[email protected]"
PATH=`pwd`/../../bin:$PATH
export FABRIC_CFG_PATH=`pwd`
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=./tls/client.crt
export CORE_PEER_TLS_KEY_FILE=./tls/client.key
export CORE_PEER_MSPCONFIGPATH=./msp
export CORE_PEER_ADDRESS=peer1.org1.alcor.com:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=./tls/ca.crt
export CORE_PEER_ID=peer1.org1.alcor.com
export CORE_LOGGING_LEVEL=DEBUG
peer $*

注意:
其中的 pwd 工作目錄 和 CORE_PEER_ADDRESS , CORE_PEER_LOCALMSPID 要和 peer1.org1.alcor.com 節點對應

e. 運行 peer.sh 來查看節點 peer1.org1.alcor.com 的狀態
./peer.sh node status

4. 創立 org2的Admin 用戶信息(對應到 peer0.org2.alcor.com 的節點)

a. 創建保存 org2 的 Admin 用戶信息的目錄
cd /root/fabric/fabric-deploy/users
mkdir [email protected]
cd  [email protected]
b. 複製[email protected]用戶的證書
cp -rf  /root/fabric/fabric-deploy/certs/peerOrganizations/org2.alcor.com/users/[email protected]/* /root/fabric/fabric-deploy/users/[email protected]/
    
c. Copy the core.yaml of peer0.org2.alcor.com (it maps to the peer0.org2.alcor.com node)
cp /root/fabric/fabric-deploy/peer0.org2.alcor.com/core.yaml  /root/fabric/fabric-deploy/users/[email protected]/
d. 創建測試腳本(peer.sh)
#!/bin/bash
cd "/root/fabric/fabric-deploy/users/[email protected]"
PATH=`pwd`/../../bin:$PATH
export FABRIC_CFG_PATH=`pwd`
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=./tls/client.crt
export CORE_PEER_TLS_KEY_FILE=./tls/client.key
export CORE_PEER_MSPCONFIGPATH=./msp
export CORE_PEER_ADDRESS=peer0.org2.alcor.com:7051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_ROOTCERT_FILE=./tls/ca.crt
export CORE_PEER_ID=peer0.org2.alcor.com
export CORE_LOGGING_LEVEL=DEBUG
peer $*

Note:
The working directory (pwd), CORE_PEER_ADDRESS and CORE_PEER_LOCALMSPID must match the peer0.org2.alcor.com node.

e. 運行 peer.sh 來查看節點 peer0.org2.alcor.com 的狀態
./peer.sh node status

5. 創立 org2的User1用戶信息(對應到 peer1.org2.alcor.com 的節點)

其實是 Admin 的用戶證書,如果用的是User1的證書,在 peer node status 的時候,會出現錯誤: Error trying to connect to local peer: rpc error: code = Unknown desc = access denied
a. 創建保存 org2 的 User1 用戶信息的目錄
cd /root/fabric/fabric-deploy/users
mkdir [email protected]
cd  [email protected]
b. 複製[email protected]用戶的證書
cp -rf  /root/fabric/fabric-deploy/certs/peerOrganizations/org2.alcor.com/users/[email protected]/* /root/fabric/fabric-deploy/users/[email protected]/
    
c. Copy the core.yaml of peer1.org2.alcor.com (it maps to the peer1.org2.alcor.com node)
cp /root/fabric/fabric-deploy/peer1.org2.alcor.com/core.yaml  /root/fabric/fabric-deploy/users/[email protected]/
d. 創建測試腳本(peer.sh)
#!/bin/bash
cd "/root/fabric/fabric-deploy/users/[email protected]"
PATH=`pwd`/../../bin:$PATH
export FABRIC_CFG_PATH=`pwd`
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=./tls/client.crt
export CORE_PEER_TLS_KEY_FILE=./tls/client.key
export CORE_PEER_MSPCONFIGPATH=./msp
export CORE_PEER_ADDRESS=peer1.org2.alcor.com:7051
export CORE_PEER_LOCALMSPID=Org2MSP
export CORE_PEER_TLS_ROOTCERT_FILE=./tls/ca.crt
export CORE_PEER_ID=peer1.org2.alcor.com
export CORE_LOGGING_LEVEL=DEBUG
peer $*

Note:
The working directory (pwd), CORE_PEER_ADDRESS and CORE_PEER_LOCALMSPID must match the peer1.org2.alcor.com node.

e. Run peer.sh to check the status of node peer1.org2.alcor.com
./peer.sh node status

十三. channel 的準備和創建

踩坑:channel ID 不能含有大寫字母(myTestChannel , myChannel 這種命名是不行的,在創建 channel 的時候,會報錯) initializing configtx manager failed: bad channel ID: channel ID 'myTestChannel' contains illegal characters

1. 準備channel 文件。用configtxgen生成channel文件。

configtxgen reads the section of the Profiles block in configtx.yaml (taken from the current directory, or from the directory given by FABRIC_CFG_PATH) that matches the -profile argument, and writes the channel creation transaction to the file named by -outputCreateChannelTx.

 cd /root/fabric/fabric-deploy/
./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx mychannel.tx -channelID mychannel
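The generated transaction can also be decoded for a quick check:

cd /root/fabric/fabric-deploy
# dump the channel creation transaction as JSON; the two organizations should appear in the application group
./bin/configtxgen -inspectChannelCreateTx mychannel.tx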
    

十四. 創建 channel

1. 在[email protected]目錄中執行下面的命令:

cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh channel create -o orderer.alcor.com:7050 -c mychannel -f /root/fabric/fabric-deploy/mychannel.tx  -t 60s --tls true --cafile  /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem

執行完成後,會生成一個mychannel.block文件.

這個文件非常重要!所有加入到這個 channel 裏面的 peer,都需要用到這個文件

2.將mychannel.block複製一份到[email protected][email protected][email protected]中備用

\cp -f /root/fabric/fabric-deploy/users/[email protected]/mychannel.block  /root/fabric/fabric-deploy/users/[email protected]/    
\cp -f /root/fabric/fabric-deploy/users/[email protected]/mychannel.block  /root/fabric/fabric-deploy/users/[email protected]/
\cp -f /root/fabric/fabric-deploy/users/[email protected]/mychannel.block  /root/fabric/fabric-deploy/users/[email protected]/    

十五.把 4個 peer加入到 channel 中

1. Join peer0.org1.alcor.com to the channel

cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh channel join -b mychannel.block
#控制檯返回成功後,可以用下面命令來查看
./peer.sh channel list

2. Join peer1.org1.alcor.com to the channel

cd  /root/fabric/fabric-deploy/users/[email protected] #這個其實還是org1.alcor.com 的 Admin 用戶
./peer.sh channel join -b mychannel.block
#控制檯返回成功後,可以用下面命令來查看
./peer.sh channel list

3. Join peer0.org2.alcor.com to the channel

cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh channel join -b mychannel.block
#控制檯返回成功後,可以用下面命令來查看
./peer.sh channel list

4. Join peer1.org2.alcor.com to the channel

cd  /root/fabric/fabric-deploy/users/[email protected] #這個其實還是org2.alcor.com 的 Admin 用戶
./peer.sh channel join -b mychannel.block
#控制檯返回成功後,可以用下面命令來查看
./peer.sh channel list

十六. Anchor peers

Each organization needs to designate one anchor peer; the anchor peer is the peer whose address is published in the channel configuration so that peers from other organizations can discover it and gossip with it.
The anchor peers are already declared in the configtx.yaml above, so no separate peer channel update is needed here (a sketch of that update flow follows, for reference).
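If the anchor peers ever do need to be changed after the channel exists, the usual flow is to generate an update transaction per organization with configtxgen and submit it as that organization's admin; a minimal sketch (the .tx file name is a placeholder):

#on cli.alcor.com, generate the anchor peer update for Org1
cd /root/fabric/fabric-deploy
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
#submit it as the Org1 admin
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh channel update -o orderer.alcor.com:7050 -c mychannel -f /root/fabric/fabric-deploy/Org1MSPanchors.tx --tls true --cafile /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem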

十七. go 版本的 chaincode 的安裝和部署(在 cli 主機上操作)

1. 安裝 go 環境

go 的下載官網

https://golang.org/dl/

以 root 用戶安裝

wget https://dl.google.com/go/go1.10.3.linux-amd64.tar.gz

tar -xvf  go1.10.3.linux-amd64.tar.gz

mv ./go  /usr/local

#修改 /etc/profile,增加 如下2行內容
export GOROOT=/usr/local/go
export PATH=$PATH:$GOROOT/bin
    
#使得環境變量生效
source /etc/profile
    
#確定 go 的安裝成功和版本信息
go version 
    
#查看 go 的環境
go env

2. 拉取 demo 的 chaincode

This requires gcc and git to be installed first (go get uses git to clone the repository).

cd ~
go get github.com/roamerxv/chaincode/fabric/examples/go/demo

完成後,生成一個~/go 目錄。下面有 src 和bin 目錄。/root/go/src/github.com 目錄下有個fabric 和 roamerxv 這2個目錄。

3. chaincode 的安裝

cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.1 -p github.com/roamerxv/chaincode/fabric/examples/go/demo

Because peer.sh sets CORE_PEER_ADDRESS=peer0.org1.alcor.com:7051, this install actually packages the chaincode and stores it on the peer0.org1.alcor.com machine under /var/hyperledger/production/chaincodes/, as a file named demo.0.0.1.

The /var/hyperledger/production path comes from the peer.fileSystemPath property in core.yaml.
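A quick way to confirm this on the peer host itself:

# on peer0.org1.alcor.com: the installed package sits under peer.fileSystemPath
ls /var/hyperledger/production/chaincodes/
# demo.0.0.1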


#同時,可以在 cli 上,通過以下命令查看 peer 上的 chaincode 信息
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode list   --installed


注意: 這個安裝需要在涉及到的所有 peer 上進行一遍,包括另外的組織 org2. 而且一定要用 admin用戶來安裝。
    
#進入另外3個目錄,再次安裝 chaincode 到對應的 peer 上
#這個是 安裝到 peer1.org1.alcor.com
cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.1 -p github.com/roamerxv/chaincode/fabric/examples/go/demo

    
#這個是 安裝到 peer0.org2.alcor.com
cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.1 -p github.com/roamerxv/chaincode/fabric/examples/go/demo


#這個是 安裝到 peer1.org2.alcor.com
cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.1 -p github.com/roamerxv/chaincode/fabric/examples/go/demo

    

4. chaincode 的初始化

After installation the chaincode needs to be instantiated exactly once on the channel. Instantiation is done by the user who installed (signed) the chaincode, and the Docker service must already be running on every peer.

cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode instantiate -o orderer.alcor.com:7050 --tls true --cafile  /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem -C mychannel -n demo -v 0.0.1 -c '{"Args":["init"]}' -P "OR('Org1MSP.member','Org2MSP.member')"

第一次進行合約初始化的時候的會比較慢,因爲peer 上需要創建、啓動容器。
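On the endorsing peer's host you can watch the chaincode container come up; the container name follows the dev-<peer id>-<chaincode name>-<version> convention:

# run on peer0.org1.alcor.com after the instantiate returns
docker ps --filter "name=dev-peer0.org1.alcor.com-demo-0.0.1"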

5. chaincode的調用

cd  /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode invoke -o orderer.alcor.com:7050  --tls true --cafile /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem  -C mychannel -n demo  -c '{"Args":["write","key1","key1value中文isabc"]}'
chaincode 的調用,可以調用任意一臺安裝了這個 chaincode 的peer。這個時候被調用的 peer 上會啓動相應的 chaincode 的 docker。

進行查詢操作時,不需要指定orderer,例如:

cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode query -C mychannel -n demo -c '{"Args":["query","key1"]}'

6. chaincode 的更新

新的合約也需要在每個peer上單獨安裝。

#安裝到peer0.org1.alcor.com
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.2 -p github.com/roamerxv/chaincode/fabric/examples/go/demo

    
#安裝到peer1.org1.alcor.com
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.2 -p github.com/roamerxv/chaincode/fabric/examples/go/demo
    
#安裝到peer0.org2.alcor.com
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.2 -p github.com/roamerxv/chaincode/fabric/examples/go/demo
    
 #安裝到peer1.org2.alcor.com
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install  -n demo -v 0.0.2 -p github.com/roamerxv/chaincode/fabric/examples/go/demo

The upgraded chaincode does not need to be instantiated again; run an upgrade instead.

cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode upgrade -o orderer.alcor.com:7050 --tls true --cafile  /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem  -C mychannel -n demo -v 0.0.2 -c '{"Args":["init"]}' -P "OR('Org1MSP.member','Org2MSP.member')"

更新後,直接調用新合約。 調用的時候,不需要指定版本號,直接會調用最新版本的 CC

./peer.sh chaincode invoke -o orderer.alcor.com:7050  --tls true --cafile /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem  -C mychannel -n demo  -c '{"Args":["write","key1","徐澤宇&徐芷攸"]}'
./peer.sh chaincode query -C mychannel -n demo -c '{"Args":["query","key1"]}'

7. 查詢key的歷史記錄

./peer.sh chaincode query -C mychannel -n demo -c '{"Args":["history","key1"]}'

十八. java 版本的 chaincode 的安裝和部署

1. 在 cli 主機上拉取 java chaincode 的代碼(需要安裝java 和 gradle)

mkdir -p /root/fabric-chaincode-java && cd /root/fabric-chaincode-java
git clone https://github.com/hyperledger/fabric-samples.git
cd /root/fabric-chaincode-java/fabric-samples/chaincode/chaincode_example02/java
gradle build

2.安裝 chaincode

On the cli host, install the Java chaincode from the [email protected] directory

cd /root/fabric/fabric-deploy/users/[email protected]

./peer.sh chaincode install -l java  -n mycc -v 1.0.0 -p /root/fabric-chaincode-java/fabric-samples/chaincode/chaincode_example02/java

#Install it on the other peers as well, running the same install from each user directory
cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install -l java -n mycc -v 1.0.0 -p /root/fabric-chaincode-java/fabric-samples/chaincode/chaincode_example02/java

cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install -l java -n mycc -v 1.0.0 -p /root/fabric-chaincode-java/fabric-samples/chaincode/chaincode_example02/java

cd /root/fabric/fabric-deploy/users/[email protected]
./peer.sh chaincode install -l java -n mycc -v 1.0.0 -p /root/fabric-chaincode-java/fabric-samples/chaincode/chaincode_example02/java

3. 實例化chaincode

A chaincode Docker container will be created on the peer0.org1.alcor.com host.

cd /root/fabric/fabric-deploy/users/[email protected]

./peer.sh chaincode instantiate -o orderer.alcor.com:7050 --tls true --cafile /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem -C mychannel -n mycc -v 1.0.0 -c  '{"Args":["init","roamer","100","dly","200"]}' -P "OR('Org1MSP.member','Org2MSP.member')"

4.調用 chaincode(做一筆轉賬)

cd /root/fabric/fabric-deploy/users/[email protected]

./peer.sh chaincode invoke -o orderer.alcor.com:7050  --tls true --cafile /root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/tlsca/tlsca.alcor.com-cert.pem  -C mychannel -n mycc  -c '{"Args":["invoke","roamer","dly","20"]}' 

5. Query the chaincode (look up one account)

cd /root/fabric/fabric-deploy/users/[email protected]

./peer.sh chaincode query  -C mychannel  -n mycc -c '{"Args":["query","roamer"]}'

6. Install and invoke it on the other peers (omitted)

踩坑

  • 下載 image : hyperledger/fabric-javaenv:amd64-1.3.0 不存在。
    解決辦法: 修改 peer 上的 core.yaml 文件中的chaincode-java-runtime 部分,直接指定
java:
    #runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
    runtime: $(DOCKER_NS)/fabric-javaenv:1.3.0

Kill the old peer process and start the peer again, then re-run the chaincode instantiate from the cli host; the peer will pull the image automatically. If the peer is not restarted, the core.yaml change does not take effect and the same error keeps appearing.
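Optionally, the image can also be pulled ahead of time on each peer host so that the first instantiate does not block on the download (assuming the 1.3.0 tag above is reachable from your hosts):

# run on every peer host that will run the java chaincode
docker pull hyperledger/fabric-javaenv:1.3.0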

B. Fabric explorer 的安裝和使用

hyperledger explorer(0.3.7) 安裝

C. Fabric CA的安裝和使用

參考文檔
https://www.lijiaocn.com/項目/2018/05/04/fabric-ca-example.html
https://www.lijiaocn.com/項目/2018/04/27/hyperledger-fabric-ca-usage.html

一. 在 ca.alcor.com 主機上安裝 Fabric-ca 1.3

1. 安裝 go 環境

cd /root
wget https://dl.google.com/go/go1.10.4.linux-amd64.tar.gz
tar -xvf  go1.10.4.linux-amd64.tar.gz
mv ./go  /usr/local
    
#修改 /etc/profile,增加 如下2行內容
export GOROOT=/usr/local/go
export PATH=$PATH:$GOROOT/bin
export GOPATH=/root
#使得環境變量生效
source /etc/profile
#確定 go 的安裝成功和版本信息
go version 
#查看 go 的環境
go env

2. Download and build fabric-ca

a. 通過源碼編譯的方式
yum install libtool   libtool-ltdl-devel
    
cd /root
mkdir -p /root/src/github.com/hyperledger/
cd /root/src/github.com/hyperledger/
git clone https://github.com/hyperledger/fabric-ca.git
cd /root/src/github.com/hyperledger/fabric-ca
git checkout release-1.3

make fabric-ca-server
make fabric-ca-client
ls ./bin/
# 發現有以下2個執行文件
fabric-ca-client  fabric-ca-server
    
b. Direct download (the binary release only contains the fabric-ca-client)
cd /root
wget https://nexus.hyperledger.org/service/local/repositories/releases/content/org/hyperledger/fabric-ca/hyperledger-fabric-ca-1.3.0-stable/linux-amd64.1.3.0-stable-4f6586e/hyperledger-fabric-ca-1.3.0-stable-linux-amd64.1.3.0-stable-4f6586e.tar.gz

3. Start fabric-ca-server

a. To allow affiliations and identities to be removed later, start the server with the flags below.

It listens on port 7054 by default.

mkdir -p /root/fabric-ca-files/server
   
fabric-ca-server start -b admin:password --cfg.affiliations.allowremove  --cfg.identities.allowremove -H /root/fabric-ca-files/server &
b. 配置成隨系統啓動 fabric-ca-server
vi /etc/init.d/autoRunFabric-ca-server.sh

在文件中加入下面內容

#!/bin/sh
#chkconfig: 2345 80 90
#description: start fabric-ca-server at boot (runlevels 2/3/4/5, start priority 80, stop priority 90)
/usr/local/bin/fabric-ca-server start -b admin:password --cfg.affiliations.allowremove  --cfg.identities.allowremove  -H /root/fabric-ca-files/server &

配置成隨系統啓動

chmod +x /etc/init.d/autoRunFabric-ca-server.sh
chkconfig --add autoRunFabric-ca-server.sh
chkconfig autoRunFabric-ca-server.sh on

Understand the files under /root/fabric-ca-files/server:

  • msp: contains the keystore holding the CA server's private key
  • ca-cert.pem: the CA server's certificate
  • fabric-ca-server.db: the embedded SQLite database the CA uses by default
  • fabric-ca-server-config.yaml: the CA server's configuration file

4. 生成fabric ca 的管理員 (admin)證書和祕鑰的流程

a.生成fabric-ca admin的憑證,用-H參數指定client目錄:
mkdir -p /root/fabric-ca-files/admin
fabric-ca-client enroll -u http://admin:password@localhost:7054 -H /root/fabric-ca-files/admin

也可以用環境變量FABRIC_CA_CLIENT_HOME指定了client的工作目錄,生成的用戶憑證將存放在這個目錄中。
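For example, the same enrolment using the environment variable instead of -H (a minimal sketch):

export FABRIC_CA_CLIENT_HOME=/root/fabric-ca-files/admin
fabric-ca-client enroll -u http://admin:password@localhost:7054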

b. 查看默認的聯盟

上面的啓動方式默認會創建兩個組織:
可以通過下面命令進行查看

fabric-ca-client  -H /root/fabric-ca-files/admin  affiliation list
    
affiliation: .
   affiliation: org2
      affiliation: org2.department1
   affiliation: org1
      affiliation: org1.department1
      affiliation: org1.department2
c. 刪除聯盟
fabric-ca-client -H  /root/fabric-ca-files/admin  affiliation remove --force  org1
fabric-ca-client -H  /root/fabric-ca-files/admin  affiliation remove --force  org2
d. 創建自己定義的聯盟
fabric-ca-client  -H  /root/fabric-ca-files/admin  affiliation add com 
fabric-ca-client  -H  /root/fabric-ca-files/admin  affiliation add com.alcor
fabric-ca-client  -H  /root/fabric-ca-files/admin  affiliation add com.alcor.org1
fabric-ca-client  -H  /root/fabric-ca-files/admin  affiliation add com.alcor.org2
e. 查看剛剛建立的聯盟
fabric-ca-client  -H /root/fabric-ca-files/admin  affiliation list
f. 爲各個組織生成憑證(MSP),就是從Fabric-CA中,讀取出用來簽署用戶的根證書等
1)爲 alcor.com 獲取證書
fabric-ca-client getcacert -M /root/fabric-ca-files/Organizations/alcor.com/msp
2)爲 org1.alcor.com 獲取證書
fabric-ca-client getcacert -M /root/fabric-ca-files/Organizations/org1.alcor.com/msp
3)爲 org2.alcor.com 獲取證書
fabric-ca-client getcacert -M /root/fabric-ca-files/Organizations/org2.alcor.com/msp

這裏是用getcacert爲每個組織準備需要的ca文件,在生成創始塊的時候會用到。

在1.3.0版本的fabric-ca中,只會生成用戶在操作區塊鏈的時候用到的證書和密鑰,不會生成用來加密grpc通信的證書。

4)這裏複用之前在 cli 主機上用 cryptogen 生成的tls證書,需要將驗證tls證書的ca添加到msp目錄中,如下:
scp -r [email protected]:/root/fabric/fabric-deploy/certs/ordererOrganizations/alcor.com/msp/tlscacerts /root/fabric-ca-files/Organizations/alcor.com/msp/
scp -r [email protected]:/root/fabric/fabric-deploy/certs/peerOrganizations/org1.alcor.com/msp/tlscacerts/  /root/fabric-ca-files/Organizations/org1.alcor.com/msp/
scp -r [email protected]:/root/fabric/fabric-deploy/certs/peerOrganizations/org2.alcor.com/msp/tlscacerts/  /root/fabric-ca-files/Organizations/org2.alcor.com/msp/

如果在你的環境中,各個組件域名的證書,是由第三方CA簽署的,就將第三方CA的根證書添加到msp/tlscacerts目錄中。

組織的msp目錄中,包含都是CA根證書,分別是TLS加密的根證書,和用於身份驗證的根證書。另外還需要admin用戶的證書,後面的操作中會添加。

g. 證書查看命令
openssl x509 -in /root/fabric-ca-files/admin/msp/cacerts/localhost-7054.pem  -text
h. 註冊聯盟中的各個管理員Admin
1) 註冊alcor.com的管理員 [email protected]

·Register from the command line (the command is long, so the config-file method below is used instead)

fabric-ca-client register -H /root/fabric-ca-files/admin \
    --id.name [email protected] \
    --id.type client \
    --id.affiliation "com.alcor" \
    --id.attrs '"hf.Registrar.Roles=client,orderer,peer,user","hf.Registrar.DelegateRoles=client,orderer,peer,user","hf.Registrar.Attributes=*","hf.GenCRL=true","hf.Revoker=true","hf.AffiliationMgr=true","hf.IntermediateCA=true","role=admin:ecert"'
    

使用配置文件的方式進行註冊(主要的使用方法)

  1. 修改 /root/fabric-ca-files/admin/fabric-ca-client-config.yaml 中的 id 部分

    vim /root/fabric-ca-files/admin/fabric-ca-client-config.yaml
    

    修改內容爲

    id:
      name: [email protected]
      type: client
      affiliation: com.alcor
      maxenrollments: 0
      attributes:
        - name: hf.Registrar.Roles
          value: client,orderer,peer,user
        - name: hf.Registrar.DelegateRoles
          value: client,orderer,peer,user
        - name: hf.Registrar.Attributes
          value: "*"
        - name: hf.GenCRL
          value: true
        - name: hf.Revoker
          value: true
        - name: hf.AffiliationMgr
          value: true
        - name: hf.IntermediateCA
          value: true
        - name: role
          value: admin
          ecert: true
    

    Note the last attribute, role: it is a custom attribute we define ourselves. For custom attributes, the ecert flag has to be set explicitly to true or false in the config file; on the command line, appending the suffix :ecert means true.
    The remaining settings mean that the user name is [email protected], the type is client, and it can manage identities under com.alcor.*, as follows:

    --id.name  [email protected]                           //用戶名
    --id.type client                                       //類型爲client
    --id.affiliation "com.alcor"                         //權利訪問
    hf.Registrar.Roles=client,orderer,peer,user            //能夠管理的用戶類型
    hf.Registrar.DelegateRoles=client,orderer,peer,user    //可以授權給子用戶管理的用戶類型
    hf.Registrar.Attributes=*                              //可以爲子用戶設置所有屬性
    hf.GenCRL=true                                         //可以生成撤銷證書列表
    hf.Revoker=true                                        //可以撤銷用戶
    hf.AffiliationMgr=true                                 //能夠管理聯盟
    hf.IntermediateCA=true                                 //可以作爲中間CA
    role=admin:ecert                                       //自定義屬性
    

    All attributes starting with hf. are built-in fabric-ca attributes and are very important; see the official documentation for details: https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html

  2. 修改完成後,用如下命令註冊用戶

    fabric-ca-client register -H /root/fabric-ca-files/admin --id.secret=password
    

    如果不用 --id.secret指定密碼,會自動生成密碼

  3. 註冊完成之後,還需要對這個用戶生成憑證。

    a. Use the following command to confirm that the identity just registered was created successfully:

    fabric-ca-client identity  list  -H /root/fabric-ca-files/admin    
    

    可以查看當前的用戶列表,以及每個用戶的詳細信息。

    b. 生成憑證

    fabric-ca-client enroll -u http://[email protected]:password@localhost:7054  -H /root/fabric-ca-files/Organizations/alcor.com/admin
    

    The -H flag sets the directory where [email protected]'s credentials are stored; enrolment creates the msp directories and files under it.

    c. 這時候可以用[email protected]的身份查看聯盟信息:

    fabric-ca-client affiliation list -H /root/fabric-ca-files/Organizations/alcor.com/admin
    
    #顯示結果
    affiliation: com
       affiliation: com.alcor
          affiliation: com.alcor.org1
          affiliation: com.alcor.org2
    
  4. 如果是管理員權限,還需要複製到/msp/admincerts/目錄下。
    最後將[email protected]的證書複製到alcor.com/msp/admincerts/中, 只有這樣,才能具備管理員權限。

    mkdir /root/fabric-ca-files/Organizations/alcor.com/msp/admincerts/
    cp /root/fabric-ca-files/Organizations/alcor.com/admin/msp/signcerts/cert.pem  /root/fabric-ca-files/Organizations/alcor.com/msp/admincerts/
    
2) 註冊org1.alcor.com的管理員 [email protected]
  1. 修改 /root/fabric-ca-files/admin/fabric-ca-client-config.yaml 中的 id 部分。
    可以使用其他的fabric-ca-client-config.yaml文件,沒有必須使用這個ca 的 admin 下面的fabric-ca-client-config.yaml文件的必然要求

    vim /root/fabric-ca-files/admin/fabric-ca-client-config.yaml
    

    修改內容爲

    id:
      name: [email protected]
      type: client
      affiliation: com.alcor.org1
      maxenrollments: 0
      attributes:
        - name: hf.Registrar.Roles
          value: client,orderer,peer,user
        - name: hf.Registrar.DelegateRoles
          value: client,orderer,peer,user
        - name: hf.Registrar.Attributes
          value: "*"
        - name: hf.GenCRL
          value: true
        - name: hf.Revoker
          value: true
        - name: hf.AffiliationMgr
          value: true
        - name: hf.IntermediateCA
          value: true
        - name: role
          value: admin
          ecert: true
    
  2. After the changes, register the [email protected] user

    fabric-ca-client register -H /root/fabric-ca-files/admin --id.secret=password   
    
  3. 生成憑證

    fabric-ca-client enroll -u http://[email protected]:password@localhost:7054  -H /root/fabric-ca-files/Organizations/org1.alcor.com/admin
    
  4. 用這個憑證查看聯盟

    fabric-ca-client affiliation list -H /root/fabric-ca-files/Organizations/org1.alcor.com/admin
    

    注意:
    這個時候,只能看見 org1.alcor.com 的聯盟信息。和 [email protected] 的權限是不同的

  5. 把憑證複製到 org1.alcor.com的msp/admincerts 目錄下

    mkdir -p /root/fabric-ca-files/Organizations/org1.alcor.com/msp/admincerts
    cp /root/fabric-ca-files/Organizations/org1.alcor.com/admin/msp/signcerts/cert.pem  /root/fabric-ca-files/Organizations/org1.alcor.com/msp/admincerts/
    
3) 註冊org2.alcor.com的管理員 [email protected]
  1. 修改 /root/fabric-ca-files/admin/fabric-ca-client-config.yaml 中的 id 部分。
    可以使用其他的fabric-ca-client-config.yaml文件,沒有必須使用這個ca 的 admin 下面的fabric-ca-client-config.yaml文件的必然要求

    vim /root/fabric-ca-files/admin/fabric-ca-client-config.yaml
    

    修改內容爲

    id:
      name: [email protected]
      type: client
      affiliation: com.alcor.org2
      maxenrollments: 0
      attributes:
        - name: hf.Registrar.Roles
          value: client,orderer,peer,user
        - name: hf.Registrar.DelegateRoles
          value: client,orderer,peer,user
        - name: hf.Registrar.Attributes
          value: "*"
        - name: hf.GenCRL
          value: true
        - name: hf.Revoker
          value: true
        - name: hf.AffiliationMgr
          value: true
        - name: hf.IntermediateCA
          value: true
        - name: role
          value: admin
          ecert: true
    
  2. After the changes, register the [email protected] user

    fabric-ca-client register -H /root/fabric-ca-files/admin --id.secret=password   
    
  3. 生成憑證

    fabric-ca-client enroll -u http://[email protected]:password@localhost:7054  -H /root/fabric-ca-files/Organizations/org2.alcor.com/admin
    
  4. 用這個憑證查看聯盟

    fabric-ca-client affiliation list -H /root/fabric-ca-files/Organizations/org2.alcor.com/admin
    

    注意:
    這個時候,只能看見 org2.alcor.com 的聯盟信息。和 [email protected] , [email protected] 的權限是不同的

  5. 把憑證複製到 org2.alcor.com的msp/admincerts 目錄下

    mkdir -p /root/fabric-ca-files/Organizations/org2.alcor.com/msp/admincerts
    cp /root/fabric-ca-files/Organizations/org2.alcor.com/admin/msp/signcerts/cert.pem  /root/fabric-ca-files/Organizations/org2.alcor.com/msp/admincerts/
    
i. 使用各個組織中的 Admin 來創建其他賬號
1). 用 [email protected] 來創建 orderer.alcor.com 的賬號
  1. 修改 /root/fabric-ca-files/Organizations/alcor.com/admin/fabric-ca-client-config.yaml文件的配置

    vi /root/fabric-ca-files/Organizations/alcor.com/admin/fabric-ca-client-config.yaml
    

    配置 id 的部分 用於[email protected]

    id:
      name: orderer.alcor.com
      type: orderer
      affiliation: com.alcor
      maxenrollments: 0
      attributes:
        - name: role
          value: orderer
          ecert: true
    
  2. 註冊 [email protected] 的用戶

    fabric-ca-client register -H /root/fabric-ca-files/Organizations/alcor.com/admin --id.secret=password
    
  3. 生成證書文件

    fabric-ca-client enroll -u http://orderer.alcor.com:password@localhost:7054 -H /root/fabric-ca-files/Organizations/alcor.com/orderer
    
  4. [email protected]的證書複製到orderer 的admincerts下

    # 建立 orderer 下的 admincerts 目錄
    mkdir /root/fabric-ca-files/Organizations/alcor.com/orderer/msp/admincerts
    # 複製 [email protected] 的證書到  orderer 的 msp/admincerts 目錄下
    cp /root/fabric-ca-files/Organizations/alcor.com/admin/msp/signcerts/cert.pem /root/fabric-ca-files/Organizations/alcor.com/orderer/msp/admincerts/
    
    

注意:
爲什麼要這麼做?!!!

2). 用 [email protected] 來創建 peer0.org1.alcor.com 的賬號
  1. 修改 /root/fabric-ca-files/Organizations/org1.alcor.com/admin/fabric-ca-client-config.yaml文件的配置

    vi /root/fabric-ca-files/Organizations/org1.alcor.com/admin/fabric-ca-client-config.yaml
    

    配置 id 的部分 用於[email protected]

    id:
      name: peer0.org1.alcor.com
      type: peer
      affiliation: com.alcor.org1
      maxenrollments: 0
      attributes:
        - name: role
          value: peer
          ecert: true
    
  2. 註冊 [email protected] 的用戶

    fabric-ca-client register -H /root/fabric-ca-files/Organizations/org1.alcor.com/admin --id.secret=password
    
  3. 生成證書文件

    fabric-ca-client enroll -u http://peer0.org1.alcor.com:password@localhost:7054 -H /root/fabric-ca-files/Organizations/org1.alcor.com/peer0
    
  4. [email protected]的證書複製到 org1\peer0 的admincerts下

    # 建立 peer0 下的 admincerts 目錄
    mkdir /root/fabric-ca-files/Organizations/org1.alcor.com/peer0/msp/admincerts
    # 複製 [email protected] 的證書到  peer0 的 msp/admincerts 目錄下
    cp /root/fabric-ca-files/Organizations/org1.alcor.com/admin/msp/signcerts/cert.pem /root/fabric-ca-files/Organizations/org1.alcor.com/peer0/msp/admincerts/
    
    
3). 用 [email protected] 來創建 peer1.org1.alcor.com 的賬號
  1. 修改 /root/fabric-ca-files/Organizations/org1.alcor.com/admin/fabric-ca-client-config.yaml文件的配置

    vi /root/fabric-ca-files/Organizations/org1.alcor.com/admin/fabric-ca-client-config.yaml
    

    配置 id 的部分 用於[email protected]

    id:
      name: peer1.org1.alcor.com
      type: peer
      affiliation: com.alcor.org1
      maxenrollments: 0
      attributes:
        - name: role
          value: peer
          ecert: true
    
  2. 註冊 [email protected] 的用戶

    fabric-ca-client register -H /root/fabric-ca-files/Organizations/org1.alcor.com/admin --id.secret=password
    
  3. 生成證書文件

    fabric-ca-client enroll -u http://peer1.org1.alcor.com:password@localhost:7054 -H /root/fabric-ca-files/Organizations/org1.alcor.com/peer1
    
  4. [email protected]的證書複製到 org1\peer1 的admincerts下

    # 建立 peer1 下的 admincerts 目錄
    mkdir /root/fabric-ca-files/Organizations/org1.alcor.com/peer1/msp/admincerts
    # 複製 [email protected] 的證書到  peer1 的 msp/admincerts 目錄下
    cp /root/fabric-ca-files/Organizations/org1.alcor.com/admin/msp/signcerts/cert.pem /root/fabric-ca-files/Organizations/org1.alcor.com/peer1/msp/admincerts/
    
    
4). 用 [email protected] 來創建 peer0.org2.alcor.com 的賬號
  1. 修改 /root/fabric-ca-files/Organizations/org2.alcor.com/admin/fabric-ca-client-config.yaml文件的配置

    vi /root/fabric-ca-files/Organizations/org2.alcor.com/admin/fabric-ca-client-config.yaml
    

    配置 id 的部分 用於[email protected]

    id:
      name: peer0.org2.alcor.com
      type: peer
      affiliation: com.alcor.org2
      maxenrollments: 0
      attributes:
        - name: role
          value: peer
          ecert: true
    
  2. 註冊 [email protected] 的用戶

    fabric-ca-client register -H /root/fabric-ca-files/Organizations/org2.alcor.com/admin --id.secret=password
    
  3. 生成證書文件

    fabric-ca-client enroll -u http://peer0.org2.alcor.com:password@localhost:7054 -H /root/fabric-ca-files/Organizations/org2.alcor.com/peer0
    
  4. [email protected]的證書複製到 org2\peer0 的admincerts下

    # 建立 peer0 下的 admincerts 目錄
    mkdir /root/fabric-ca-files/Organizations/org2.alcor.com/peer0/msp/admincerts
    # 複製 [email protected] 的證書到  peer0 的 msp/admincerts 目錄下
    cp /root/fabric-ca-files/Organizations/org2.alcor.com/admin/msp/signcerts/cert.pem /root/fabric-ca-files/Organizations/org2.alcor.com/peer0/msp/admincerts/
    
    
5). 用 [email protected] 來創建 peer1.org2.alcor.com 的賬號
  1. 修改 /root/fabric-ca-files/Organizations/org2.alcor.com/admin/fabric-ca-client-config.yaml文件的配置

    vi /root/fabric-ca-files/Organizations/org2.alcor.com/admin/fabric-ca-client-config.yaml
    

    配置 id 的部分 用於[email protected]

    id:
      name: peer1.org2.alcor.com
      type: peer
      affiliation: com.alcor.org2
      maxenrollments: 0
      attributes:
        - name: role
          value: peer
          ecert: true
    
  2. 註冊 [email protected] 的用戶

    fabric-ca-client register -H /root/fabric-ca-files/Organizations/org2.alcor.com/admin --id.secret=password
    
  3. 生成證書文件

    fabric-ca-client enroll -u http://peer1.org2.alcor.com:password@localhost:7054 -H /root/fabric-ca-files/Organizations/org2.alcor.com/peer1
    
  4. Copy [email protected]'s certificate into the admincerts directory of org2\peer1

    # 建立 peer1 下的 admincerts 目錄
    mkdir /root/fabric-ca-files/Organizations/org2.alcor.com/peer1/msp/admincerts
    # 複製 [email protected] 的證書到  peer1 的 msp/admincerts 目錄下
    cp /root/fabric-ca-files/Organizations/org2.alcor.com/admin/msp/signcerts/cert.pem /root/fabric-ca-files/Organizations/org2.alcor.com/peer1/msp/admincerts/
    
    

D. 利用Fabric CA頒發的證書,部署 Fabric系統

一. Copy the whole directory generated by fabric-ca to the cli host, into fabric-deploy/certs_by_ca

scp -r /root/fabric-ca-files/* cli.alcor.com:/root/fabric/fabric-deploy/certs_by_ca

二.進入 cli 主機,進行後續操作

三.配置genesisblock 和 orderer

1.生成crypto-config.yaml 文件

vim /root/fabric/fabric-deploy/crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: alcor.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.alcor.com
    Template:
      Count: 2
    Users:
      Count: 2
  - Name: Org2
    Domain: org2.alcor.com
    Template:
      Count: 2
    Users:
      Count: 2

2. Generate certificates with cryptogen (mainly to obtain the TLS keys and certificates)

cd /root/fabric/fabric-deploy
./bin/cryptogen generate --config=crypto-config.yaml --output ./certs_by_crypto

3.配置orderer .(詳細說明見A 章節),下面只整理命令.

cd /root/fabric/fabric-deploy
mkdir orderer.alcor.com
cp ./bin/orderer ./orderer.alcor.com
#複製 tls 目錄(crypto生成的,fabric-ca 沒法生成)
cp -rf ./certs_by_crypto/ordererOrganizations/alcor.com/orderers/orderer.alcor.com/tls ./orderer.alcor.com/
#複製 msp 目錄(fabric-ca 來生成的)
cp -rf  ./certs_by_ca/Organizations/alcor.com/orderer/msp orderer.alcor.com/
vi /root/fabric/fabric-deploy/orderer.alcor.com/orderer.yaml
General:
    LedgerType: file
    ListenAddress: 0.0.0.0
    ListenPort: 7050
    TLS:
        Enabled: true
        PrivateKey: ./tls/server.key
        Certificate: ./tls/server.crt
        RootCAs:
          - ./tls/ca.crt
#        ClientAuthEnabled: false
#        ClientRootCAs:
    LogLevel: debug
    LogFormat: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
#    GenesisMethod: provisional
    GenesisMethod: file
    GenesisProfile: SampleInsecureSolo
    GenesisFile: ./genesisblock
    LocalMSPDir: ./msp
    LocalMSPID: OrdererMSP
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
FileLedger:
    Location:  /opt/fabric/orderer/data
    Prefix: hyperledger-fabric-ordererledger
RAMLedger:
    HistorySize: 1000
Kafka:
    Retry:
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        Consumer:
            RetryBackoff: 2s
    Verbose: false
    TLS:
      Enabled: false
      PrivateKey:
        #File: path/to/PrivateKey
      Certificate:
        #File: path/to/Certificate
      RootCAs:
        #File: path/to/RootCAs
    Version:
vi  /root/fabric/fabric-deploy/orderer.alcor.com/startOrderer.sh
#!/bin/bash
cd /opt/fabric/orderer
./orderer 2>&1 |tee log
chmod +x  /root/fabric/fabric-deploy/orderer.alcor.com/startOrderer.sh

4.配置 peer0.org1.alcor.com .(詳細說明見A 章節),下面只整理命令.

mkdir -p  /root/fabric/fabric-deploy/peer0.org1.alcor.com
cd /root/fabric/fabric-deploy
cp bin/peer peer0.org1.alcor.com/
#Copy the tls directory (generated by cryptogen; fabric-ca cannot produce it)
cp -rf  ./certs_by_crypto/peerOrganizations/org1.alcor.com/peers/peer0.org1.alcor.com/tls ./peer0.org1.alcor.com/
#Copy the msp directory (generated by fabric-ca)
cp -rf ./certs_by_ca/Organizations/org1.alcor.com/peer0/msp ./peer0.org1.alcor.com/
vi /root/fabric/fabric-deploy/peer0.org1.alcor.com/core.yaml
logging:
    level:      info
    cauthdsl:   warning
    gossip:     warning
    grpc:       error
    ledger:     info
    msp:        warning
    policies:   warning
    peer:
        gossip: warning
    
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
    
peer:
    
    id: peer0.org1.alcor.com
    
    networkId: dev
    
    listenAddress: 0.0.0.0:7051
    
    address: 0.0.0.0:7051
    
    addressAutoDetect: false
    
    gomaxprocs: -1
    
    keepalive:
        minInterval: 60s
        client:
            interval: 60s
            timeout: 20s
        deliveryClient:
            interval: 60s
            timeout: 20s
    
    gossip:
        bootstrap: peer0.org1.alcor.com:7051
    
        useLeaderElection: true
        orgLeader: false
    
        endpoint:
        maxBlockCountToStore: 100
        maxPropagationBurstLatency: 10ms
        maxPropagationBurstSize: 10
        propagateIterations: 1
        propagatePeerNum: 3
        pullInterval: 4s
        pullPeerNum: 3
        requestStateInfoInterval: 4s
        publishStateInfoInterval: 4s
        stateInfoRetentionInterval:
        publishCertPeriod: 10s
        skipBlockVerification: false
        dialTimeout: 3s
        connTimeout: 2s
        recvBuffSize: 20
        sendBuffSize: 200
        digestWaitTime: 1s
        requestWaitTime: 1500ms
        responseWaitTime: 2s
        aliveTimeInterval: 5s
        aliveExpirationTimeout: 25s
        reconnectInterval: 25s
        externalEndpoint:
        election:
            startupGracePeriod: 15s
            membershipSampleInterval: 1s
            leaderAliveThreshold: 10s
            leaderElectionDuration: 5s
        pvtData:
            pullRetryThreshold: 60s
            transientstoreMaxBlockRetention: 1000
            pushAckTimeout: 3s
            btlPullMargin: 10
            reconcileBatchSize: 10
            reconcileSleepInterval: 5m
    
    tls:
        enabled:  true
        clientAuthRequired: false
        cert:
            file: tls/server.crt
        key:
            file: tls/server.key
        rootcert:
            file: tls/ca.crt
        clientRootCAs:
            files:
              - tls/ca.crt
        clientKey:
            file:
        clientCert:
            file:
    
    authentication:
        timewindow: 15m
    
    fileSystemPath: /var/hyperledger/production
    
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
        PKCS11:
            Library:
            Label:
            Pin:
            Hash:
            Security:
            FileKeyStore:
                KeyStore:
    
    mspConfigPath: msp
    
    localMspId: Org1MSP
    
    client:
        connTimeout: 3s
    
    deliveryclient:
        reconnectTotalTimeThreshold: 3600s
    
        connTimeout: 3s
    
        reConnectBackoffThreshold: 3600s
    
    localMspType: bccsp
    
    profile:
        enabled:     false
        listenAddress: 0.0.0.0:6060
    adminService:
    handlers:
        authFilters:
          -
            name: DefaultAuth
          -
            name: ExpirationCheck    # This filter checks identity x509 certificate expiration
        decorators:
          -
            name: DefaultDecorator
        endorsers:
          escc:
            name: DefaultEndorsement
            library:
        validators:
          vscc:
            name: DefaultValidation
            library:
    validatorPoolSize:
    discovery:
        enabled: true
        authCacheEnabled: true
        authCacheMaxSize: 1000
        authCachePurgeRetentionRatio: 0.75
        orgMembersAllowedAccess: false
    
vm:
    endpoint: unix:///var/run/docker.sock
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key
        attachStdout: false
        hostConfig:
            NetworkMode: host
            Dns:
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648
    
    
chaincode:
    id:
        path:
        name:
    
    builder: $(DOCKER_NS)/fabric-ccenv:latest
    pull: false
    
    golang:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
        dynamicLink: false
    
    car:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
    
    java:
        runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
    
    node:
        runtime: $(BASE_DOCKER_NS)/fabric-baseimage:$(ARCH)-$(BASE_VERSION)
    startuptimeout: 300s
    
    executetimeout: 30s
    mode: net
    keepalive: 0
    system:
        +lifecycle: enable
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable
    systemPlugins:
    logging:
      level:  info
      shim:   warning
      format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
    
    
ledger:
    
  blockchain:
    
  state:
    stateDatabase: CouchDB     # default is goleveldb
    totalQueryLimit: 100000
    couchDBConfig:
       couchDBAddress: 127.0.0.1:5984
       username:    admin
       password:    password
       maxRetries: 3
       maxRetriesOnStartup: 10
       requestTimeout: 35s
       internalQueryLimit: 1000
       maxBatchUpdateSize: 1000
       warmIndexesAfterNBlocks: 1
       createGlobalChangesDB: false
    
  history:
    enableHistoryDatabase: true
    
    
metrics:
    enabled: false
    reporter: statsd
    interval: 1s
    statsdReporter:
          address: 0.0.0.0:8125
          flushInterval: 2s
          flushBytes: 1432
    promReporter:
          listenAddress: 0.0.0.0:8080
vi  /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh
#!/bin/bash
cd /opt/fabric/peer
./peer node start 2>&1 |tee log
chmod +x /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh
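
core.yaml above points stateDatabase at a CouchDB listening on 127.0.0.1:5984 with admin/password credentials, so each peer host needs a local CouchDB. If one is not already running (section A covers the CouchDB setup), a sketch along the following lines should work; the image and tag are assumptions:

# Hypothetical: start a local CouchDB matching core.yaml's couchDBConfig (admin/password on port 5984).
docker run -d --name couchdb \
  -e COUCHDB_USER=admin \
  -e COUCHDB_PASSWORD=password \
  -p 5984:5984 \
  couchdb:2.3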

5. Configure peer1.org1.alcor.com (see section A for the detailed explanation); only the commands are listed below.

cd /root/fabric/fabric-deploy
mkdir -p /root/fabric/fabric-deploy/peer1.org1.alcor.com
cp bin/peer     peer1.org1.alcor.com/
#Copy the tls directory (generated by cryptogen; fabric-ca cannot produce it)
cp -rf  ./certs_by_crypto/peerOrganizations/org1.alcor.com/peers/peer1.org1.alcor.com/tls ./peer1.org1.alcor.com/
#Copy the msp directory (generated by fabric-ca)
cp -rf ./certs_by_ca/Organizations/org1.alcor.com/peer1/msp ./peer1.org1.alcor.com/

cp peer0.org1.alcor.com/core.yaml  peer1.org1.alcor.com

sed -i "s/peer0.org1.alcor.com/peer1.org1.alcor.com/g" peer1.org1.alcor.com/core.yaml

cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  /root/fabric/fabric-deploy/peer1.org1.alcor.com/

6. Configure peer0.org2.alcor.com (see section A for the detailed explanation); only the commands are listed below.

cd /root/fabric/fabric-deploy
mkdir -p /root/fabric/fabric-deploy/peer0.org2.alcor.com
cp bin/peer     ./peer0.org2.alcor.com/
#Copy the tls directory (generated by cryptogen; fabric-ca cannot produce it)
cp -rf  ./certs_by_crypto/peerOrganizations/org2.alcor.com/peers/peer0.org2.alcor.com/tls ./peer0.org2.alcor.com/
#Copy the msp directory (generated by fabric-ca)
cp -rf ./certs_by_ca/Organizations/org2.alcor.com/peer0/msp ./peer0.org2.alcor.com/

cp peer0.org1.alcor.com/core.yaml  peer0.org2.alcor.com

sed -i "s/peer0.org1.alcor.com/peer0.org2.alcor.com/g" peer0.org2.alcor.com/core.yaml
sed -i "s/Org1MSP/Org2MSP/g" peer0.org2.alcor.com/core.yaml    

cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  peer0.org2.alcor.com/

7. Configure peer1.org2.alcor.com (see section A for the detailed explanation); only the commands are listed below.

cd /root/fabric/fabric-deploy
mkdir -p /root/fabric/fabric-deploy/peer1.org2.alcor.com
cp bin/peer     ./peer1.org2.alcor.com/
#Copy the tls directory (generated by cryptogen; fabric-ca cannot produce it)
cp -rf  ./certs_by_crypto/peerOrganizations/org2.alcor.com/peers/peer1.org2.alcor.com/tls ./peer1.org2.alcor.com/
#Copy the msp directory (generated by fabric-ca)
cp -rf ./certs_by_ca/Organizations/org2.alcor.com/peer1/msp ./peer1.org2.alcor.com/

cp peer0.org1.alcor.com/core.yaml  peer1.org2.alcor.com

sed -i "s/peer0.org1.alcor.com/peer1.org2.alcor.com/g" peer1.org2.alcor.com/core.yaml
sed -i "s/Org1MSP/Org2MSP/g" peer1.org2.alcor.com/core.yaml    

cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/startPeer.sh  peer1.org2.alcor.com/
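
Steps 5–7 repeat the same copy-and-sed pattern; as an alternative, a small loop (a sketch that assumes the directory layout established above) does the same work in one go:

#!/bin/bash
# Hypothetical helper: build peer1.org1, peer0.org2 and peer1.org2 from the peer0.org1 template.
cd /root/fabric/fabric-deploy
for p in peer1.org1 peer0.org2 peer1.org2; do
    peer=${p%%.*}                              # peer0 / peer1
    org=${p#*.}                                # org1 / org2
    dir=${p}.alcor.com
    mkdir -p "$dir"
    cp bin/peer "$dir/"
    cp -rf "./certs_by_crypto/peerOrganizations/${org}.alcor.com/peers/${dir}/tls" "$dir/"
    cp -rf "./certs_by_ca/Organizations/${org}.alcor.com/${peer}/msp" "$dir/"
    cp peer0.org1.alcor.com/core.yaml "$dir/"
    sed -i "s/peer0.org1.alcor.com/${dir}/g" "$dir/core.yaml"
    [ "$org" = "org2" ] && sed -i "s/Org1MSP/Org2MSP/g" "$dir/core.yaml"
    cp peer0.org1.alcor.com/startPeer.sh "$dir/"
done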

8. Copy the files to each node

Note: to avoid problems from stale files, clear the target directories on the orderer and peer nodes first.
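
For example (a sketch; it assumes the /opt/fabric directories were created on each host as in section A and that root SSH access is configured):

ssh [email protected] 'rm -rf /opt/fabric/orderer/*'
for ip in 221 222 223 224; do
    ssh root@192.168.188.$ip 'rm -rf /opt/fabric/peer/*'
done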

cd /root/fabric/fabric-deploy
scp -r orderer.alcor.com/* [email protected]:/opt/fabric/orderer/
scp -r peer0.org1.alcor.com/* [email protected]:/opt/fabric/peer/
scp -r peer1.org1.alcor.com/* [email protected]:/opt/fabric/peer/
scp -r peer0.org2.alcor.com/* [email protected]:/opt/fabric/peer/
scp -r peer1.org2.alcor.com/* [email protected]:/opt/fabric/peer/

9. Configure the configtx.yaml file

The main change from the original file is updating the MSPDir paths to point at the fabric-ca generated MSPs.

vim /root/fabric/fabric-deploy/configtx.yaml

#File content:
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: ./certs_by_ca/Organizations/alcor.com/orderer/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('OrdererMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('OrdererMSP.admin')"
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: ./certs_by_ca/Organizations/org1.alcor.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('Org1MSP.admin', 'Org1MSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('Org1MSP.admin')"
        AnchorPeers:
            - Host: peer0.org1.alcor.com
              Port: 7051
    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: ./certs_by_ca/Organizations/org2.alcor.com/msp
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('Org2MSP.admin', 'Org2MSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('Org2MSP.admin')"
        AnchorPeers:
            - Host: peer0.org2.alcor.com
              Port: 7051
    
Capabilities:
    Channel: &ChannelCapabilities
        V1_3: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_3: true
        V1_2: false
        V1_1: false
    
Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ApplicationCapabilities    
    
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer.alcor.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - kafka.alcor.com:9092       # multiple kafka broker addresses can be listed
            - kafka.alcor.com:9093
            - kafka.alcor.com:9094
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
        BlockValidation:
            Type: ImplicitMeta
            Rule: "ANY Writers"
    Capabilities:
        <<: *OrdererCapabilities
    
Channel: &ChannelDefaults
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "MAJORITY Admins"
    Capabilities:
        <<: *ChannelCapabilities
    
Profiles:
    TwoOrgsOrdererGenesis:
        <<: *ChannelDefaults
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2

10. Generate the genesis block and copy it to the orderer host

cd /root/fabric/fabric-deploy
./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./genesisblock -channelID genesis
scp ./genesisblock  [email protected]:/opt/fabric/orderer
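
The block can be sanity-checked before (or after) copying it; configtxgen prints it as JSON, and jq (install it if missing, see section E) makes the output readable:

./bin/configtxgen -inspectBlock ./genesisblock | jq . | less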

11. Start the orderer following the normal procedure

Log in to the orderer host

/etc/init.d/autoRunOrderer.sh
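
Assuming autoRunOrderer.sh ultimately invokes startOrderer.sh as set up in section A, the orderer writes its output to /opt/fabric/orderer/log, so startup can be followed with:

tail -f /opt/fabric/orderer/log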

12. Start the peers following the normal procedure

Log in to each peer host

/etc/init.d/autoRunPeer.sh
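
To confirm each peer came up, check that it is listening on 7051 (or follow /opt/fabric/peer/log, which startPeer.sh writes):

ss -ltn | grep 7051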

13. Create the users directory on the cli host for deployment operations

a) Build the user directory for [email protected]
mkdir -p /root/fabric/fabric-deploy/users
cd /root/fabric/fabric-deploy/users
mkdir [email protected]
cd  [email protected]

cp -rf  /root/fabric/fabric-deploy/certs_by_crypto/peerOrganizations/org1.alcor.com/users/[email protected]/tls  /root/fabric/fabric-deploy/users/[email protected]/

cp -rf /root/fabric/fabric-deploy/certs_by_crypto/peerOrganizations/org1.alcor.com/users/[email protected]/msp  /root/fabric/fabric-deploy/users/[email protected]/

cp /root/fabric/fabric-deploy/peer0.org1.alcor.com/core.yaml  /root/fabric/fabric-deploy/users/[email protected]/

vim /root/fabric/fabric-deploy/users/[email protected]/peer.sh
chmod  +x /root/fabric/fabric-deploy/users/[email protected]/peer.sh
./peer.sh node status
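
peer.sh is the CLI wrapper described in section A; for reference, a minimal sketch looks roughly like the following (the environment variable values here are assumptions based on the user directory just built):

#!/bin/bash
# Hypothetical wrapper: run the peer CLI as [email protected] against peer0.org1.
export FABRIC_CFG_PATH=/root/fabric/fabric-deploy/users/[email protected]
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=/root/fabric/fabric-deploy/users/[email protected]/msp
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_ROOTCERT_FILE=/root/fabric/fabric-deploy/users/[email protected]/tls/ca.crt
export CORE_PEER_ADDRESS=peer0.org1.alcor.com:7051
/root/fabric/fabric-deploy/bin/peer "$@"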

E. Some Commonly Used Fabric Commands

I. Fabric CA commands

1. View certificate information

Use the openssl command to view certificate details

openssl x509 -in  /root/fabric-ca-files/Organizations/alcor.com/msp/admincerts/cert.pem  -text

2. List identities

fabric-ca-client identity  list  -H /root/fabric-ca-files/admin

3. Remove an identity

fabric-ca-client  identity remove [email protected] -H /root/fabric-ca-files/admin

4. Inspect the genesis block

configtxgen -inspectBlock genesisblock | jq

This converts the block into readable JSON; jq must be installed.

To be continued…
