Hyperledger Fabric Manual Deployment
Background
The official deployment is heavily wrapped; even after deploying successfully you still do not really understand the components inside. This time I will build everything piece by piece by hand on three virtual machines. The OS is CentOS 7. Since I am doing this inside my company and the company checks IP addresses, I used NAT mode for the VMs — you only need to watch out for port conflicts. We start with a simple Fabric chain to get familiar; the overall architecture is shown below.
Pre-installation preparation
VirtualBox: https://www.virtualbox.org/wiki/Downloads
CentOS 7: https://www.centos.org/download/
PuTTY: SSH client, https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
FileZilla: FTP client, https://filezilla-project.org/download.php?type=client
NAT configuration (a sketch of example port-forwarding rules follows this list):
VM 1: orderer & Org1 peer0
VM 2: Org1 peer1
VM 3: Org2 peer0
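The original port-forwarding table is not reproduced here, so below is a minimal sketch of what the VirtualBox NAT rules for the first VM could look like. The VM name "fabric-vm1" and the SSH host port 2222 are assumptions; only the guest ports 22, 7050 and 7056 come from this guide. Because /etc/hosts later maps all component domains to the host's IP 10.222.48.152, the host port of each Fabric service should normally match its guest port.
VBoxManage modifyvm "fabric-vm1" --natpf1 "ssh,tcp,,2222,,22"
VBoxManage modifyvm "fabric-vm1" --natpf1 "orderer,tcp,,7050,,7050"
VBoxManage modifyvm "fabric-vm1" --natpf1 "peer0-org1,tcp,,7056,,7056"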
CentOS 7 internal network configuration — do this on all three servers:
Basic network configuration
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
then set ONBOOT=yes
service network restart
verify: ping 10.222.48.152
vi /etc/hosts , append the domain-to-IP mappings below; pinging the domains verifies it. The components in the overall architecture diagram are identified in the Fabric network by the following domain names.
//
10.222.48.152 orderer.example.com
10.222.48.152 peer0.org1.example.com
10.222.48.152 peer1.org1.example.com
10.222.48.152 peer0.org2.example.com
//
Firewall: every port mentioned in the NAT table above has to be opened on the corresponding server — note that what you open is the Guest Port.
firewall-cmd --zone=public --add-port=22/tcp --permanent
firewall-cmd --reload
verify:
On the host machine: telnet 10.222.48.152 <Host Port configured in NAT>
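In the same way, the Fabric ports used later on this VM also have to be opened — for example 7050 (orderer) and 7056/7057 (peer0.org1); adjust the list to whatever your own NAT table forwards:
firewall-cmd --zone=public --add-port=7050/tcp --permanent
firewall-cmd --zone=public --add-port=7056/tcp --permanent
firewall-cmd --zone=public --add-port=7057/tcp --permanent
firewall-cmd --reload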
If you want to change the hostname:
hostnamectl set-hostname orderer
Install Docker
yum install -y docker
vi /etc/sysconfig/docker-storage , overwrite the file with the content below
//
DOCKER_STORAGE_OPTIONS="--storage-driver devicemapper "
//
systemctl start docker
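To confirm Docker is running and actually using the devicemapper driver, two standard checks:
systemctl status docker
docker info | grep -i "storage driver"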
Install the basic Fabric components
docker pull hyperledger/fabric-javaenv:x86_64-1.1.0
docker pull hyperledger/fabric-ccenv:x86_64-1.1.0
docker pull hyperledger/fabric-baseos:x86_64-0.4.6
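A quick check that the three images were pulled (standard Docker command):
docker images | grep hyperledger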
yum install -y wget
mkdir /root/fabric-deploy
cd /root/fabric-deploy
Download the Fabric network helper tools
wget https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/linux-amd64-1.1.0/hyperledger-fabric-linux-amd64-1.1.0.tar.gz
tar -xvf hyperledger-fabric-linux-amd64-1.1.0.tar.gz
On the host machine (Windows in my case), a port is sometimes already occupied by a server started by VirtualBox. The two commands below tell you which process holds it; either stop that virtual server and restart it, or change the port in the NAT forwarding rule. The details of the NAT-forwarded ports are covered later.
netstat -aon|findstr "<port>"
tasklist|findstr "<PID>"
Start the installation
First, get familiar with the helper tools
ll /root/fabric-deploy/bin
configtxgen: generates chain configuration, e.g. the genesis block and channel configuration
configtxlator: translates channel configuration between protobuf and JSON (not used here)
cryptogen: certificate tool, generates the certificates
get-docker-images.sh: fetches the Docker images, not used here
orderer: the orderer binary, e.g. to start the orderer
peer: the peer binary, e.g. to start a peer or query its status
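As a sanity check that the binaries run on this machine (the subcommands/flags below are the ones shipped with the 1.1.0 tools):
./bin/peer version
./bin/cryptogen version
./bin/configtxgen -version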
Next come the configuration files: ls /root/fabric-deploy/config/
These three files are templates; every setting in them is documented, which is useful later when you dig deeper.
configtx.yaml: used when generating the genesis block; contains the channel configuration
core.yaml: peer configuration
orderer.yaml: orderer configuration
With that rough understanding, let's begin.
First generate the certificates, using the cryptogen tool:
cd /root/fabric-deploy
vi crypto-config.yaml , with the following content
//
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2   # automatically maps to peer0, peer1
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 1   # peer0
    Users:
      Count: 1
//
./bin/cryptogen generate --config=crypto-config.yaml --output ./certs
yum install -y tree
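Before walking through the structure in detail, a quick overview of everything cryptogen generated:
tree -L 3 certs/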
Certificate structure
Understanding the certificates helps a lot with troubleshooting.
The command below shows the structure of orderer.example.com's certificates:
tree -A certs/ordererOrganizations/example.com/orderers/orderer.example.com/
├── msp
│   ├── admincerts : the administrator certificate — the certificate this orderer.example.com holds towards the example.com orderer organization
│   │   └── Admin@example.com-cert.pem
│   ├── cacerts : used by the orderer to validate user certificates, e.g. --> openssl verify -CAfile ./cacerts/ca.example.com-cert.pem admincerts/Admin@example.com-cert.pem
│   │   └── ca.example.com-cert.pem
│   ├── keystore : the private key this orderer signs with when operating on blocks
│   │   └── 16da15d400d4ca4b53d369b6d6e50a084d4354998c3b4d7a0934635d3907f90f_sk
│   ├── signcerts
│   │   └── orderer.example.com-cert.pem
│   └── tlscacerts
│       └── tlsca.example.com-cert.pem
└── tls : the private key (server.key) and certificate (server.crt) the orderer uses when serving requests; ca.crt is the CA that signed this certificate and must be provided to the requesting side
    ├── ca.crt
    ├── server.crt
    └── server.key
The certificate structure of a peer — these are peer0's certificates towards org1, not the certificates of a user who operates peer0:
tree -A ./certs/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/
├── msp
│   ├── admincerts
│   │   └── Admin@org1.example.com-cert.pem : read when peer0 starts up; whoever holds this certificate is the admin
│   ├── cacerts : used to validate peer0's signcerts --> openssl verify -CAfile cacerts/ca.org1.example.com-cert.pem signcerts/peer0.org1.example.com-cert.pem
│   │   └── ca.org1.example.com-cert.pem
│   ├── keystore : peer0's private key, used when operating on blocks
│   │   └── bc2cc295ee6df54d35e6f5df6c0cdd297fb0486eeb81cd5058ec2536ef8afe20_sk
│   ├── signcerts : the certificate peer0 signs with (not the one used for gRPC/TLS)
│   │   └── peer0.org1.example.com-cert.pem
│   └── tlscacerts
│       └── tlsca.org1.example.com-cert.pem
└── tls
├── ca.crt
├── server.crt
└── server.key
As for user certificates, every peer organization has its users and an Admin user, and every orderer organization likewise has a user and an admin user; the structure looks like the following:
├── Admin@org1.example.com
│   ├── msp
│   │   ├── admincerts
│   │   │   └── Admin@org1.example.com-cert.pem
│   │   ├── cacerts
│   │   │   └── ca.org1.example.com-cert.pem
│   │   ├── keystore
│   │   │   └── fefe0cc627c067775b1fe1a1809fe8fb9dfe0f327d32682cc51837f10f78947c_sk
│   │   ├── signcerts
│   │   │   └── Admin@org1.example.com-cert.pem
│   │   └── tlscacerts
│   │       └── tlsca.org1.example.com-cert.pem
│   └── tls
│       ├── ca.crt
│       ├── client.crt
│       └── client.key
└── User1@org1.example.com
    ├── msp
    │   ├── admincerts
    │   │   └── Admin@org1.example.com-cert.pem
    │   ├── cacerts
    │   │   └── ca.org1.example.com-cert.pem
    │   ├── keystore
    │   │   └── 9d0a6cb707c9cf1a21481b28dee69ee0017669dd07bb3b33911fc6090e109756_sk
    │   ├── signcerts
    │   │   └── User1@org1.example.com-cert.pem
    │   └── tlscacerts
    │       └── tlsca.org1.example.com-cert.pem
    └── tls
        ├── ca.crt
        ├── client.crt
        └── client.key
First, understand that the certificates are used in layers. For example, an orderer presents a certificate to the orderer organization it belongs to, and that orderer organization in turn presents a certificate to the whole chain. Likewise peer0 presents a certificate to its peer organization, and that peer organization presents a certificate to the whole chain.
For each orderer organization, it needs certificates proving that it is an orderer organization belonging to this chain. These live under the organization directory /root/fabric-deploy/certs/ordererOrganizations/example.com: the msp and tls there face the whole chain, while users holds the users inside this orderer organization, i.e. the ones allowed to operate it. Since this is SOLO mode there is a single orderer and hence a single orderer organization, so you only see example.com and orderer.example.com. Each orderer's certificates towards its own organization live in /root/fabric-deploy/certs/ordererOrganizations/example.com/orderers/orderer.example.com, which contains only msp and tls and faces that orderer organization.
The peer organizations' certificate structure is similar. Each peer organization needs to present certificates to the whole chain proving that it is a peer organization of this chain. Taking org1 as an example, the msp and tls under /root/fabric-deploy/certs/peerOrganizations/org1.example.com/ also face the whole chain, and users holds the users of the whole peer organization. What sits under /root/fabric-deploy/certs/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/ are peer0's certificates towards its own peer organization (org1).
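To make the chain of trust concrete, you can check it yourself with standard openssl commands (paths relative to /root/fabric-deploy):
# peer0's signing certificate is issued by org1's CA
openssl verify -CAfile certs/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem \
    certs/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp/signcerts/peer0.org1.example.com-cert.pem
# see who issued a certificate and to whom
openssl x509 -noout -subject -issuer \
    -in certs/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem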
Generate the runtime directories
First build a deployment directory; if anything goes wrong with the runtime directory later, you can simply replace it from the deployment directory.
orderer.example.com:
mkdir orderer.example.com
#copy the orderer binary
cp bin/orderer orderer.example.com/
#copy the orderer's certificates
cp -rf certs/ordererOrganizations/example.com/orderers/orderer.example.com/* orderer.example.com/
cd orderer.example.com/
#create the orderer's configuration file, which among other things contains this orderer's port settings
vi orderer.yaml
General:
    LedgerType: file
    ListenAddress: 0.0.0.0
    ListenPort: 7050
    TLS:
        Enabled: true
        PrivateKey: ./tls/server.key
        Certificate: ./tls/server.crt
        RootCAs:
          - ./tls/ca.crt
        # ClientAuthEnabled: false
        # ClientRootCAs:
    LogLevel: debug
    LogFormat: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
    # GenesisMethod: provisional
    GenesisMethod: file
    GenesisProfile: SampleInsecureSolo
    GenesisFile: ./genesisblock
    LocalMSPDir: ./msp
    LocalMSPID: OrdererMSP
    Profile:
        Enabled: false
        Address: 0.0.0.0:6060
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
FileLedger:
    Location: /opt/app/fabric/orderer/data
    Prefix: hyperledger-fabric-ordererledger
RAMLedger:
    HistorySize: 1000
Kafka:
    Retry:
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
        NetworkTimeouts:
            DialTimeout: 10s
            ReadTimeout: 10s
            WriteTimeout: 10s
        Metadata:
            RetryBackoff: 250ms
            RetryMax: 3
        Producer:
            RetryBackoff: 100ms
            RetryMax: 3
        Consumer:
            RetryBackoff: 2s
    Verbose: false
    TLS:
        Enabled: false
        PrivateKey:
            #File: path/to/PrivateKey
        Certificate:
            #File: path/to/Certificate
        RootCAs:
            #File: path/to/RootCAs
    Version:
Create the data folder
mkdir data
Generate peer0:
cd ../
mkdir peer0.org1.example.com
cp bin/peer peer0.org1.example.com/
cp -rf certs/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/* peer0.org1.example.com/
cd peer0.org1.example.com/
vi core.yaml
core.yaml content is as follows
logging:
    peer: debug
    cauthdsl: warning
    gossip: warning
    ledger: info
    msp: warning
    policies: warning
    grpc: error
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
peer:
    id: peer0.org1.example.com
    networkId: dev
    listenAddress: 0.0.0.0:7056
    address: 0.0.0.0:7056
    addressAutoDetect: false
    gomaxprocs: -1
    gossip:
        # bootstrap: 127.0.0.1:7056
        bootstrap: peer0.org1.example.com:7056
        useLeaderElection: true
        orgLeader: false
        endpoint:
        maxBlockCountToStore: 100
        maxPropagationBurstLatency: 10ms
        maxPropagationBurstSize: 10
        propagateIterations: 1
        propagatePeerNum: 3
        pullInterval: 4s
        pullPeerNum: 3
        requestStateInfoInterval: 4s
        publishStateInfoInterval: 4s
        stateInfoRetentionInterval:
        publishCertPeriod: 10s
        skipBlockVerification: false
        dialTimeout: 3s
        connTimeout: 2s
        recvBuffSize: 20
        sendBuffSize: 200
        digestWaitTime: 1s
        requestWaitTime: 1s
        responseWaitTime: 2s
        aliveTimeInterval: 5s
        aliveExpirationTimeout: 25s
        reconnectInterval: 25s
        externalEndpoint: peer0.org1.example.com:7056
        election:
            startupGracePeriod: 15s
            membershipSampleInterval: 1s
            leaderAliveThreshold: 10s
            leaderElectionDuration: 5s
    events:
        address: 0.0.0.0:7057
        buffersize: 100
        timeout: 10ms
    tls:
        enabled: true
        cert:
            file: ./tls/server.crt
        key:
            file: ./tls/server.key
        rootcert:
            file: ./tls/ca.crt
        serverhostoverride:
    fileSystemPath: /opt/app/fabric/peer/data
    BCCSP:
        Default: SW
        SW:
            Hash: SHA2
            Security: 256
            FileKeyStore:
                KeyStore:
    mspConfigPath: msp
    localMspId: Org1MSP
    profile:
        enabled: true
        listenAddress: 0.0.0.0:6363
vm:
    endpoint: unix:///var/run/docker.sock
    docker:
        tls:
            enabled: false
            ca:
                file: docker/ca.crt
            cert:
                file: docker/tls.crt
            key:
                file: docker/tls.key
        attachStdout: false
        hostConfig:
            NetworkMode: host
            Dns:
                # - 192.168.0.1
            LogConfig:
                Type: json-file
                Config:
                    max-size: "50m"
                    max-file: "5"
            Memory: 2147483648
chaincode:
    peerAddress:
    id:
        path:
        name:
    builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION)
    golang:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
    car:
        runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION)
    java:
        Dockerfile: |
            from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
    startuptimeout: 300s
    executetimeout: 30s
    mode: net
    keepalive: 0
    system:
        cscc: enable
        lscc: enable
        escc: enable
        vscc: enable
        qscc: enable
    logging:
        level: info
        shim: warning
        format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
ledger:
    blockchain:
    state:
        stateDatabase: goleveldb
        couchDBConfig:
            couchDBAddress: 127.0.0.1:5987
            username:
            password:
            maxRetries: 3
            maxRetriesOnStartup: 10
            requestTimeout: 35s
            queryLimit: 10000
    history:
        enableHistoryDatabase: true
Copy to the runtime directories
mkdir -p /opt/app/fabric/{orderer,peer}
cp -rf ./orderer.example.com/* /opt/app/fabric/orderer/
cp -rf ./peer0.org1.example.com/* /opt/app/fabric/peer/
Create the genesis block and run the orderer server
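Note: configtxgen reads the TwoOrgsOrdererGenesis and TwoOrgsChannel profiles used below from a configtx.yaml in the current directory (or FABRIC_CFG_PATH), here /root/fabric-deploy. If you have not prepared one yet, the following is a minimal sketch modeled on the standard two-org sample — the consortium name, the batch settings and the Org2 anchor-peer port (7051) are assumptions you should adapt to your own setup.
vi configtx.yaml
//
Profiles:
    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: certs/ordererOrganizations/example.com/msp
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: certs/peerOrganizations/org1.example.com/msp
        AnchorPeers:
            - Host: peer0.org1.example.com
              Port: 7056
    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: certs/peerOrganizations/org2.example.com/msp
        AnchorPeers:
            - Host: peer0.org2.example.com
              Port: 7051
Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 98 MB
        PreferredMaxBytes: 512 KB
    Organizations:
Application: &ApplicationDefaults
    Organizations:
//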
./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./genesisblock
cp ./genesisblock /opt/app/fabric/orderer/
cd /opt/app/fabric/orderer/
./orderer 2>&1 |tee log
Seeing this line means it started successfully:
Start -> INFO 154 Beginning to serve requests
Switch to another terminal and start the peer0 server
cd /opt/app/fabric/peer
./peer node start 2>&1 |tee log
Seeing the following means it started successfully:
[nodeCmd] serve -> INFO 033 Starting peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[10.0.2.15:7056]
2018-06-23 15:23:58.621 CST [nodeCmd] serve -> INFO 034 Started peer with ID=[name:"peer0.org1.example.com" ], network ID=[dev], address=[10.0.2.15:7056]
2018-06-23 15:23:58.621 CST [nodeCmd] func7 -> INFO 035 Starting profiling server with listenAddress = 0.0.0.0:6363
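Optionally, confirm that the orderer and peer are listening on the expected ports (standard iproute2 command on CentOS 7):
ss -lntp | grep -E '7050|7056|7057'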
The other peers can be configured in the same way as org1's peer0.
Users
Open another terminal on the orderer server. The channel will be created from the perspective of the org1 peer0 organization, so what we set up here is the admin user; an ordinary user is configured in a similar way.
cd /root/fabric-deploy
mkdir Admin@org1.example.com
# cp peer0.org1.example.com/core.yaml Admin\@org1.example.com/
cp -rf certs/peerOrganizations/org1.example.com/users/Admin\@org1.example.com/* Admin\@org1.example.com/
#peer0's configuration
cp peer0.org1.example.com/core.yaml Admin\@org1.example.com/
#used to run peer commands
cp bin/peer Admin\@org1.example.com/
cd Admin\@org1.example.com/
#wraps some commonly used variables
vi peer.sh
PATH=`pwd`/../bin:$PATH
export FABRIC_CFG_PATH=`pwd`
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_TLS_CERT_FILE=./tls/client.crt
export CORE_PEER_TLS_KEY_FILE=./tls/client.key
export CORE_PEER_MSPCONFIGPATH=./msp
export CORE_PEER_ADDRESS=peer0.org1.example.com:7056
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_TLS_ROOTCERT_FILE=./tls/ca.crt
export CORE_PEER_ID=cli
export CORE_LOGGING_LEVEL=INFO
./peer $*
#test (make the wrapper executable first)
chmod +x peer.sh
./peer.sh node status
The following output means it works:
//
status:STARTED
2018-06-23 16:21:30.426 CST [main] main -> INFO 001 Exiting.....
//
Once you have a user with admin rights (in fact, whether someone is an admin is determined by whether the certificate they present is an admin certificate), channel creation can begin.
Channel creation
Generate the mychannel.tx file
./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx mychannel.tx -channelID mychannel
Generate the configuration transaction that designates org1's anchor peer for mychannel
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
Generate the configuration transaction that designates org2's anchor peer for mychannel
./bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP
Create the channel (the --cafile is the orderer's TLS CA certificate; copy it from certs/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem into the current directory first)
./peer.sh channel create -o orderer.example.com:7050 -c mychannel -f ../mychannel.tx --tls true --cafile tlsca.example.com-cert.pem
Then designate org1 peer0 as org1's anchor peer on the channel (the channel must already exist before this update; the other org's peer is handled the same way from its own admin)
./peer.sh channel update -o orderer.example.com:7050 -c mychannel -f ../Org1MSPanchors.tx --tls true --cafile ./tlsca.example.com-cert.pem
./peer.sh channel list
Note that channel list only shows channels the peer has joined — see the sketch below.
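peer channel create writes the channel's genesis block (mychannel.block) into the current directory; joining peer0 with it is what makes the channel appear in channel list. A minimal sketch:
./peer.sh channel join -b mychannel.block
./peer.sh channel list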
At this point, the setup of the chain and the channel is complete.
Chaincode
Refer:
http://www.lijiaocn.com/%E9%A1%B9%E7%9B%AE/2018/04/26/hyperledger-fabric-deploy.html