And now for the thrilling conclusion to… HLFv1_RPiDS! (<— a what now?)
- Setting up a Hyperledger Fabric development environment on a Raspberry Pi
- Building Hyperledger Fabric on Raspberry Pi
- Setting up a Docker Swarm on Raspberry Pi
- Deploying a Hyperledger Fabric network on the Swarm with Docker Stack and testing with BYFN.
In this section we’ll go over the steps I take to launch the network and talk through some of the configuration sections to watch out for as you set up your own.
But first a quick proof of work demonstration:
First things first, verify your swarm is up and running with docker node ls. You should see all your nodes in the Active state; if you don’t, start troubleshooting (tip: start with a reboot for that node 😉).
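If you’d rather not eyeball the table, here’s a quick sketch that flags any node that isn’t Ready and Active (the `--format` fields are standard docker node ls template fields):

```shell
# Print only nodes that are NOT Ready/Active -- empty output means all is well
docker node ls --format '{{.Hostname}} {{.Status}} {{.Availability}}' \
  | awk '$2 != "Ready" || $3 != "Active" { print "CHECK: " $0 }'
```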
Create an Attachable Overlay Network
In order for our nodes to communicate we’ll need an overlay network; in our case we want an attachable one.
docker network create -d overlay --attachable hyperledger-fabric
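To double-check the network came up the way we want, you can grep the inspect output. This is just a sanity check, not a required step:

```shell
# Both lines should appear: the driver is overlay and the network is attachable
docker network inspect hyperledger-fabric | grep -E '"Driver": "overlay"|"Attachable": true'
```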
Clone the repo to each device
Next we need the repo with all the magic in it. You’ll want to clone or fork it yourself and update the constraint names to the hostnames you used for your RasPis, and possibly other things like your username. See this commit to know what I’m talking about (as I just did it myself for this project). This repo needs to be cloned on each of your workers as well — primarily so the certificates are easy to pass around. I’ll go into a full dissection of the docker compose file I’ve used, and what you should change in other situations, later in this article.
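If your edits are limited to hostnames and paths, a couple of sed substitutions will do most of the work. Note that pi-node-0 and /home/pi below are placeholder values for whatever your setup actually uses:

```shell
# Swap the author's master hostname and home directory for your own.
# pi-node-0 and /home/pi are example values -- substitute yours.
sed -i \
  -e 's/hyperledger-swarm-master/pi-node-0/g' \
  -e 's|/home/jmotacek|/home/pi|g' \
  docker-compose-cli.yaml
```

Run the same substitutions on every node’s clone so the volume paths match.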
If you have suggestions or improvements for the compose configuration please submit a pull request. I’ve been trying to think of a way to make it more universal but I always get sidetracked…
Clone this or your modified repo to your Swarm master and workers:
git clone https://github.com/Cleanshooter/hyperledger-pi-composer.git
I actually like watching the nodes communicate back and forth, so I have some monitoring set up; plus it’s helpful if something isn’t working and you need to debug. If you’d like to watch as well…
on Node 1 run:
tail ./hyperledger-pi-composer/logs/peer1org1log.txt -f
on Node 2 run:
tail ./hyperledger-pi-composer/logs/peer0org2log.txt -f
on Node 3 run:
tail ./hyperledger-pi-composer/logs/peer1org2log.txt -f
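If you’d rather watch everything from one terminal, something like this works, assuming passwordless SSH between your Pis. The node1/node2/node3 hostnames and the pi user are placeholders for your actual worker names:

```shell
# Tail every peer log at once over SSH (hostnames and user are examples)
for host in node1 node2 node3; do
  ssh "pi@$host" "tail -f ~/hyperledger-pi-composer/logs/*log.txt" &
done
wait   # Ctrl-C tears down all the background tails
```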
In my setup I have 4 nodes in my swarm and 6 containers:
- 1 Orderer
- 2 Organizations with 2 peers each.
- 1 CLI image to run the BYFN script
If you change the architecture in your version of the docker compose file, you’ll need to update crypto-config.yaml, generate new certificates, and create a new genesis block as well.
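For reference, that regeneration boils down to the standard Fabric 1.0 tooling. The profile and channel names below are the BYFN defaults; yours come from your own configtx.yaml and may differ:

```shell
# Regenerate certificates for the topology described in crypto-config.yaml
cryptogen generate --config=./crypto-config.yaml

# Build a new genesis block and channel transaction
# (TwoOrgsOrdererGenesis / TwoOrgsChannel are the BYFN default profiles)
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
```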
Start it up
The docker compose file we’ll use will not only start up your nodes on specific workers, it will also automatically mount the needed volumes for the certs and keys from the repo. In another tutorial I’ll go over how to generate your own certs (leave comments if interested), but for the sake of this introductory tutorial I’m going to skip that part.
Not only will the docker compose file start up your containers and set the proper configuration but it will also automatically run our Build Your First Network [BYFN] test.
On your Master node cd into the repo you cloned and run:
docker stack deploy --compose-file docker-compose-cli.yaml HLFv1_RPiDS && docker ps
IMMEDIATELY after it’s finished starting up, look at the docker ps list for the CLI container ID (the one running jmotacek/fabric-tools:armv7l-1.0.7).
Find the ID and run
docker logs -f [container ID]
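To skip hunting through docker ps by hand, you can filter on the image name and feed the result straight into docker logs:

```shell
# Follow the CLI container's logs by image ancestry instead of copying the ID
docker logs -f "$(docker ps -q --filter ancestor=jmotacek/fabric-tools:armv7l-1.0.7)"
```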
This will show you the progress of the BYFN test as it goes through the steps to work with the various peers and nodes. The first time you run this it will take a while to complete, as the various nodes download the docker images they need to launch the containers that fulfill the BYFN actions. The second run should complete in under 5 minutes.
Shut it down
Once you’re happy, you can shut your test network down like so:
docker stack rm HLFv1_RPiDS
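One thing docker stack rm won’t do for you: the chaincode containers and images that Fabric spawned during BYFN (named dev-*) stick around on the workers. If you want a genuinely clean slate, clear them on each node:

```shell
# Remove leftover chaincode containers and images (run on every node)
docker ps -aq --filter 'name=dev-' | xargs -r docker rm -f
docker images -q 'dev-*' | xargs -r docker rmi -f
```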
If you have any issues with your tests let me know and I’ll try to help if I can. I’ve run into a multitude of issues in my own tests, so I might have some ideas. If you are having issues, please post the actual outputs and errors… otherwise it can be hard to follow.
Dissecting the Docker Compose file
As promised, here are some more details on the docker compose file that runs it all. I’ve added some comments below to provide more background and context than the version out on GitHub.
# Copyright Joe Motacek All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
version: '3'

services:
  orderer:
    image: jmotacek/fabric-orderer:armv7l-1.0.7
    environment:
      # The orderer "general" settings are fairly common across most HL configs;
      # I used most of the settings found in other repos.
      # You can configure HL to work without TLS, but that seems pointless since
      # you really need it in a private blockchain.
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      # This setting is CRUCIAL!
      # I spent hours digging into the repo's Go code and debugging to figure this one out...
      # Basically, out of the box HL expects to run on a much larger system than a RasPi,
      # and the default memory allocation for a container exceeds what a RasPi actually has.
      # This setting changes the max limit the system will try to take.
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      # Here you can designate your own images (if you want to try a 1.1 build, for example).
      # NOTE: if you did your own builds you'll need to push them to Docker Hub or somewhere
      # your Swarm workers can find them. Workers won't see images that only exist on your
      # master node (or wherever you built your images).
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    # I can't remember which of these two is more important, but if they aren't set the
    # nodes won't be able to communicate properly with each other. I think it's both,
    # honestly... I know HL needs the hostname for something, and the network alias is
    # needed so the nodes can reach each other.
    hostname: orderer.example.com
    networks:
      hyperledger-fabric:
        aliases:
          - orderer.example.com
    volumes:
      # Genesis blocks are created with the generate script, which I can cover in a
      # separate tutorial if anyone is interested.
      - /home/jmotacek/hyperledger-pi-composer/channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
    # You don't necessarily need to constrain where Swarm places your nodes; I did here
    # so I could do some fun stuff with blinky lights. (See the video demo.)
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    command: orderer

  peer0_org1:
    image: jmotacek/fabric-peer:armv7l-1.0.7
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # The network mode tells the generated containers to use the external network we
      # defined. This way generated chaincode containers can attach to it.
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-fabric
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR_RUNTIME=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    hostname: peer0.org1.example.com
    networks:
      hyperledger-fabric:
        aliases:
          - peer0.org1.example.com
    volumes:
      - /var/run/:/host/var/run/
      - /home/jmotacek/hyperledger-pi-composer/logs:/home/logs
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - /home/jmotacek/hyperledger-pi-composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
      - 7053:7053
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    # You really only need the "peer node start" portion of the command below.
    # I added the external logging for convenience and demos.
    command: bash -c "peer node start > /home/logs/peer0org1log.txt 2>&1"

  # The other peers are just slightly modified permutations of the original.
  # You can see them on GitHub; there isn't anything unique to mention about
  # each one besides the numbers changing...
  peer1_org1:
  peer0_org2:
  peer1_org2:

  cli:
    image: jmotacek/fabric-tools:armv7l-1.0.7
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
      - CORE_VM_DOCKER_HOSTCONFIG_MEMORY=536870912
      - CORE_CHAINCODE_BUILDER=jmotacek/fabric-ccenv:armv7l-1.0.7
      - CORE_CHAINCODE_GOLANG=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_CAR=jmotacek/fabric-baseos:armv7l-0.3.2
      - CORE_CHAINCODE_JAVA=jmotacek/fabric-javaenv:armv7l-1.0.7
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    # I give myself 30 seconds to start tailing the CLI container so I can watch the BYFN output
    command: /bin/bash -c 'sleep 30; ./scripts/script.sh; while true; do sleep 20170504; done'
    volumes:
      - /var/run/:/host/var/run/
      - /home/jmotacek/hyperledger-pi-composer/chaincode:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - /home/jmotacek/hyperledger-pi-composer/crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - /home/jmotacek/hyperledger-pi-composer/scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - /home/jmotacek/hyperledger-pi-composer/channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    # This prevents the CLI container from starting until all the other containers are running successfully.
    depends_on:
      - orderer
      - peer0_org1
      - peer1_org1
      - peer0_org2
      - peer1_org2
    deploy:
      placement:
        constraints:
          - node.hostname == hyperledger-swarm-master
    networks:
      hyperledger-fabric:
        aliases:
          - cli.example.com

# This is external by design. We need an external attachable network for the generated
# containers to communicate. As HL executes chaincode it spawns containers throughout
# the swarm to process it, and those generated containers need access to the network.
# After your BYFN run finishes, run docker ps on node 1 and node 3 and you'll see what I mean.
networks:
  hyperledger-fabric:
    external: true