Hyperledger Fabric v1.0 on a Raspberry Pi Docker Swarm – Part 1

There have already been articles published on use cases for combining IoT with a private blockchain.  The possibilities are really exciting, but what if we could run the blockchain ON our IoT network?  That sounds like a fun project to me!

With that goal in mind and a bit of research, I was led to Hyperledger Fabric.  To learn more about it, check this out: Hyperledger Overview.  Getting Hyperledger Fabric to run on a Raspberry Pi presented several major hurdles:

  • No one had compiled the project for the ARM architecture before.
  • There wasn’t any documentation on how to run Hyperledger Fabric on a Docker Swarm.

But hurdles are what make projects fun, right? So let’s go through the steps so that you can set up your own.

  1. Setting up a Hyperledger Fabric development environment on a Raspberry Pi
  2. Building Hyperledger Fabric on Raspberry Pi
  3. Setting up a Docker Swarm on Raspberry Pi
  4. Deploying a Hyperledger Fabric network on the Swarm with Docker Stack
  5. Running the end-to-end “Build Your First Network” scenario to validate our build.

Let’s get started!

Continue reading…

Putting a newer version of Node.js on LinkIt Smart 7688 Duo

Today I’m going to step you through putting a newer version of Node.js on your LinkIt Smart 7688.  The default version of Node available is 0.12.7 which, let’s face it, is completely outdated and essentially useless to any serious Node.js developer.  There are some ongoing challenges with putting a completely up-to-date version of Node on the MIPS architecture: key libraries needed to build Node.js have not been updated for MIPS yet… and since those libraries are out of date, we can only get so far.

Set up your Linux machine.

I use Ubuntu Server 16.04.2 LTS at the time of writing this tutorial.  Get VirtualBox, download an Ubuntu 16 .ISO, create a new VM, make sure your disk is around 50GB (5GB is not enough… found this out the hard way), select the ISO you downloaded, and install the operating system (be sure to add SSH for convenience).

I always use SSH so I can copy and paste commands.  To set up your VM so you can SSH to it:

  1. Open the settings for the VM (you can do this while it’s running)
  2. Go to Network > Adapter 1 > Advanced > Port Forwarding
  3. Forward 127.0.0.1:2200 to 10.0.2.15:22
    • Check ifconfig inside the VM to verify its IP address
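
If you prefer the command line to the GUI, the same forwarding rule can be added with VBoxManage; a minimal sketch, assuming your VM is named “ubuntu16”:

# Add the NAT port-forwarding rule: host 127.0.0.1:2200 -> guest 10.0.2.15:22
# (run with the VM powered off, or use "VBoxManage controlvm ubuntu16 natpf1 ..." while it is running)
VBoxManage modifyvm "ubuntu16" --natpf1 "guestssh,tcp,127.0.0.1,2200,10.0.2.15,22"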

SSH to it from your host with this:

ssh username@127.0.0.1 -p 2200

Alternatively you can put the adapter in Bridged mode… then check ifconfig for the VM’s IP.

Setup for the build

I’m following the guide here, with modified instructions for the things they forgot (a consolidated command sketch follows the step list below):

https://docs.labs.mediatek.com/resource/linkit-smart-7688/en/tutorials/firmware-and-bootloader/build-the-firmware-from-source-codes

and with help from nxhack, who runs this repo:

https://github.com/nxhack/openwrt-node-packages/tree/for-15.05

  1. Add the python package (if you didn’t install it during the Ubuntu OS install)
  2. Install prerequisite packages to build the firmware:
  3. Download the OpenWrt CC source codes:
  4. Prepare the default configuration file for feeds:
  5. Add the LinkIt Smart 7688 development board’s feed and the Node.js feed to the build’s feeds:

    Add the following lines to the bottom of the file:
    src-git linkit https://github.com/MediaTek-Labs/linkit-smart-7688-feed.git
    src-git node https://github.com/nxhack/openwrt-node-packages.git;for-15.05

  6. Update the feed information for all available packages to build the firmware:
  7. Change the packages installed as default:
  8. Fix the build error caused by the Node.js version dependency:
  9. Apply the Wi-Fi driver hack so the build completes
    Copy the kernel objects that support kernel 3.18.45:
    see: MediaTek-Labs/linkit-smart-7688-feed#37
  10. Install all packages:
  11. Use the custom Node.js packages:
  12. Prepare the kernel configuration:
    • Select the following options:
      • Target System: Ralink RT288x/RT3xxx
      • Subtarget: MT7688 based boards
      • Target Profile: LinkIt7688
    • (Optional) Go into Languages > Node.js > Configuration (under the node package) > Select your desired version.
    • (Optional) Enable the modules you want (Caution: there is a 30 MB limit, so not everything will fit)
      • (Recommended) node-npm (it is a separate module in v6)
    • Save and exit (use the default configuration file without any modification)
  13. !!!BEFORE YOU START!!!
    RUN THIS FROM YOUR VM NOT OVER SSH
    This command will take a while, and if your SSH pipe breaks, so will your build
    Start the compilation process:


    There are several options you can use with the make command that are helpful.
    – V=99 (gives verbose output during the build) or
    – V=1 (shows errors, warnings, and notes; less verbose)
    – &> output.log (on the end; stores the output in a log for later viewing)
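
For reference, here is a consolidated sketch of the commands behind the numbered steps above. It follows the usual OpenWrt CC (15.05) workflow, so treat the prerequisite package list, clone URL, and branch as assumptions and defer to the MediaTek guide and the nxhack README where they differ; steps 7 to 9 and 11 are repo-specific edits that are only marked here.

# Step 2: typical OpenWrt build prerequisites on Ubuntu (exact list is in the MediaTek guide)
sudo apt-get install -y build-essential subversion git-core libncurses5-dev zlib1g-dev gawk flex gettext libssl-dev unzip python
# Step 3: OpenWrt CC (15.05) source code
git clone git://git.openwrt.org/15.05/openwrt.git
cd openwrt
# Step 4: default feeds configuration
cp feeds.conf.default feeds.conf
# Step 5: add the LinkIt Smart 7688 and Node.js feeds
cat >> feeds.conf <<'EOF'
src-git linkit https://github.com/MediaTek-Labs/linkit-smart-7688-feed.git
src-git node https://github.com/nxhack/openwrt-node-packages.git;for-15.05
EOF
# Step 6: update feed information
./scripts/feeds update -a
# Steps 7 to 9 and 11: repo-specific edits; follow the nxhack README and MediaTek-Labs/linkit-smart-7688-feed#37
# Step 10: install all packages
./scripts/feeds install -a
# Step 12: kernel configuration (select the target, subtarget, profile and Node.js options listed above)
make menuconfig
# Step 13: start the compilation (run locally, not over SSH)
make V=99 &> output.log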

A Few Hours Later…

  • After the build process finishes successfully, the resulting firmware file will be under “bin/ramips/openwrt-ramips-mt7688-LinkIt7688-squashfs-sysupgrade.bin”. If it’s not there, check the output in your log file or on your screen.  Depending on the hardware resources of the host environment, the build process may take more than 2 hours.
  • You can use this file to update the firmware through the Web UI, or rename it to lks7688.img to update through a USB drive.

A big thanks to nxhack for making this build possible!

Download images for the version of Node you want:

lks7688-node-v4.img

MD5: d7f724da93a1d916bf777f80516a0f33

SHASUM: f74182c70b937909ad1cba6f97e40b5dd3891962

lks7688-node-v6.img

Includes NPM.

MD5: a17c672f87c4b8fa49a253d5534a9229

SHASUM: 8b19b20d23f1faa94c9dd084ca0271904d1dfa5e

Add a New Page to ReactGo

Adding a new page to your ReactGo project can be a lengthy process.  I’m hoping to write a Yeoman generator to do this for me in the near future, but for now…  here are the steps.

Steps:

  1. Add a new route to your app > routes.js
    Add something like: <Route path="routeName" component={page} onEnter={requireAuth} fetchData={[fetchItemData, fetchCartData]} />
    to the Route section.
    Add the necessary supporting imports.
  2. Add a new entry to the navigation.
  3. Create a new page in the pages folder.
    Duplicate existing page, then open.  Update references to new component name.
  4. Add reference to app > pages > index.js
    export { default as ContainerName } from 'pages/ContainerName';
  5. Create a new data fetcher (if necessary)
    Duplicate existing and update path to get data from server and action type to dispatch.
  6. Update the app > fetch-data > index.js with the new reference.
    export { default as fetchMyData } from './fetchMyData';
  7. If a new data set is needed follow these steps: Add a New Dataset to the Store
  8. Create a new Container
    Duplicate an existing component then design and build your page.
  9. After your page’s design is complete, componentize what you should.

Questions for yeoman generator:

  • Provide Route name: String
  • Provide page/component name: String
  • Does the page already exist? Boolean
  • If No => Provide Page Description: String
  • Would you like to generate a new CSS Module for the page? Boolean
  • Auth required? Boolean
  • Does it need to fetch any data? Boolean
    • If yes => How many? Number
    • For count:
      • Provide Data Fetcher name: String
      • Does it already exist? Boolean
      • If No => Does dataset exist?

Dockerizing ReactGo

First up, you’ll need to be able to successfully run a build of your current ReactGo app.  I assume you have Docker installed.

npm run build

Once you’ve got a successful build on your hands, go ahead and create your Dockerfile.  There’s an example below:

You can use a later version of Node if you like; I built and tested mine on v6.10.
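
As a reference point, here is a minimal sketch of such a Dockerfile. The base image and the MONGODB_URI value follow what’s described in this post; the copied paths, the PORT/NODE_ENV variables, and the start command are assumptions, so match them to your own package.json and build output:

# Sketch only: adjust the copied paths and the start command to your project
FROM node:6.10

WORKDIR /usr/src/app
COPY . .
RUN npm install --production

# The "mongo" host MUST match the --link alias used in the run command below
ENV MONGODB_URI mongodb://mongo/MyAppCollection
# Assumed: the app reads these; change or drop them if yours does not
ENV NODE_ENV production
ENV PORT 80

EXPOSE 80
CMD ["npm", "start"]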

docker build -t <image_name> .

NOTE: DO NOT FORGET THE PERIOD AT THE END (that caused me a serious headache one day…)

While that is building, let’s launch a mongo container to run our database (we’ll link to it later).  I assume you’ve already pulled mongo from Docker Hub: docker pull mongo

docker run -d --name <db_container_name> mongo

Now let’s test our container locally… I’m going to bind our exposed port 80 to port 80 on localhost to demonstrate how it’s done.  The same principle applies when launching a container in a production Docker environment.

docker run -d -p 127.0.0.1:80:80 -it --link <db_container_name>:mongo --name <app_container> <image_name>

NOTE: notice the MONGODB_URI ENV variable we set in the Dockerfile.  Take special note of the host portion of mongodb://mongo/MyAppCollection: the “mongo” host MUST match the link alias you give in your run command (e.g. <db_container_name>:mongo).

Check localhost on port 80 and you should see your app.  If not, check the logs using the docker logs <container_ID> command.  Container IDs can be found with the docker ps -a command.

After you’ve successfully tested your container, you’re ready to push the image to your registry.  You can use Docker Hub, but I’m using Bluemix, since it’s private and I get it for free ;).  After you log in to your registry, tag your image for release:

Bluemixers: need your namespace? Try bx ic namespace-get

docker tag <image_name> registry.ng.bluemix.net/<name_space>/<app_name>:<version_tag>

The version tag is optional but recommended; it will default to latest otherwise.  Then push it (push it real good):

docker push registry.ng.bluemix.net/<name_space>/<app_name>:<version_tag>

Then wait… forever…

If you’re on Bluemix you’ll probably want to push up a mongo container as well (remember to start one up so we can link to it, like we did on localhost).

For those of you who are not using Bluemix, substitute your registry’s commands for the bx ic commands below (plain docker for most of the world).

bx ic cpi mongo registry.ng.bluemix.net/<name_space>/mongo

From there I usually create a dummy container with the Bluemix GUI (website) so I can provision a public IP through Bluemix, which I can then reuse when I launch a container with this command.  There is probably a way to provision an IP through the CLI but I’m too lazy to look it up.

bx ic run -d -p <my_public_ip>:80:3000 -it --link <db_container_name>:mongo --name <app_container_name> <name_space>/<image_name_or_id>

 

React, Emmet, ESLint, Babel Packages

First make sure you have the following packages added to your project:

npm install --save-dev babel-eslint eslint eslint-config-airbnb eslint-plugin-react eslint-plugin-jsx-a11y eslint-plugin-import mocha

On a fresh install of Sublime Text, install the following with Package Control.

  • Babel
  • Emmet
  • SublimeLinter
  • SublimeLinter-contrib-eslint

After installation restart Sublime.  Now your files will have linting on them so you can see how nasty your code is. ;)

Sample .eslintrc for reference
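
A minimal sketch matching the packages installed above (airbnb config, babel-eslint parser, Mocha for tests); the rules section is just an example, so adjust to taste:

{
  // sketch: parser/extends/env match the npm packages above; the rules entry is an example
  "parser": "babel-eslint",
  "extends": "airbnb",
  "env": {
    "browser": true,
    "node": true,
    "mocha": true
  },
  "rules": {
    "react/jsx-filename-extension": 0
  }
}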

 

ReactGo – Steps to add a new data set to the store.

I’m documenting this because I want to make sure I catch everything for the eventual yeoman generator I intend to build….

Using my Material-UI modded version of ReactGo boilerplate (commit d5f71395f8eca8a0d9a1be162bbd20674d7cdcdd, Feb 16, 2017)

In order to support multiple datasets being loaded into one page from the router, I’ve modded the current base to suit my needs; a summary of the changes is on my version of the repo.  These changes were based on the work of @k1tzu & @StefanWerW.

Right now I’ve got it down to a 12-step process… (whew!) Let’s get into it…

Continue reading…

motoChecker v3

Well, v3 is done and tested; it adds support for stored validations.

This version has more Dewbacks so you know it’s better!

The GitHub repo is the same as before.

Matt suggested I change the name to motoChecker, since my last name is Motacek. I like it. DONE!

I’ve spent all day on documentation for it and I’m bored with writing now. I’ve learned a few things, though…

Content revised… my hate for SharePoint faded after I asked someone smarter than me to teach me how to use the damn thing.

It’s ok now.

Setting up Secure SSH, X11 Forwarding and VNC on CentOS

I’m more of an Ubuntu fan personally, but one of the people I work with wants to use Red Hat.  So instead of asking our boss for $800, I decided to opt for CentOS (same binaries, so I’m hoping it will work).  These instructions will get you started with remote control for your CentOS server.

We’ll configure three types of “remote access” for various use cases.

  1. SSH
  2. X11 Forwarding
  3. VNC

I am not a sysadmin purist so I don’t care about sysadmins who say that servers don’t need GUIs.  This server is for a wide range of users and needs to support varying comfort levels with Linux based systems.

Step 1 – Configuring SSH

After I got CentOS 7 installed, I opened up the SSH daemon config. Security is a big concern for this system, as the previous Windows installation on the server had been hacked and used maliciously. Since our company doesn’t have any full-time sysadmins, I want to make sure that it’s as secure as possible.

sudo nano /etc/ssh/sshd_config

I first wanted to check that it didn’t allow SSH with a password. It was enabled by default, so I turned it off for security reasons. I also wanted to set up X11 for remote access, since I’d be administering this thing from Wisconsin while the server will be in St. Louis. Doing things like limiting users, changing the standard SSH port and using public/private key authentication all harden the system.

PasswordAuthentication no
X11Forwarding yes
X11DisplayOffset 10
AllowUsers myself myCoworker
Port 22XXXX

Since we’ve changed the port, make sure to add an entry for the port you’ve specified to the firewall rules, in the permanent section.

GNOME has a Firewall Configuration in Applications > Sundry > Firewall
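
If you’d rather do it from the command line, the equivalent permanent rule with firewalld looks something like this (a sketch; substitute the actual port you chose):

# Open the custom SSH port permanently, then reload the firewall
sudo firewall-cmd --permanent --add-port=22XXXX/tcp
sudo firewall-cmd --reload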

We’ll need to restart the sshd service and also update SELinux so it knows sshd is allowed to listen on the new port (that’s what the semanage command does).

sudo service sshd restart
sudo semanage port -a -t ssh_port_t -p tcp 22XXXX
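
You can confirm that SELinux picked up the new port with a quick check (semanage comes from policycoreutils-python, which you’ll need for the command above as well):

sudo semanage port -l | grep ssh_port_t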

Now that my sshd_config is ready, I’ll need to generate some security keys so that we can SSH to it.

For X11 and SSH on Windows I prefer to use MobaXterm.  On the client you wish to connect from, we need to generate some RSA keys.

Open up MobaXterm go to Tools > SSH Key Generator

Generate a new key pair for yourself and save the keys.  I always recommend copying the raw public key into a text file so you can paste the contents if necessary.  OpenSSH will complain if the format of the key isn’t what it likes.

Put the key on a jump drive and pop it into the server.  Then concatenate the public key onto your authorized_keys file.

cd /the_location/of_the/jump_drive/
mkdir ~/.ssh
chmod 700 ~/.ssh
cat public_key.txt >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
restorecon -Rv ~/.ssh

Make sure you delete the key file after you’ve concatenated it.

Now we test our SSH connection from the client.  Open up MobaXterm and create a new SSH connection.  Specify the IP of the server, the user name you’ve allowed in the SSH config, and the port you specified in the SSH config, and use the private key you generated in the Advanced SSH settings tab.  Leave X11 forwarding and compression enabled.  If you’re having trouble connecting, check whether your router or an additional firewall is blocking the port you specified.

Step 2 – X11 Forwarding – Running Applications Remotely

For those more familiar with Windows, X11 forwarding is similar to the concept of Remote Desktop, but a little different.  With X11 you can run not only the desktop environment (GNOME) but also individual applications if you choose.  So if you have an installer you want to run, you can simply execute the installer application without the burden of running a full desktop over the connection.

Once you’ve successfully connected via SSH, it’s time to configure X11.

First make sure your OS is up to date… this could take a while… If your OS isn’t updated you might run into conflicts trying to install the “X Window System”.

su root
yum update
yum groupinstall "X Window System" "Desktop" "Fonts"

Once your OS is up to date and you have the X Window System installed, you should be able to run things like “gedit” from MobaXterm and use the application remotely simply by typing in the name of the application.

X11 can be a bit slow if you plan to use the desktop a lot like a Windows admin would, so next up we’ll set up VNC.

Step 3 – VNC

X below represents your desired display number; it is an offset from 5900, which results in port 590X.

su root
yum install tigervnc-server xorg-x11-fonts-Type1
cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:X.service
nano /etc/systemd/system/vncserver@:X.service

Replace all instances of <USER> with the user name for the connection; there will be two.

Update the firewall to allow the connection in a similar way as earlier.
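
As with SSH, a firewalld sketch using the same placeholder (open the 590X port that matches the display you chose):

sudo firewall-cmd --permanent --add-port=590X/tcp
sudo firewall-cmd --reload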

Start the server as the user…

vncserver

Set a password.

Now reload the daemon as root, start the service, and set it to run on startup.

su root
systemctl daemon-reload
systemctl start vncserver@:X.service
systemctl enable vncserver@:X.service

Go to your client and download a VNC client if you don’t already have one.

Now before you go connecting to your server note:
THIS VNC CONNECTION IS NOT ENCRYPTED

So let’s set that up ;)

Open up MobaXterm and click “Tunneling”

  • Create a new tunnel.
  • Local port forwarding
  • Local Port  = 5900
  • SSH Server = Server IP, username, and SSH port set above
  • Remote Server = Server IP and Port assigned to VNC user. (the 590X one…)

Now start your tunnel.  Then open up VNC Viewer and use localhost:5900 as your destination.
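
If you’d rather skip the MobaXterm tunneling GUI, the same tunnel can be opened with plain ssh; a sketch using the placeholder values from above (the key path and server IP are yours to fill in):

# Forward local port 5900 to the VNC display (590X) on the server, over the custom SSH port
ssh -L 5900:localhost:590X -p 22XXXX -i /path/to/your_private_key myself@<server_ip>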

VNC will complain that this is an “unencrypted” connection, but it’s not; the traffic is wrapped inside the SSH tunnel ;)

You are now ready to rock and roll!

MotoValidation 2.0

V2.1 is out!

It’s been a while since I’ve posted something on my blog but it’s also been a while since I was working on something that I felt worth sharing.

I’m proud to present motoValidation v2, a big improvement on my original creation.  The goal of this script is to limit the amount of code that a developer must write in order to accomplish common validation use cases on the IBM BPM platform.  This script is currently targeted at Heritage coaches only… I’m not sure if I like client-side coaches yet… I’m sure they’ll grow on me, but they need some serious work before I’ll consider them usable.

With motoValidation we make some assumptions:

  • The most common type of validation is a required field
  • The target for your validation is the same as the binding
  • We should not make developers provide redundant information over and over (if I have to type tw.whatever.thing.thing.thing one more time I’m gonna snap…)
  • Validation should require as little code as possible

Get it from Git

Continue reading…