Cool Utility: Live Server

I just stumbled across one of those “it’s about time” utilities for front-end app development: Live Server. Long story short, you issue the command live-server from your application’s current directory and… well… that’s it. A browser pops open, your web app is loaded, and the lil’ utility watches for changes. Any changes that are made are instantly pushed to the browser.

Installation is ridiculously easy via NPM:

npm install -g live-server

Of course, there’s a slew of command-line switches and parameters to make even the geekiest geek happy (a combined example follows the list):

  • --port=NUMBER – select port to use, default: PORT env var or 8080
  • --host=ADDRESS – select host address to bind to, default: IP env var or 0.0.0.0 (“any address”)
  • --no-browser – suppress automatic web browser launching
  • --browser=BROWSER – specify browser to use instead of system default
  • --quiet | -q – suppress logging
  • --verbose | -V – more logging (logs all requests, shows all listening IPv4 interfaces, etc.)
  • --open=PATH – launch browser to PATH instead of server root
  • --watch=PATH – comma-separated string of paths to exclusively watch for changes (default: watch everything)
  • --ignore=PATH – comma-separated string of paths to ignore (anymatch-compatible definition)
  • --ignorePattern=RGXP – regular expression of files to ignore (e.g. .*\.jade) (DEPRECATED in favor of --ignore)
  • --middleware=PATH – path to .js file exporting a middleware function to add; can be a name without path or extension to reference bundled middlewares in the middleware folder
  • --entry-file=PATH – serve this file (server root relative) in place of missing files (useful for single page apps)
  • --mount=ROUTE:PATH – serve the paths contents under the defined route (multiple definitions possible)
  • --spa – translate requests from /abc to /#/abc (handy for Single Page Apps)
  • --wait=MILLISECONDS – (default 100ms) wait for all changes, before reloading
  • --htpasswd=PATH – Enables http-auth expecting htpasswd file located at PATH
  • --cors – Enables CORS for any origin (reflects request origin, requests with credentials are supported)
  • --https=PATH – PATH to an HTTPS configuration module
  • --proxy=ROUTE:URL – proxy all requests for ROUTE to URL
  • --help | -h – display terse usage hint and exit
  • --version | -v – display version and exit
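For example, to serve the current folder on port 9000 without popping open a browser, watching only src and ignoring node_modules, something like this should do it (the paths are placeholders for your own project):

live-server --port=9000 --no-browser --watch=src --ignore=node_modules --entry-file=index.html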

If you’re building web-based apps, or even if you’re just starting out in web development, this little gem will save you a tonne of time up front.

Enjoy! =)

Receive SMS Messages Via Email from Flowroute Phone Numbers

In today’s mobile world, people just assume every phone number is a cell phone… even if it’s clearly listed as “office” on your business card. And, in most cases, if the phone number belongs to a corporate phone system, or PBX, any text messages sent to that number are lost forever in the great bitbucket in the sky. Until now, that is! If you happen to be using Flowroute as your backend trunking provider, you can now receive any SMS text message via email.

Here’s how to do it…

1. Set Up My Proxy App Using Docker
I’ve whipped up a simple Node app to make life easy for you. In short, it receives all SMS text messages from Flowroute and emails them to you at either a single email address or a custom “wildcard” domain. Assuming you have Docker installed on a public server, install it via the following command:

docker run --name flowroute-proxy -p 3000:3000 \
    -e TO_EMAIL=bruce@batmail.com \
    -e SMTP_PASS=robin4ever \
    -e SMTP_USER=bruce@batcave.com \
    -e SMTP_HOST=smtp.batcave.com \
    fredlackey/flowroute-proxy

The settings are all handled via environment variables. A complete list is on Docker Hub:

https://hub.docker.com/r/fredlackey/flowroute-proxy/

Of course, it will be up to you to ensure your DNS and server settings are both set up with an FQDN pointing to that Docker container. You’ll also need an SMTP account for outgoing messages.
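Once DNS is squared away and the container is running, a quick sanity check is to tail its logs (the container name comes from the run command above; the proxy’s actual log output will obviously vary):

docker logs -f flowroute-proxy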

2. Activate the API with Flowroute
Once you have a Flowroute account, head over to their Developer Portal and click the Get API Access button. This will bounce you over to the Flowroute portal, where you enter the URL of the Docker container you set up above.

Generating Mongo / Mongoose Models

Having come from the .NET world, I have always loved the ability to whip up a quick model diagram using the SQL Server Diagram Tool. It’s painless to model your data objects, and to capture a good chunk of your business domain, for LOB applications. And, while in that world, I relied upon the CodeSmith Generator to spit out all sorts of documents from my database.

Alas, having moved to Mac, Linux, and MEAN Stack, all this is in the past.

… until now.

DbSchema is really what started me thinking down this line. It’s written in Java and, therefore, is cross-platform. I have used it successfully on all three platforms, to replace the SQL Server Diagram Tool, and it works flawlessly.

Here’s the cool part: unlike the M$ tool, DbSchema stores its data in good ole’ XML. So, of course, I’ve created a few tools to add some awesome sauce to it…

DbSchema Parser (dbschema-parser)

Long story short, dbschema-parser allows you to walk the data structures using NodeJS. You may navigate from Database, to Schema, to Table, to Column, and back up again, or in any direction.
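Just to give a feel for the idea, here’s a rough sketch of that kind of traversal in Node. The method and property names below (parseFile, schemas, tables, columns) are purely illustrative and may well differ from the package’s real surface, so check the repo’s README for the actual API:

// Hypothetical walk through a DbSchema project file (names are illustrative only).
var parser = require('dbschema-parser');

var database = parser.parseFile('./my-project.dbs');   // DbSchema saves its data as XML

database.schemas.forEach(function (schema) {
  schema.tables.forEach(function (table) {
    table.columns.forEach(function (column) {
      console.log(schema.name + '.' + table.name + '.' + column.name);
    });
  });
});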

DbSchema Parser CLI (dbschema-parser-cli)

Since I want to use the Parser to generate files, I’m gonna need a CLI. That’s what this project brings to thy table.

DbSchema Mongoose (dbschema-mongoose)

Under the hood this one is ugly as sin. However, it’s the thang that gives the two projects, above, some coolness. It basically looks at your DbSchema’s data file and spits out the equivalent Mongoose model files.
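To give an idea of what comes out the other side, a generated file is just a plain Mongoose model. The schema below is only an illustration; the actual fields come from whatever tables and columns live in your .dbs file:

// models/user.js (illustrative output only)
var mongoose = require('mongoose');

var UserSchema = new mongoose.Schema({
  firstName : { type: String, required: true },
  lastName  : { type: String, required: true },
  email     : { type: String, required: true, unique: true },
  createdAt : { type: Date, default: Date.now }
});

module.exports = mongoose.model('User', UserSchema);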

Side note…

I’m also using Keyboard Maestro, on Mac, and AutoHotkey, on Windows, to help me bang out complex data diagrams with only a few keystrokes. So, that helps a great deal.

Why create this?

In short, there’s nothing stable that provides this. DbSchema is the only tool that comes close to the stability and fluidity of the SQL Server Diagram Tool. And, as for generating models, there’s nothing out there that feeds from an elegant UI. Plus, although there’s a tonne of shtuff with Yarn and Yeoman, nothing feels fully baked.

Anywhoo, I hope this helps someone. It’s ugly. I know. If anyone shows genuine interest in it, I’ll see about extending it.

Filter Out Docker Noise

Sometimes the smallest lil’ gem makes you feel great. For me, Docker’s --format option is one such gem. As much as I love Docker, its commands’ output is far too verbose and noisy for my taste. In fact, the net is filled with complaints about this. However, the --format option makes them perfect… or closer to perfect. Even the noisiest command can be transformed…

… from this …

[Screenshot: Before Docker Aliases]

… to this …

[Screenshot: After Docker Aliases]

… in just a few extra keystrokes!

It outputs just the right amount of info to be particularly great for “4-up” or “2-up” arrangements…

[Screenshot: Docker Aliases with 4-Up Display]

Docker’s built-in help for the ps command completely sucks and offers no guidance on this option. In short, --format takes a Go template that tells Docker which columns to display (prefix it with table to keep the header row). For example, with ps you have the following columns to choose from:

  • ID
  • Image
  • Command
  • RunningFor
  • Status
  • Ports
  • Names

So, for the example above, the syntax would be:

docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"  
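Drop the table prefix and you get plain, header-less output, which is handy when piping the result into scripts:

docker ps --format "{{.Names}}: {{.Status}}"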

Or, better yet, if you’re on Linux or macOS / OSX, take a few seconds and create aliases for dps and dpsa in your ~/.bash_aliases file by adding these two lines:

alias dps='docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  
alias dpsa='docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  

Enjoy… finally! =)

(I’ve added these two aliases to my dotfiles project, if you’re following that project.)

Pattern for Developing a Complex Solution with NodeJS within Docker

So many of the examples out there, for both Node and Docker, show simple little applications. One might demonstrate how to create a container. Another might show how to crank up your first NodeJS app. However, I have yet to find one that demonstrates how to use this bag o’ widgets in a real-world application.

Hopefully, my mean-docker example, on GitHub, will help show how to bring it all together. Something within me wants to create a small how-to video series surrounding it; however, there are several good folks out there already tackling the meat of this (check out Derick Bailey’s WatchMeCode for back-end goodness). So, who knows? In the meantime, here’s what this project will give you:

(BTW: Here’s the direct link on GitHub, in case you missed it: mean-docker)

Project Breakdown

mycompany-api0x

Three back-end microservices (for some reason the Node world is referring to them as “APIs”… which annoys the heck out of me) stubbed-out in NodeJS and Express.
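If you’re curious what “stubbed-out” means here, each service is little more than a bare Express app; the snippet below is an illustrative sketch (the route, port, and service name are assumptions, not copied from the repo):

// Illustrative stub only; the repo's actual services differ in the details.
var express = require('express');
var app = express();

app.get('/ping', function (req, res) {
  res.json({ ok: true, service: 'mycompany-api01' });
});

app.listen(3000, function () {
  console.log('mycompany-api01 listening on port 3000');
});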

mycompany-app

The front-end Angular app which calls into the example APIs.

mycompany-www

Your example company’s web site. Again, this is just stubbed out.

solution-a & solution-b

Two higher-level solutions containing all of the “good stuff” for Docker & NGINX.

Getting Started

There’s not much to it. Here’s what to do:

  1. Install Git & Docker on your development machine.
  2. Clone the Git repo to your machine: git clone https://github.com/FredLackey/mean-docker.git
  3. Navigate into either solution-a or solution-b (currently identical): cd ~/Source/Github/FredLackey/mean-docker/solution-a
  4. Spin up Docker and let ’er do its magic: docker-compose up
  5. NGINX is listening to a few URLs specifically, so you may want to edit your /etc/hosts or %SYSTEM32%\drivers\etc\hosts file and add the following entries (a copy is in the provided %SOLUTION%/.docker/etc/hosts file):
127.0.0.1       mycompany.com www.mycompany.com
127.0.0.1       app.mycompany.com
127.0.0.1       api01.mycompany.com
127.0.0.1       api02.mycompany.com
127.0.0.1       api03.mycompany.com

Working With It

Automated “watchers” are already set up to handle all of the compiling, optimising, starting, and restarting for you. Simply do your work in the typical %PROJECT%/src/server and/or %PROJECT%/src/client folders and everything else will be taken care of for you.
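If you’re curious what those watchers boil down to, it’s the usual gulp.watch() / nodemon combo. The task below is only a sketch (globs and task names are illustrative, not the repo’s actual gulpfile):

// gulpfile.js (illustrative sketch only)
var gulp = require('gulp');
var jshint = require('gulp-jshint');

// Lint the server-side sources.
gulp.task('lint', function () {
  return gulp.src('src/server/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

// Re-run the lint task whenever a server file changes; nodemon handles restarts.
gulp.task('watch', ['lint'], function () {
  gulp.watch('src/server/**/*.js', ['lint']);
});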

On a completely clean dev machine, it should take approximately three minutes for an initial build:

[Screenshot: Example Running Blocks]

If you updated your /etc/hosts or %SYSTEM32%\drivers\etc\hosts file with the names of the servers, you may check the status of each project using any web browser:

[Screenshot: Example Project Site]

… or …

[Screenshot: Example API Test]

Limitations

The goal of this project is to get you started and help demonstrate some of the concepts… getting NGINX to proxy your requests, linking Docker containers, automagically detecting changes, etc. That being said, it works for this purpose, but it’s not an actual working solution. If you have a need for such a thing, let me know and maybe I can spend some additional time on it.

Enjoy! =)

Develop on Docker Without Slow Dependencies

It’s common knowledge that Docker’s mounted-volume support on macOS is pathetically slow (click here for more info). For us Node developers, this means starting up your app is incredibly slow because of the requisite npm install command. Well, here’s a quick lil’ trick to get around that slowness.

First, a quick look at the project:

[Screenshot: uber-cool-microservice example]

Long story short, I’m mapping everything in my project’s root (./) to one of the container’s volumes. This allows me to use widgets like gulp.watch() and nodemon to automagically restart the project, or inject any new code, whenever I modify a file.

This is 50% of the actual problem!

Because the root of the project is being mapped to the working directory within the container, calling npm install causes node_modules to be created in the root… which is actually on the host file system. This is where Docker’s incredibly slow mounted volumes kick the project in the nads. As is, you could spend as long as five minutes waiting for your project to come up once you issue docker-compose up.

“Your Docker setup must be wrong!”

As you’ll see, Docker is quite vanilla for this lil’ project.

First, ye ‘ole Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www

EXPOSE 3000  

And, of course, the beloved docker-compose.yml:

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
    working_dir: /var/www
    ports:
      - "3000"

As you can see, as-is this test project is lean, mean, and works as expected… except that the npm install is sloooooooooow.

At this point, calling npm install causes all of the project’s dependencies to be installed to the volume which, as we all know, is the host filesystem. This is where the pain comes in.

“So what’s the ‘trick’ you mentioned?”

If only we could benefit from having the root of the project mapped to the volume but somehow exclude node_modules and allow it to be written to Docker’s union file system inside of the container.

According to Docker’s docs, excluding a folder from the volume mount is not possible. Which makes sense, I guess.

However, it is actually possible!

The trick? Simple! An additional volume mount!

By adding one line to the Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www  
VOLUME /var/www/node_modules

EXPOSE 3000  

… and one line to the docker-compose.yml file …

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
      - /var/www/node_modules
    working_dir: /var/www
    ports:
      - "3000"

That’s it!

In case you missed it, we added:

VOLUME /var/www/node_modules  

and

    - /var/www/node_modules

Say what!?!?

In short, the additional volume causes Docker to create the internal hooks within the container (the folder, etc.) and wait for it to be mounted. Since we never mount anything there, we basically trick Docker into just writing to the folder within the container.

The end result is we are able to mount the root of our project, take advantage of tools like gulp.watch() and nodemon, while writing the contents of node_modules to the much faster union file system.

Quick Note re: node_modules:
For some reason, while using this technique, Docker will still create the node_modules folder within the root of your project, on the host file system. It simply will not write to it.
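An easy way to see this for yourself is to compare the folder on the host with the one inside the container (the container name comes from the compose file above):

ls node_modules                                                # on the host: present but empty
docker exec uber-cool-microservice ls /var/www/node_modules    # in the container: fully populated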