Categories
Programming

Easily Dockerize Node Apps

Quick script to Dockerize and tag your Node app with the current version number without having to dig through files for values. For me, this is important as I use Docker with EC2 and ECS on AWS. Using the project name and version number from the package.json file allows me to automagically tag the Docker image… which, in turn, allows me to easily deploy specific versions of the app or service for various release methods (blue/green, etc.).

First, the script itself …

#! /bin/bash

main() {

  local SCRIPT_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/$(basename "${BASH_SOURCE[0]}")"
  local BASE_DIR=$(dirname "$(dirname "$(dirname "$SCRIPT_PATH")")")

  local PKG_NAME=$(node -p "require('$BASE_DIR/package.json').name")
  local PKG_VER=$(node -p "require('$BASE_DIR/package.json').version")
  local CMD="cd $BASE_DIR && docker build -t $PKG_NAME:$PKG_VER ."

  eval "$CMD"
}

main
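
The three nested dirname calls assume the script lives two levels below the project root (i.e. at scripts/dev/build.sh). A quick sanity check of that path math, using a made-up path:

```shell
# Each dirname strips one path segment, so three calls climb from the
# script file back up to the project root. The path here is a stand-in.
SCRIPT_PATH="/home/fred/my-cool-app/scripts/dev/build.sh"
BASE_DIR=$(dirname "$(dirname "$(dirname "$SCRIPT_PATH")")")
echo "$BASE_DIR"   # /home/fred/my-cool-app
```

If your build script lives at a different depth, add or remove a dirname call per directory level.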

The last three lines are the good stuff …

  local PKG_NAME=$(node -p "require('$BASE_DIR/package.json').name")
  local PKG_VER=$(node -p "require('$BASE_DIR/package.json').version")
  local CMD="cd $BASE_DIR && docker build -t $PKG_NAME:$PKG_VER ."

The first two lines load the Node package and version into variables PKG_NAME and PKG_VER. That last line creates a proper command for Docker …

docker build -t my-cool-app:1.2.3 .
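
As a side note, if node happens to be unavailable wherever the script runs, the same two fields can be scraped with sed. This is my own variation, not part of the script above, and it only works for flat, one-field-per-line package.json files; stick with node -p (or jq) for anything fancier:

```shell
# Create a throwaway package.json matching the example, then scrape
# the "name" and "version" fields with sed instead of node.
cat > package.json <<'EOF'
{
  "name": "my-cool-app",
  "version": "1.2.3"
}
EOF

PKG_NAME=$(sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' package.json)
PKG_VER=$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' package.json)

echo "docker build -t $PKG_NAME:$PKG_VER ."   # docker build -t my-cool-app:1.2.3 .
```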

And, finally, I call this from my package.json file …

{
  "name": "my-cool-app",
  "version": "1.2.3",
  "description": "My Cool App",
  "main": "src/server.js",
  "scripts": {
    "build": "./scripts/dev/build.sh"
  },
  "author": "Fred Lackey <fred.lackey@gmail.com>",
  "dependencies": {
    "cleaner-node": "^0.10.0",
    "express": "^4.17.1"
  }
}

The end result is I am able to build my app into a Docker image by simply running …

npm run build

… with the result of having a Docker image built using the name and version of my app …

[Image: result example]

… aaaand, a quick docker images shows it is available with the Node app name and version as the Docker tag.

[Image: result images]

I hope this helps.

New Tool – File Line Replacer

There’s a new command-line tool for searching files, scanning them for blocks of multi-line content, and replacing those blocks with different lines. Some benefits of this are…

  • works on Windows, Mac, and Linux
  • no nasty RegEx or escape characters to specify multi-line values
  • backs up original files before making changes (if desired)
  • whitespace is either ignored or preserved … your choice
  • supports text files of virtually any size

Yes, I realize there are other utilities out there that will replace text… sed, awk, etc. PowerShell will even do it if you know the switches. However, in my opinion, all of them are heavily opinionated and take the geek-first approach. I wanted something I could give to a junior or mid-level person and know they can get the job done without spending their time researching how to structure some overly complex command.

One of the best tool sets for prototyping a relational data service is…

DbSchema : for brainstorming and designing the entities;

ExpressJS : probably the best web framework for hosting the web service; and,

Sequelize ORM : to generate the models and handle the data calls.

My original need came while using Sequelize to generate model files for a new data service. I’m not sure what caused it (maybe switching between MySQL and PostgreSQL), but the models did not include logic for auto-incrementing primary key fields. So, models ended up having this…

id: {
  type: DataTypes.INTEGER.UNSIGNED,
  allowNull: false,
  primaryKey: true
},

… when they should have had this …

id: {
  type: DataTypes.INTEGER.UNSIGNED, 
  autoIncrement: true, 
  primaryKey: true 
},

So, why not contribute to the Sequelize project and submit a fix? The short answer is that the need to search & replace multiple lines is not specific to Sequelize. As a developer, all of your work is done with text files… the source code. And, over the years, I’ve had reason to perform this type of task several times. Creating file-line-replacer allowed me to get past the hiccup and be ready for the time when I need it again, outside of Sequelize.

Installing the utility is a snap. Once you have Node on your machine, simply install the command with…

npm install -g file-line-replacer

This installs the project and allows it to be used just like any other command-line utility. Then, correcting the model files was as simple as issuing one lil’ command…

file-line-replacer \
  --search-dir "/Users/flackey/my-project/src/data/models" \
  --backup-dir "/Users/flackey/my-project/_backup" \
  --old-lines "allowNull: false,|primaryKey: true" \
  --new-lines "autoIncrement: true,|primaryKey: true" \
  --overwrite

The switches used here are the key. Here’s what they do…

--search-dir
Starting directory to search for files.

--backup-dir
Each file is stored in this location before it is modified.

--old-lines
Pipe-delimited list of text lines to search for within each file.

--new-lines
Replacement lines for each occurrence of the --old-lines value.

--overwrite
Ensures we know the files will be overwritten (flags are set to true by simply adding the flag name to the command).
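
One note on those pipe-delimited values: each | stands in for a line break, so the --old-lines value above describes two consecutive lines of text, not a regular expression. Purely as a visualization (plain shell, not something the tool requires):

```shell
# Expand the pipe-delimited value into the lines it represents.
printf '%s\n' "allowNull: false,|primaryKey: true" | tr '|' '\n'
# allowNull: false,
# primaryKey: true
```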

There are tons of other flags and features listed on the project page here. Some of them include…

--source-file
Not everyone wants to search for files. You are able to specify the exact file to tweak. This is great if you want to use file-line-replacer in a BASH script.

--destination-file and --destination-dir
Maybe you don’t want to overwrite your files. Specifying the “destination” allows you to tweak your files and send them to a specific folder. This is great for working with source templates where overwriting or modifying the template is not desired.

--old-lines-file and --new-lines-file
Allows you to store the “old” and “new” lines inside of text files. You would provide a path to the file instead of supplying the actual values. This is handy for complex lines and making your scripts more “human-readable.”

--ignore-patterns and --ignore-patterns-file
The default search pattern is **/*.* (aka “all files, recursively”). Specifying “ignore” patterns allows more granular control over which files and directories to skip.

In the grand scheme of things, I could have accomplished all of this with a BASH script. However, then I would have had more of a “uni-tasker” and not really gained anything in my developer toolbox.

Overall, I think this is a great lil’ utility. It performs a task that is quite common with developers and IT people while preventing folks from having to remember the complex syntax for outdated commands. It also allows me to personally overcome a speed bump that has been occasionally bothering me for years.

In the end, I hope whoever finds the utility is helped in some way. After all, that is why I love development so much.

Receive SMS Messages Via Email from Flowroute Phone Numbers

In today’s mobile world, people just assume every phone number is a cell phone… even if it’s clearly listed as “office” on your business card. And, in most cases, if the phone number belongs to a corporate phone system, or PBX, any text messages sent to that number are lost forever in the great bitbucket in the sky. Until now, that is! If you happen to be using Flowroute as your back-end trunking provider, you can now receive any SMS text message via email.

Here’s how to do it…

  1. Set Up My Proxy App Using Docker
I’ve whipped up a simple Node app to make life easy for you. In short, it receives all SMS text messages from Flowroute and emails them to you at either a single email address or a custom “wildcard” domain. Assuming you have Docker installed on a public server, install it via the following command:
docker run --name flowroute-proxy -p 3000:3000 \
    -e TO_EMAIL=bruce@batmail.com \
    -e SMTP_PASS=robin4ever \
    -e SMTP_USER=bruce@batcave.com \
    -e SMTP_HOST=smtp.batcave.com \
    fredlackey/flowroute-proxy  

The settings are all handled via environment variables. A complete list is on Docker Hub:

https://hub.docker.com/r/fredlackey/flowroute-proxy/
Of course, it will be up to you to ensure your DNS and server settings are set up with an FQDN pointing to that Docker container. You’ll also need an SMTP account for outgoing messages.

  2. Activate the API with Flowroute
Once you have a Flowroute account, head over to their Developer Portal and click on the Get API Access button. This will bounce you over to the Flowroute portal where you will enter the URL to the Docker container you set up above:

Filter Out Docker Noise

Sometimes the smallest lil’ gem makes you feel great. For me, Docker’s --format option is one such gem. As much as I love Docker, their commands’ output is, for me, far too verbose and noisy. In fact, the net is filled with complaints about this. However, the --format option makes them perfect… or at least closer to perfect. Even the noisiest command can be transformed…

… from this …

[Image: before Docker aliases]

… to this …

[Image: after Docker aliases]

… in just a few extra keystrokes!

It outputs just the right amount of info to be particularly great for “4-up” or “2-up” arrangements…

Docker’s help for the ps command completely sucks and offers no info on this option. In short, you basically use it to tell Docker which columns to display. For example, with ps you have the following columns to choose from:

  • ID
  • Image
  • Command
  • RunningFor
  • Status
  • Ports
  • Names

So, for the example above, the syntax would be:

docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"  

Or, better yet, if you’re on Linux or macOS / OSX, take a few seconds and create aliases for dps and dpsa in your ~/.bash_aliases file by adding these two lines:

alias dps='docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  
alias dpsa='docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  
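
If you’d rather script it than paste by hand, a guarded append keeps the file from collecting duplicates (this assumes your shell actually sources ~/.bash_aliases, as Ubuntu’s stock ~/.bashrc does; adjust the path for your setup):

```shell
# Append the dps / dpsa aliases to ~/.bash_aliases unless dps is already defined there.
ALIASES="$HOME/.bash_aliases"
if ! grep -qs "alias dps=" "$ALIASES"; then
  cat >> "$ALIASES" <<'EOF'
alias dps='docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'
alias dpsa='docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'
EOF
fi
```

Open a new shell (or source ~/.bash_aliases) and the shortcuts are live.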

Enjoy… finally! =)

(I’ve added these two aliases to my dotfiles project, if you’re following that project.)

Pattern for Developing a Complex Solution with NodeJS within Docker

So many of the examples out there, for both Node and Docker, show simple little applications. One might demonstrate how to create a container. Another might show how to crank up your first NodeJS app. However, I have yet to find one that demonstrates how to use this bag o’ widgets in a real-world application.

Hopefully, my mean-docker example on GitHub will help show how to bring it all together. Something within me wants to create a small how-to video series around it; however, there are several good folks out there already tackling the meat of this (check out Derick Bailey’s WatchMeCode for back-end goodness). So, who knows? In the meantime, here’s what this project will give you:

(BTW: Here’s the direct link on GitHub, in case you missed it: mean-docker)

Project Breakdown

mycompany-api0x
Three back-end microservices (for some reason the Node world is referring to them as “APIs”… which annoys the heck out of me) stubbed-out in NodeJS and Express.

mycompany-app
The front-end Angular app which calls into the example APIs.

mycompany-www
Your example company’s web site. Again, this is just stubbed out.

solution-a & solution-b
Two higher-level solutions containing all of the “good stuff” for Docker & NGINX.

Getting Started

There’s not much to it. Here’s what to do:

  1. Install Git & Docker on your development machine.

  2. Clone the Git repo to your machine: git clone https://github.com/FredLackey/mean-docker.git

  3. Navigate into either solution-a or solution-b (currently identical): cd ~/Source/Github/FredLackey/mean-docker/solution-a

  4. Spin up Docker and let ’er do its magic: docker-compose up

  5. NGINX is listening for a few specific hostnames, so you may want to edit your /etc/hosts or %SYSTEM32%\drivers\etc\hosts file and add the following entries (a copy is in the provided %SOLUTION%/.docker/etc/hosts file):

127.0.0.1       mycompany.com www.mycompany.com
127.0.0.1       app.mycompany.com
127.0.0.1       api01.mycompany.com
127.0.0.1       api02.mycompany.com
127.0.0.1       api03.mycompany.com
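
If you’d rather not edit the hosts file by hand, a small idempotent loop does the same job. HOSTS_FILE below points at a local stand-in file so you can dry-run it; point it at /etc/hosts (and run with sudo) to apply it for real:

```shell
# Append each demo hostname entry unless an identical line already exists.
HOSTS_FILE="hosts.local"   # stand-in; use /etc/hosts (with sudo) for real
for entry in \
  "127.0.0.1       mycompany.com www.mycompany.com" \
  "127.0.0.1       app.mycompany.com" \
  "127.0.0.1       api01.mycompany.com" \
  "127.0.0.1       api02.mycompany.com" \
  "127.0.0.1       api03.mycompany.com"
do
  grep -qsF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
```

Running it a second time adds nothing, so it is safe to keep in a setup script.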

Working With It

Automated “watchers” are already setup to handle all of the compiling, optimizing, starting, and restarting for you. Simply do your work in the typical %PROJECT%/src/server and/or %PROJECT%/src/client folders and everything else will be taken care of for you.

On a completely clean dev machine, it should take approximately three minutes for an initial build:

[Image: example running blocks]

If you updated your /etc/hosts or %SYSTEM32%\drivers\etc\hosts file with the names of the servers, you may check the status of each project using any web browser:

[Image: example project site]

… or …

[Image: example API test]

Limitations

The goal of this project is to get you started and help demonstrate some of the concepts… getting NGINX to proxy your requests, linking Docker containers, automagically detecting changes, etc. That being said, it works for this purpose, but it’s not an actual working solution. If you have a need for such a thing, let me know and maybe I can spend some additional time on it.

Enjoy! =)

Develop on Docker Without Slow Dependencies

It’s common knowledge that Docker’s mounted volume support on macOS is pathetically slow. For us Node developers, this means starting up your app is incredibly slow because of the requisite npm install command. Well, here’s a quick lil’ trick to get around that slowness.

First, a quick look at the project:

[Image: uber-cool-microservice example]

Long story short, I’m mapping everything in my project’s root (./) to one of the container’s volumes. This allows me to use widgets like gulp.watch() and nodemon to automagically restart the project, or inject any new code, whenever I modify a file.

This is 50% of the actual problem!

Because the root of the project is being mapped to the working directory within the container, calling npm install causes node_modules to be created in the root… which is actually on the host file system. This is where Docker’s incredibly slow mounted volumes kick the project in the nads. As is, you could spend as long as five minutes waiting for your project to come up once you issue docker-compose up.

“Your Docker setup must be wrong!”

As you’ll see, Docker is quite vanilla for this lil’ project.

First, ye ‘ole Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www

EXPOSE 3000  

And, of course, the beloved docker-compose.yml:

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
    working_dir: /var/www
    ports:
      - "3000"

As you can see, as-is this test project is lean, mean, and works as expected… except that the npm install is sloooooooooow.

At this point, calling npm install causes all of the project’s dependencies to be installed to the volume which, as we all know, is the host filesystem. This is where the pain comes in.

“So what’s the ‘trick’ you mentioned?”

If only we could benefit from having the root of the project mapped to the volume but somehow exclude node_modules and allow it to be written to Docker’s union file system inside of the container.

According to Docker’s docs, excluding a folder from the volume mount is not possible. Which makes sense, I guess.

However, it is actually possible!

The trick? Simple! An additional volume mount!

By adding one line to the Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www  
VOLUME /var/www/node_modules

EXPOSE 3000  

… and one line to the docker-compose.yml file …

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
      - /var/www/node_modules
    working_dir: /var/www
    ports:
      - "3000"

That’s it!

In case you missed it, we added:

VOLUME /var/www/node_modules  

and

    - /var/www/node_modules

Say what!?!?

In short, the additional volume causes Docker to create the internal hooks within the container (folder, etc.) and wait for it to be mounted. Since we are never mounting the folder, we basically trick Docker into just writing to the folder within the container.

The end result is we are able to mount the root of our project, take advantage of tools like gulp.watch() and nodemon, while writing the contents of node_modules to the much faster union file system.

Quick Note re: node_modules:
For some reason, while using this technique, Docker will still create the node_modules folder within the root of your project, on the host file system. It simply will not write to it.