
Generating Mongo / Mongoose Models

Having come from the .NET world, I have always loved the ability to whip up a quick model diagram using the SQL Server Diagram Tool. It’s painless to model your data objects and capture a good chunk of your business domain for LOB applications. And, while in that world, I relied upon the CodeSmith Generator to spit out all sorts of documents from my database.

Alas, having moved to Mac, Linux, and MEAN Stack, all this is in the past.

… until now.

DbSchema is really what started me thinking down this line. It’s written in Java and, therefore, is cross-platform. I have used it successfully on all three platforms, to replace the SQL Server Diagram Tool, and it works flawlessly.

Here’s the cool part: unlike the M$ tool, DbSchema stores its data in good ole’ XML. So, of course, I’ve created a few tools to add some awesome sauce to it…

dbschema-parser
Long story short, dbschema-parser allows you to walk the data structures using NodeJS. You may navigate from Database, to Schema, to Table, to Column, and back up again, or in any direction.
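
To give you a feel for the idea, here’s a rough sketch of that kind of traversal. Fair warning: the function and property names below are my own placeholders to illustrate the concept, not necessarily the actual dbschema-parser API, so check the project’s README for the real calls.

// Hypothetical sketch only: parseFile() and the property names here are
// illustrative assumptions, not the documented dbschema-parser API.
const parser = require('dbschema-parser');

const database = parser.parseFile('./my-project.dbs'); // your DbSchema project file

database.schemas.forEach(function (schema) {
  schema.tables.forEach(function (table) {
    console.log(table.name);
    table.columns.forEach(function (column) {
      console.log('  ' + column.name + ' : ' + column.type);
    });
    // ... and back up again: column.table, table.schema, schema.database
  });
});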

dbschema-parser-cli
Since I want to use the parser to generate files, I’m gonna need a CLI. That’s what this project brings to the table.

dbschema-mongoose
Under the hood, this one is ugly as sin. However, it’s the thang that gives the two projects above some coolness. It basically looks at your DbSchema data file and spits out the equivalent Mongoose model files.
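
To give you an idea of the output, a generated file ends up looking like a typical Mongoose model. The "Customer" collection and its fields below are made up for the sake of the example; your models will obviously come from whatever’s in your own diagram.

// Roughly the shape of a generated Mongoose model file.
// (The Customer collection and its fields are invented for this example.)
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const CustomerSchema = new Schema({
  firstName: { type: String, required: true },
  lastName:  { type: String, required: true },
  email:     { type: String },
  createdAt: { type: Date, default: Date.now }
});

module.exports = mongoose.model('Customer', CustomerSchema);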

Side note…

I’m also using Keyboard Maestro on Mac, and AutoHotkey on Windows, to help me bang out complex data diagrams with only a few keystrokes. So, that helps a great deal.

Why Create This?

In short, there’s nothing stable that provides this. DbSchema is the only tool that comes close to the stability and fluidity of the SQL Server Diagram Tool. And, as for generating models, there’s nothing out there that feeds from an elegant UI. Plus, although there’s a tonne of shtuff with Yarn and Yeoman, nothing feels fully baked.

Anywhoo, I hope this helps someone. It’s ugly. I know. If anyone shows genuine interest in it, I’ll see about extending it.


Pattern for Developing Complex Solution with NodeJS within Docker

So many of the examples out there, for both Node and Docker, show simple little applications. One might demonstrate how to create a container. Another might show how to crank up your first NodeJS app. However, I have yet to find one that demonstrates how to use this bag o’ widgets in a real-world application.

Hopefully, my mean-docker example on GitHub will help show how to bring it all together. Something within me wants to create a small how-to video series surrounding it; however, there are several good folks out there already tackling the meat of this (check out Derick Bailey’s WatchMeCode for back-end goodness). So, who knows? In the meantime, here’s what this project will give you:

(BTW: Here’s the direct link on GitHub, in case you missed it: mean-docker)

Project Breakdown

mycompany-api0x
Three back-end microservices (for some reason the Node world is referring to them as “APIs”… which annoys the heck out of me) stubbed out in NodeJS and Express (there’s a quick sketch of one just after this breakdown).

mycompany-app
The front-end Angular app which calls into the example APIs.

mycompany-www
Your example company’s web site. Again, this is just stubbed out.

solution-a & solution-b
Two higher-level solutions containing all of the “good stuff” for Docker & NGINX.
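
If you’re wondering what “stubbed-out” actually means here, each API project boils down to a bare-bones Express app. The route and payload below are illustrative rather than copied from the repo, but this is the general idea:

// Roughly what each mycompany-api0x stub amounts to (illustrative, not lifted from the repo).
const express = require('express');
const app = express();

// A simple hello/health route so NGINX and the Angular app have something to hit.
app.get('/', function (req, res) {
  res.json({ name: 'mycompany-api01', status: 'ok' });
});

// The port here is just an example; the real value comes from the Docker/NGINX setup.
const port = process.env.PORT || 3000;
app.listen(port, function () {
  console.log('API listening on port ' + port);
});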

Getting Started

There’s not much to it. Here’s what to do:

  1. Install Git & Docker on your development machine.

  2. Clone the Git repo to your machine: git clone https://github.com/FredLackey/mean-docker.git

  3. Navigate into either solution-a or solution-b (currently identical): cd ~/Source/Github/FredLackey/mean-docker/solution-a

  4. Spin up Docker and let ’er do its magic: docker-compose up

  5. NGINX is listening for a few specific host names, so you may want to edit your /etc/hosts (Mac/Linux) or %SystemRoot%\System32\drivers\etc\hosts (Windows) file and add the following entries (a copy is in the provided %SOLUTION%/.docker/etc/hosts file):

127.0.0.1       mycompany.com www.mycompany.com
127.0.0.1       app.mycompany.com
127.0.0.1       api01.mycompany.com
127.0.0.1       api02.mycompany.com
127.0.0.1       api03.mycompany.com

Working With It

Automated “watchers” are already set up to handle all of the compiling, optimizing, starting, and restarting for you. Simply do your work in the typical %PROJECT%/src/server and/or %PROJECT%/src/client folders and everything else will be taken care of for you.
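
If you’re curious what those watchers boil down to, it’s the familiar gulp.watch() plus nodemon arrangement. The gulpfile below is only an illustration of the pattern (the task names, globs, and plugins are my own simplification, not a copy of what’s in the repo):

// Simplified illustration of the watcher pattern (gulp 3 style); nodemon handles the restarts.
const gulp = require('gulp');
const jshint = require('gulp-jshint');

// Lint the server code.
gulp.task('lint', function () {
  return gulp.src('src/server/**/*.js')
    .pipe(jshint())
    .pipe(jshint.reporter('default'));
});

// Re-run the lint task on every change under src/server.
gulp.task('watch', function () {
  gulp.watch('src/server/**/*.js', ['lint']);
});

gulp.task('default', ['lint', 'watch']);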

On a completely clean dev machine, it should take approximately three minutes for an initial build:

[Screenshot: example running blocks]

If you updated your /etc/hosts or %SystemRoot%\System32\drivers\etc\hosts file with the names of the servers, you may check the status of each project using any web browser:

[Screenshot: example project site]

… or …

[Screenshot: example API test]

Limitations

The goal of this project is to get you started and help demonstrate some of the concepts… getting NGINX to proxy your requests, linking Docker containers, automagically detecting changes, etc. That being said, it works for this purpose, but it’s not an actual working solution. If you have a need for such a thing, let me know and maybe I can spend some additional time on it.

Enjoy! =)


Develop on Docker Without Slow Dependencies

It’s common knowledge that Docker’s mounted volume support on macOS is pathetically slow (click here for more info). For us Node developers, this means starting up an app is incredibly slow because of the requisite npm install command. Well, here’s a quick lil’ trick to get around that slowness.

First, a quick look at the project:

[Screenshot: the uber-cool-microservice example project]

Long story short, I’m mapping everything in my project’s root (./) to one of the container’s volumes. This allows me to use widgets like gulp.watch() and nodemon to automagically restart the project, or inject any new code, whenever I modify a file.

This is 50% of the actual problem!

Because the root of the project is being mapped to the working directory within the container, calling npm install causes node_modules to be created in the root… which is actually on the host file system. This is where Docker’s incredibly slow mounted volumes kick the project in the nads. As is, you could spend as long as five minutes waiting for your project to come up once you issue docker-compose up.

“Your Docker setup must be wrong!”

As you’ll see, Docker is quite vanilla for this lil’ project.

First, ye ‘ole Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www

EXPOSE 3000  

And, of course, the beloved docker-compose.yml:

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
    working_dir: /var/www
    ports:
      - "3000"

As you can see, as-is this test project is lean, mean, and works as expected… except that the npm install is sloooooooooow.

At this point, calling npm install causes all of the project’s dependencies to be installed to the volume which, as we all know, is the host filesystem. This is where the pain comes in.

“So what’s the ‘trick’ you mentioned?”

If only we could benefit from having the root of the project mapped to the volume but somehow exclude node_modules and allow it to be written to Docker’s union file system inside of the container.

According to Docker’s docs, excluding a folder from the volume mount is not possible. Which makes sense, I guess.

However, it is actually possible!

The trick? Simple! An additional volume mount!

By adding one line to the Dockerfile:

FROM ubuntu:16.04

MAINTAINER "Fred Lackey" <fred.lackey@gmail.com>

RUN mkdir -p /var/www \  
    && echo '{ "allow_root": true }' > /root/.bowerrc \
    && apt-get update \
    && apt-get install -y curl git \
    && curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs \
    && npm install -g bower gulp gulp-cli jshint nodemon npm-check-updates

VOLUME /var/www  
VOLUME /var/www/node_modules

EXPOSE 3000  

… and one line to the docker-compose.yml file …

version: '2'

services:

  uber-cool-microservice:
    build:
      context: .
    container_name: uber-cool-microservice
    command:
      bash -c "npm install && nodemon"
    volumes:
      - .:/var/www
      - /var/www/node_modules
    working_dir: /var/www
    ports:
      - "3000"

That’s it!

In case you missed it, we added:

VOLUME /var/www/node_modules  

and

    - /var/www/node_modules

Say what!?!?

In short, the additional volume declaration causes Docker to create the internal hooks within the container (the folder, etc.) and wait for something to be mounted there. Since we never mount anything from the host at that path, we basically trick Docker into just writing to the folder within the container.

The end result is we are able to mount the root of our project, take advantage of tools like gulp.watch() and nodemon, while writing the contents of node_modules to the much faster union file system.

Quick Note re: node_modules:
For some reason, while using this technique, Docker will still create the node_modules folder within the root of your project, on the host file system. It simply will not write to it.