Categories
Programming

Change is Good!

For the past few years, I’ve been using Gatsby as the main tech behind several of my blogs. I love the fact that I can edit simple Markdown and my site is magically updated on the server with a fresh React-based static website. Unfortunately, as cool as the end result may be, the tooling behind it is far too complex and cumbersome for what it delivers. What’s worse is that, even with the complex back-end tooling in place and functioning flawlessly, the act of posting new content is far more tedious than it needs to be. The end result is fewer updates and an eventually stale and boring site.

Tedious Setup

Let us break down the tech stack allowing Gatsby to appear magical…

Git (AWS CodeCommit)

Once you have the local tooling set up for your site to build, it needs to be pushed up to a code repo. Since most of what I do online is with AWS, this means sending it to CodeCommit. From a developer’s perspective, the act of pushing it up is trivial. However, what is not trivial is the fact that you end up cracking open your IDE or a local text editor to make said edits. Even if I used Cloud9 or some type of online editor, it’s still an editor and still means working with text files that need to be pulled down, edited, and committed. And, while, yes, I could stick my public content in a public GitHub repo and edit it online, I’m still editing code directly in the source repo. Trust me, after that “new code smell” wears off, this process is more than annoying. With WordPress, on the other hand, I have a myriad of options… a web-based text editor with a draft mode, the ability to send an email to WordPress and have it converted to a post, etc. And, of course, all of these options are non-destructive.

CI/CD (AWS CodePipeline)

For the Gatsby content to be compiled and pushed to the public web host, you need to stand up some type of CI/CD process. My choice was AWS CodePipeline, of course. And, yeah, it’s something we regularly do as developers but… dude… this is a blog! Even if I only spend an hour or two standing up a new pipeline, it’s time I don’t have to spend with a CMS engine. Oh, and let us not overlook the fact that AWS charges per pipeline, so there’s an extra cost involved as well. With WordPress, your data is sitting in a database, or on the drive, and is rendered on the fly when it is viewed.

Web Host (AWS CloudFront)

Okay, I know what you’re saying. “It doesn’t matter what technology you use. You will need a host.” I’m not disputing that. And, I absolutely love AWS and CloudFront. I’ll use them forever on projects. However, the amount of work required to use CloudFront is simply not worth it for a personal blog. First, your domains need to be in Route 53. Next, you need to use Certificate Manager to request your certs for all of the names you will use on your blog. And then there is the configuration needed for CloudFront, its regions, its origin rules, etc., etc., etc. Oh, and once CloudFront is ready, you’ll need to return to CodePipeline and Route 53 to connect all of the services. For a WordPress site you can … well … there are countless easier options. From performing a one-click deploy from the AWS Marketplace, to setting up a LEMP stack on an EC2 instance, to Digital Ocean, WordPress.com, or the other million hosts out there, almost every one of them will have you up and running in minutes without having to know anything about what’s happening under the hood.

LEMP Stack

Now, I’m not here to bash AWS or sing the praises of WordPress. This post was only ever meant to explain why the site is changing and to apologize for any missing content while I swing it all over from CloudFront. However, what I would like to offer are a few links to some helpful articles that make standing up WordPress (even multiple instances on the same box) incredibly easy. I snipped these and pulled them into Evernote as soon as I found them. And, they are what I use whenever I need to stand up a simple blog in a non-critical environment:

How to Install LEMP Stack on Ubuntu 20.04 Server/Desktop

How to Install phpMyAdmin with Nginx (LEMP) on Ubuntu 20.04 LTS

Install WordPress on Ubuntu 20.04 with Nginx, MariaDB, PHP7.4 (LEMP)

Categories
Business Life Programming

Software As A Career

If you’re not in computers for a living in 2021, you’re nuts. It’s fun. It will improve your life (if you let it). And, it’s pretty dang recession-proof. I’ve repeated these words countless times over the years and, in some of those conversations, am often asked how to get started. So, I figured I’d sit down and write a quick "how to" page to help the people I care about tap into this world.

Step 1: Use Technology Daily
As generic or corny as it may sound, the world of computers is not something you can simply do or be in. It needs to be a part of your life. Those of us who are any good actually enjoy using technology and naturally make it part of our lives. We don’t need to decide or push ourselves to use technology. It’s just part of us and what we enjoy. Now, I know there are many people out there who are intimidated by technology. But being intimidated is not the same as not enjoying it. You just need to find that hook… that part you actually do enjoy. Once you realize how common the various concepts are, from one widget to the next, more than likely, technology will become an intimate part of your world as well.

So, why is this first step so important? Simply put, the more you enjoy something, the easier it will be… the more naturally you will gravitate to it. For example, as you use different cell phones, from different manufacturers, you will begin to notice similarities. As you move from cell phones to smart devices you will notice how similar those two worlds are. Tablets and laptops expand those device abilities even further. My point is that all of these devices basically work the same way. The more you use different devices, on a regular basis, the more organic your understanding of how they function will become. This is what makes technology "second nature" for many of us.

Computer Parts

Step 2: Build A Computer
Many of us these days just buy the cutest or shiniest laptop and call it a day. However, buying a prebuilt machine prevents you from knowing anything about what’s inside the box. This is the same as buying a car without knowing how an engine works. And while that ignorance might be fine with your vehicle, you are probably not planning to earn an income from your vehicle. You are planning to earn one from computers… so build one and learn what each part does.

PCs for Dummies

Step 3: Read A Book
Let’s face it, there’s a reason book stores don’t exist anymore. Nobody reads. I know I would rather watch a video than spend ten times as long reading a book on the same topic. However, in this case, there is one book I cannot recommend enough… PCs for Dummies! Don’t worry. It has lots of jokes, big fonts, and pictures. More importantly, it also has the foundational information that most people skip over when they are first getting started in computers. This foundational knowledge is critical if you want to actually be good at what you are about to undertake.

Call Center

Step 4: Take A Job in Technical Support
Have you ever called "tech support" and had someone reset your password or help you figure out how to make your printer work? That kid who answered the phone is referred to as "first line" or "first level" and is one step above "clueless"… just like you are at this point. Chances are he helped his mom install her cable modem once or twice before getting this job. Well, this kid needs to become your pal… your co-worker. Once your brain is overloaded with the basics of making your computer work, you will have just enough knowledge to understand the terminology and help other clueless people fumble their way through logging into some company web site somewhere. So, hop onto one of the job boards (Monster, Indeed, CareerBuilder, etc.) and look for a job in a call center as a "First Level Technical Support Agent" (or similar peon title).

But, Fred, I don’t know enough to teach someone! Yes, you do. First-level call center jobs assume you’re clueless and are set up to teach you how to use their in-house software or system. Most of the time they will give you a script so you don’t need to worry about "winging it." The biggest perk is that they are filled with countless pre-pubescent know-it-all teenagers who are all too happy to show you what they know.

[Images: Support Level 1 through Support Manager]

Step 5: Start Writing Code … ANY CODE!
There’s no correct time to start building software. You just need to do it. By tackling the steps in the order outlined here, you should, by this point, be in an office environment, have regular access to a computer, have a few of them at home, and have the foundation you need to get started. You will have also been in the computer world long enough to know what a "language" is and have an inkling of what is being used in your world. For example, you may be in a company that uses Microsoft Excel or Access in their daily workflow. You may have a club or interest that needs a web site. Or, you may want to start a blog. The bottom line is that, by this point, you will probably find a need for something basic that needs to be created. You don’t have to quit your tech support job. Just spend some time after hours, at lunch, or on the weekends creating something from scratch. It may sound a bit generic, but, by the time you get to this step, you absolutely will know the difference between these pieces and have some idea of what you want to create. The key word here: create!

Possible Detour: Network Administrator
One tempting fork in the road, after you put in a year or two at the call center, is working with the actual hardware or networking gear. This is a small detour I encourage for anyone who really wants to pursue a career in computers or software. In the same way that building a computer helped you understand how it works, working directly with many computers in a network setting, and making them talk to each other, is a great way to learn how they communicate and to understand what these beautiful boxes can do when they start talking to one another. Or, even better, if you’re in a corporate setting, you will probably be able to land a job helping users face-to-face with their hardware. So, after your time in the call center, consider spending a year or two as a Network Administrator. You will gain a certain amount of empathy for end users here and become very familiar with concepts like "single points of failure" or what happens when companies implement policies poorly.

[Images: Network Admin Level 1 through Level 5]

Possible Direction: Network Engineer
There are two basic "forks in the road" when it comes to more senior paths in computers. For now, just think of them as "hardware vs software". Hardware geeks can make a ton of money working with the actual devices that make computers and networks talk to each other. This is a natural path if you find yourself enjoying the "Network Administrator" role we talked about in the last section. It can be a great living for someone who enjoys problem solving or working closely with the hardware itself. I spent time in this world and worked for some really cool companies… Sprint, Nextel, AT&T, several banks, a semiconductor company, etc… and am grateful for the time I spent "under the hood." Having a solid understanding of this end of the spectrum has really helped me over the years. Many software developers just know how to make the graphics on the computer screen do something without really understanding what’s happening behind the scenes. If you have both, then you’re golden.

[Images: Network Engineer Level 1 through Level 5]

Step 6: Boot Camp
At this point you either want to stay in networking, making computers work together, or you want to come hang out with us cool kids actually creating something. Neither choice is correct. At this point we need to start building on your foundational knowledge and get you some education! Countless online and physical companies exist that will take you though a "boot camp" level course and teach you the basics of programming. These generally take a few months to complete and will give you a massive amount of knowledge in a very short amount of time. These are good for folks that have the ability to work and attend semi formal training sessions. Another avenue are online companies (Pluralsight, Cloud Guru, etc.) which offer self paced video tutorials that you follow along with and get your feet wet in developing software. Regardless of which you choose, these courses will teach you how to use the tools of the trade. The best part is that you will end up creating a few applications and, along the way, gain an understanding of how they work internally.

Step 7: Support Developer
Remember that tech support position you had a year or so ago? Well, it’s time to get another one. However, this time, you’ll be looking for bugs in software that some team of software developers created. Since software developers generally love creating new apps, and since fixing bugs in their older apps would not be nearly as fun or exciting, they need someone like you to dig through lines of source code and find the cause of their bugs. Basically, you’ll get paid to break things or figure out why they are broken. And, while this is technically another "first line" job, just like the tech support gig, it’s several steps up from that other role. Plus, now you’re actually part of a team helping create software!

Support Developer

Step 8 & Beyond: Software Developer
By this point you’ve gone from tinkering to actually being part of a team responsible for creating software. After spending some time in a Support Developer role, you will eventually be asked to create something new. You’ve clearly seen the screw-ups that caused problems, so you know what not to do as you create new applications. The bottom line is that you will continue to progress from the Support role as your skills develop. As time goes on, people will consider you the more "senior" person with a certain skill or technology. You’ll be on autopilot by now.

[Images: Developer Level 1 through Level 6]

Categories
Programming

Easily Dockerize Node Apps

Quick script to Dockerize and tag your Node app with the current version number without having to dig through files for values. For me, this is important as I use Docker with EC2 and ECS on AWS. Using the project name and version number from the package.json file allows me to automagically tag the Docker image… which, in turn, allows me to easily deploy specific versions of the app or service for various release methods (blue/green, etc.).

First, the script itself …

#!/bin/bash

main() {

  # Resolve this script's absolute path, then walk up three levels
  # (scripts/dev/build.sh) to land on the project root.
  local SCRIPT_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/$(basename "${BASH_SOURCE[0]}")"
  local BASE_DIR=$(dirname $(dirname $(dirname $SCRIPT_PATH)))

  # Pull the app's name and version straight out of package.json.
  local PKG_NAME=$(node -p "require('$BASE_DIR/package.json').name")
  local PKG_VER=$(node -p "require('$BASE_DIR/package.json').version")

  # Build the image from the project root, tagged name:version.
  local CMD="cd $BASE_DIR && docker build -t $PKG_NAME:$PKG_VER ."

  eval "$CMD"
}

main

The last three lines or so are the good stuff …

  local PKG_NAME=$(node -p "require('$BASE_DIR/package.json').name")
  local PKG_VER=$(node -p "require('$BASE_DIR/package.json').version")
  local CMD="cd $BASE_DIR && docker build -t $PKG_NAME:$PKG_VER  ."

The first two lines load the package name and version into the variables PKG_NAME and PKG_VER. That last line creates a proper command for Docker …

docker build -t my-cool-app:1.2.3 .

And, finally, I call this from my package.json file …

{
  "name": "my-cool-app",
  "version": "1.2.3",
  "description": "My Cool App",
  "main": "src/server.js",
  "scripts": {
    "build": "./scripts/dev/build.sh"
  },
  "author": "Fred Lackey <fred.lackey@gmail.com>",
  "dependencies": {
    "cleaner-node": "^0.10.0",
    "express": "^4.17.1"
  }
}

The end result is I am able to build my app into a Docker image by simply running …

npm run build

… with the result of having a Docker image built using the name and version of my app …

result example

… aaaand, a quick docker images shows it is available with the Node app and version as the Docker tag.

result images
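As an aside, if all you need is the tag itself, npm injects the package’s name and version as environment variables while running scripts. A minimal sketch, assuming a POSIX shell (the $-style expansion won’t work in Windows’ cmd.exe):

{
  "scripts": {
    "build": "docker build -t $npm_package_name:$npm_package_version ."
  }
}

I still prefer the script above since it works from anywhere and pins the build context to the project root.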

I hope this helps.

Categories
Programming

Errors, Statuses, and Exceptions … Oh my!

Panic. AbEnd (or "abend"). GPF. Blue Screen o’ Death.

There are many names for it but I’m certain you’ve seen one of those situations where your computer throws a fit, gives up, and basically checks out on you. In the world of web programming this is essentially what a "500 error" is from an application’s standpoint. I’d like to take a moment to explain what it is and why you should never send it.

Simplified Version

For those of you in the "TL;DR camp", let me offer the cheat sheet version…

What are they?

Concept     Level             Definition
Status      Request Context   Overall status and reliability of the request and response.
Exception   Method Scope      Returned result does not match the shape of the method result.
Error       Local Scope       Problem or event usually preventing processing.

When to return them?

Concept     Level             Use When
Status      Request Context   Sent to the client on every response.
Exception   Method Scope      Handled by the calling method and translated. Never sent back.
Error       Local Scope       May be the cause of an exception. Only visible in the exception.

Which status code set to use?

Qualifier                                                    Status Code Range
Is your service working the way it is supposed to?           2xx
Did the client mess up what they were supposed to send?      4xx
Is your process or server dead and/or no longer reliable?    5xx

Clarification

We all do it. For some reason, certain words get used interchangeably that are actually not interchangeable. So, to help clear up the mystery, let’s address those items up front.

Context vs Scope

In short, scope is the more granular of the two, while context is more encompassing. Both have various modifiers, like "global scope" vs "local scope" or "data context" vs "object context". So, while both can be broken down at an even more granular level, we need some starting point to work from without repeating the first month of a CompSci program.

Contexts (Request Contexts)

Whether you’re discussing the aging phrase "n-tier" or the newfangled "microservices" buzzword, all multi-tier solutions do the same thing. They all receive requests from one app, service, tier, or layer and then make calls to other apps, services, tiers, or layers. Since I’m speaking primarily to web developers, our "context" will refer to the request context of a typical multi-tier solution.

When a browser-based app calls a webservice, that browser-based app is the "client" and our back-end service is the "server." The relationship of these two items, working together, for that moment in time is a context. However, the moment that back-end service calls out to a different webservice, the webservice initiating that second call becomes the "client" and that upstream webservice is considered the "server". And, following the convention we discussed with the first client-server relationship, this second client-server pair is also a context. While both calls end up being chained together, each is performing some type of logic in its own little world, where that snippet of logic is only really valid for those few nanoseconds.

Let us take a peek at this scenario using some awe-inspiring shapes and colors…

example process

In this example, we show two separate contexts. The first context (in green) begins when the doIt() function is called from within the user interface. That function makes a call to the back-end webservice’s /api/do-it-now route and is not complete until it receives a response and returns it to the operation’s caller. In this example the server has determined it requires data and invokes a search() operation to call an upstream data service to fetch the needed information. Although one may depend on the other, that second context (in purple) is completely separate and detached from the first. Because they are separate, the status codes and error numbers are only considered valid and logical within each context.

Status Codes

Before we get into the dreaded 500 code, let us imagine a more easily understood code for a moment… the beloved 401 - Unauthorized status. Receiving this status from a webservice can only mean one thing: you did not have permission to call that service. Or, as the error name specifies, you were "unauthorized" to make your request. Assume, for a moment, that the second context (the one in purple) received a 401 - Unauthorized from the upstream webservice when fetching data. While this status code may make complete sense, should probably be logged, and is something we can troubleshoot, we would never send it directly back to the user interface, as this would be a lie. Think about it. If we were to tell the user interface application that it was "unauthorized" we would be saying that the UI application did not have permission to call the business service at all. And, since we happily received its request via the /api/do-it-now route, and began processing said request, that is clearly not true. What we do send back will need to make sense within the first context and will be entirely dependent on whether or not the /api/do-it-now function can still proceed, or how critical the failed call was for continued processing of that initial request. If the data was absolutely essential, and we cannot continue processing at all, then we need to explain why that specific call failed. Since we obviously expected the business service to be configured with the correct credentials for calling the upstream data service, some better status codes may be either a 417 - Expectation Failed or maybe even a 412 - Precondition Failed.
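To make that concrete, here is a minimal sketch of the translation, assuming an Express route and a hypothetical callDataService() helper that throws an Axios-style error for the upstream call:

app.get('/api/do-it-now', async (req, res, next) => {
  try {
    const data = await callDataService(); // the second (purple) context
    return res.status(200).json(data);
  } catch (ex) {
    // The upstream 401 means OUR credentials were rejected. Telling the
    // UI it was "unauthorized" would be a lie, so translate it instead.
    if (ex.response && ex.response.status === 401) {
      return res.status(417).json({ message: 'Upstream data service rejected our credentials.' });
    }
    return next(ex); // anything truly unexpected flows to the error handler
  }
});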

Scope

Rounding out this first comparison is the concept of scope. In the diagram above, the internal functionality happening within functions like doIt() or search() is happening within that method’s scope (often referred to as the "local scope" of each method). Similarly, if the doIt() method relied upon a BUS_SERVICE_URL variable that other methods share, more than likely, that variable would be set in the global scope. Regardless, just keep in mind that scope relates to the smaller internal functionality within those services or libraries.

Errors vs Exceptions

The only real similarity errors and exceptions share is that they both appear at the scope level. Aside from that, they are completely different and not interchangeable. The best way to think of them is that errors are bad and exceptions are not (at least, that’s how they’re intended). An exception is used within the caller’s local scope to provide an intelligent response that does not match the documented shape of the method’s result. Errors, on the other hand, mean that bad things are happening and you need to take cover.

Exceptions

Let’s look at the following snippet…

const stepA = (value) => {
  ...
  if (result === 'redrum') {
    throw new Error('No wire hangers!');
  }
  ...
};
const stepB = (value) => {
  ...
};
const stepC = (value) => {
  ...
};

export const doSomething = (value) => {
  let result = null;
  try {
    result = stepA(value);
    result = stepB(result);
    ...
  } catch (ex) {
    logger.info(90120, ex.message);
  } finally {
    if (result !== null) {
      result = stepC(result);
    } else {
      return false;
    }
  }
  return true;
}

In the example above, the public doSomething() function calls several smaller methods and returns a final boolean indicator to the caller. It does this even when an exception is thrown. The throw is used as a way to indicate that processing could not continue in that one step. It does not convey that the underlying system is unstable or malfunctioning. Exceptions are simply a means of returning a synchronous result to a caller which does not match the documented and expected result. So, if the caller is about to receive something other than the normal result, then go for it! Throw it, baby!

Playing Catch

Exceptions can be either handled or unhandled. This distinction exists for a reason. Many third-party components throw exceptions excessively while others should probably leverage this functionality more. The authors of those components are throwing exceptions to communicate with you and expect you to catch and handle the scenario. Regardless of whether it comes from a third-party component or your own, it is the responsibility of the parent function to account for exceptions, handle them, and craft a meaningful response that makes sense within the calling operation’s context. The HTTP client Axios, for example, throws an exception any time it receives a status outside of the 200 range. The authors of Axios expect these errors to be understood and handled. It is assumed processing will either continue normally or a translated and intelligent message will be returned by the parent operation should processing need to stop. Every single exception should be handled. An unhandled exception, on the other hand, means something completely catastrophic has happened and we need all hands on deck. However, even then, an unhandled exception should only ever be seen once since, once we know it can happen, we will add code to ensure we gracefully recover from it in the future.
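As a hedged illustration of that hand-off (the URL and the "empty result" rule here are made up for the example):

const axios = require('axios');

const search = async (term) => {
  try {
    const { data } = await axios.get('https://data.example.com/search', { params: { q: term } });
    return data;
  } catch (ex) {
    // Axios throws for any non-2xx status; the response (if any) rides on ex.response.
    if (ex.response && ex.response.status === 404) {
      return []; // translated: "nothing found" is a perfectly normal result for our caller
    }
    // Anything else becomes something meaningful within OUR scope.
    throw new Error(`Data service failed (${ex.response ? ex.response.status : ex.message})`);
  }
};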

Errors

Unlike exceptions, errors are generally bad and usually indicate a critical situation. For example, your database may be up and running but return an error when trying to execute a basic query. Or you may see the term "ERROR" in place of a result set. In either scenario, you know that something bad happened under the hood.

Error Numbers

Since we’ve already discussed status codes specifically, I guess it’s only fair to spell out what an "error number" is and when they are generally used. In short, error numbers are commonly used by developers to document exactly where a situation occurred in their source code (usually an undesirable event) and are almost never put in front of the end user (or, at least, not in a very prominent manner). Quality error numbers are globally unique and only appear in one specific scenario. Keep in mind, though, that they are attached to errors. And, since errors are generally never displayed, neither is the error number (again, unless it’s displayed very discreetly and [hopefully] with a plain-English explanation so as not to cause panic for an end user).

Quick recap…

Before we go on, let’s ensure we are all on the same page …

  1. Errors may pop up during normal operation and may be the thing that causes an exception to be thrown. In most situations they are bad;

  2. Exceptions are natural and should be used in local scope to indicate when a method call will not receive its expected result shape. They exist for you to use;

  3. All exceptions are handled gracefully within the scope where they occurred and are translated to something meaningful to the rest of the application. They are generally not passed along; and,

  4. Status codes have nothing to do with errors or exceptions and, instead, convey the reliability of the response to the caller.

The 500 Family

First up: Mr. 500!

His official description says it all: "The server encountered an unexpected condition which prevented it from fulfilling the request." Basically, it’s the web server’s way of saying, "Hey, you know all of that time you spent accounting for all of the bad things that could possibly happen? Well, this is something totally new that you never thought would happen. You should probably plan for a long day of troubleshooting."

Consider the following event handler commonly used in a middleware pattern:

// Four arguments mark this as Express's error-handling middleware;
// anything passed to next(err) lands here.
app.use(function (err, req, res, next) {
  console.error(err.stack)
  res.status(err.statusCode || 500).send(err.message || 'Something broke!')
})

As the example shows, both a statusCode and message property are expected to be passed in. As the authors of our code, we are the experts. Likewise, we have taken the time to understand any third-party or external components we may be leveraging. We therefore have an intrinsic opportunity to return an intelligent message in any scenario. This pattern assumes we have leveraged those opportunities but also continues processing should something happen which we never expected. In that unlikely scenario, a 500 status code is sent with the intention of communicating a completely unpredictable event.

The Rest of the 500 Clan

In general, the 500 series of status codes is meant to convey that something about our server is either not healthy or is broken. Some of these conditions may be temporary (like a 509 - Bandwidth Exceeded) and may resolve themselves at some point in time. Personally, I use the 501 - Not Implemented status regularly when adding a placeholder that I don’t expect to be called (kinda like a "to do" note), or when removing logic from an older application. The key here is that the status codes from 501 onward are also used intentionally, whenever we need to intelligently communicate a condition about our server.
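For what it’s worth, the placeholder pattern is about as small as it gets. A sketch, assuming Express and a made-up route:

// Not wired up yet; documents the route and answers honestly in the meantime.
app.post('/api/reports/export', (req, res) => {
  res.status(501).send('Not Implemented');
});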

Opinions are like…

As with anything on this blog, please remember that these are my opinions. This is the way I like to work. And, yes, I realize I’m a bit anal at times (thanks, Mike, for pointing that out… again). After all, that is software development… a never-ending trail of opinions. Granted, our code actually does something in the end. However, along the way, we need to appease the opinions or feelings of an end user, a business owner, or, yes, even other teammates. It doesn’t mean any one of them is "wrong" or "right."

Categories
Productivity Programming

Ugly Date & File Date Organizer

Photos, screenshots, audio recordings … there are countless files that are stored (or should be stored) with the event’s date in the name. And, as it becomes easier to create these, they accumulate at an ever-increasing rate… usually in one folder without any type of organization. Of course, we could take the time to organize those folders, move files into proper folders, and delete what we don’t … oh, nevermind. Who am I kidding? Nobody does that.

For me, one simple step to greatly improve the value of these files is to organize them by their event date. This would at least help me find them if I need them. Grouping all files from the same day would help narrow down what may be relevant to that specific time frame. Sadly, there is no magical command, or even a utility, that will do this easily. And, even if one exists, it surely won’t work on all of the machines I use.

Well, now there is! Enter File Date Organizer and Ugly Date!

Of course, I chose to write these two utilities using NodeJS to ensure I can use them on macOS, Windows, and Linux. That’s a no-brainer.

Writing File Date Organizer came to a screeching halt the moment I pulled in the first few files and noticed that every utility I use to create screenshots formats the file names with a completely different convention…

Screen Shot 2016-07-29 at 6.11.png
Screenshot 2014-09-07 13.36.45.png
Snapshot-3208-2016-10-11-08-31-16.JPG

Even worse was the error I received when I tried to use MomentJS (the de facto library for wrangling dates) to untangle those formats…

Deprecation warning: value provided is not in a recognized ISO format. moment construction falls back to js Date(), which is not reliable across all browsers and versions. Non ISO date formats are discouraged and will be removed in an upcoming major release. Please refer to http://momentjs.com/guides/#/warnings/js-date/ for more info.

After doing a bit of research, and seeing what options exist out there for detecting date formats, I understood why MomentJS decided to pull out their detection logic. Virtually every plugin or library out there uses basic RegEx (at best) to find four-digit years, two-digit minutes, or similar, and fails miserably. They all seem to want so badly to return a value that they make assumptions along the way. In the end, most of them return bad values instead of no values at all… which, in my opinion, is the worst possible scenario. This is where Ugly Date comes into play.

Ugly Date is a bit of an experiment and takes a slightly different approach to parsing. Instead of simply running a series of RegEx patterns, Ugly Date contains groups of patterns and validators with the intent of locating possible matches within the value, then scoring and comparing those results against each other to return the best pattern rather than any qualifying pattern. The change feels like a bit of a tradeoff. In the beginning, it will mean more maintenance and adding of patterns. However, over time, it should mean a better result when detecting more diverse patterns. Basically, you supply a string and Ugly Date parses it and returns both the date as well as a slew of potentially helpful information:

{
  "date": "2015-07-09T17:33:25.000Z",
  "hasDate": true,
  "hasDay": false,
  "hasTime": true,
  "pattern": "Screen Shot YYYY-MM-DD at hh.mm.ss a",
  "value": "Screen Shot 2015-07-09 at 1.33.25 PM",
  "values": {
    "YYYY": 2015,
    "MM": 7,
    "DD": 9,
    "h": 1,
    "mm": 33,
    "ss": 25,
    "aa": "PM"
  },
  "locations": [
    {
      "formal": "YYYY-MM-DD",
      "pattern": "YYYY-MM-DD",
      "position": 12,
      "type": "DATE",
      "value": "2015-07-09",
      "values": {
        "YYYY": 2015,
        "MM": 7,
        "DD": 9
      }
    },
    {
      "formal": "hh.mm.ss a",
      "pattern": "h:mm:ss aa",
      "position": 26,
      "type": "TIME",
      "value": "1.33.25 PM",
      "values": {
        "h": 1,
        "mm": 33,
        "ss": 25,
        "aa": "PM"
      }
    }
  ]
}

File Date Organizer is far simpler than its sister, Ugly Date. There’s really nothing magical happening under the hood. You supply the source and target folders, tell it whether to move or copy and whether to overwrite or ignore, and let it go. In turn, it uses the logic within Ugly Date to parse each file name and move the files into a folder structure with a proper date hierarchy:
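Something along these lines (an illustrative layout only, reusing the sample file names from above and assuming the default year/month/day depth):

/Volumes/MPHD01/Screenshots/
  2014/09/07/Screenshot 2014-09-07 13.36.45.png
  2016/07/29/Screen Shot 2016-07-29 at 6.11.png
  2016/10/11/Snapshot-3208-2016-10-11-08-31-16.JPG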

The command itself is fairly logical, with many "either / or" types of choices. Other than the source and target folder paths, the rest is somewhat a la carte. The basic command…

file-date-organizer \
  --source "/Users/flackey/Documents/Screenshots" \
  --target "/Volumes/MPHD01/Screenshots" \
  --use-name \
  --move

…can be swapped out with several other options. For example…

--move or --copy
--ignore or --overwrite
--use-created or --use-modified (for filenames not having a date)

It will also build your target folder structure using almost any of the date sections by using the various --add switches. For example, adding --add-second causes the entire folder structure to be built all the way down to the seconds in the date value (i.e., /YYYY/MM/DD/HH/mm/ss). There is a full list of switches on the project page. And, of course, the library can be used programmatically by pulling it into a Node project.
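For instance, a variation of the earlier command (same made-up paths), copying instead of moving and building the tree down to the second — assuming the switches combine as documented on the project page:

file-date-organizer \
  --source "/Users/flackey/Documents/Screenshots" \
  --target "/Volumes/MPHD01/Screenshots" \
  --use-created \
  --copy \
  --add-second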

Granted, none of this is a perfect solution. And, Ugly Date is taking a very different approach to parsing dates compared to more traditional libraries. If nothing else, this satisfied the anal-retentive side of my brain.

Categories
Programming

New Tool – File Line Replacer

There’s a new command-line tool for searching files, scanning those files for blocks of multi-line content, and then replacing those blocks with different lines. Some benefits of this are…

  • works on Windows, Mac, and Linux
  • no nasty RegEx or escape characters to specify multi-line values
  • backs up original files before making changes (if desired)
  • whitespace is either ignored or preserved … your choice
  • supports text files of virtually any size

Yes, I realize there are other utilities out there that will replace text… sed, awk, etc. PowerShell will even do it if you know the switches. However, in my opinion, all of them are heavily opinionated and take the geek-first approach. I wanted something I could give to a junior or mid-level person and know they can get the job done without spending their time researching how to structure some overly complex command.

One of the best tool sets for prototyping a relational data service is…

DbSchema : for brainstorming and designing the entities;

ExpressJS : probably the best web framework for hosting the web service; and,

Sequelize ORM : to generate the models and handle the data calls.

My original need came while using Sequelize to generate model files for a new data service. I’m not sure what caused it (maybe switching between MySQL and PostgreSQL), but the models did not include logic for auto-incrementing primary key fields. So, models ended up having this…

id: {
  type: DataTypes.INTEGER.UNSIGNED,
  allowNull: false,
  primaryKey: true
},

… when they should have had this …

id: {
  type: DataTypes.INTEGER.UNSIGNED, 
  autoIncrement: true, 
  primaryKey: true 
},

So, why not contribute to the Sequelize project and submit a fix? The short answer is that the need to search & replace multiple lines is not specific to Sequelize. As a developer, all of your work is done with text files… the source code. And, over the years, I’ve had reason to perform this type of task several times. Creating file-line-replacer allowed me to get past the hiccup and be ready for the time when I need it again, outside of Sequelize.

Installing the utility is a snap. Once you have Node on your machine, simply install the command with…

npm install -g file-line-replacer

This installs the project and allows it to be used just like any other command-line utility. Then, correcting the model files was as simple as issuing one lil’ command…

file-line-replacer \
  --search-dir "/Users/flackey/my-project/src/data/models" \
  --backup-dir "/Users/flackey/my-project/_backup" \
  --old-lines "allowNull: false,|primaryKey: true" \
  --new-lines "autoIncrement: true,|primaryKey: true" \
  --overwrite

The switches used here are the key. Here’s what they do…

--search-dir
Starting directory to search for files.

--backup-dir
Each file is stored in this location before it is modified.

--old-lines
Pipe-delimited list of text lines to search for within each file.

--new-lines
Replacement lines for each occurrence of the --old-lines value.

--overwrite
Ensures we know the files will be overwritten (flags are set to true by simply adding the flag name to the command).

There are tons of other flags and features listed on the project page. Some of them include…

--source-file
Not everyone wants to search for files. You are able to specify the exact file to tweak. This is great if you want to use file-line-replacer in a BASH script.

--destination-file and --destination-dir
Maybe you don’t want to overwrite your files. Specifying the “destination” allows you to tweak your files and send them to a specific folder. This is great for working with source templates where overwriting or modifying the template is not desired.

--old-lines-file and --new-lines-file
Allows you to store the “old” and “new” lines inside of text files. You would provide a path to the file instead of supplying the actual values. This is handy for complex lines and making your scripts more “human-readable.” (See the sketch after this list.)

--ignore-patterns and --ignore-patterns-file
The default search pattern is **/*.* (aka “all files, recursively”). Specifying “ignore” patterns allows more granular control over which files and directories to skip.
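To tie a couple of these together, here’s a hedged sketch of the --old-lines-file / --new-lines-file flavor of the model fix from earlier (the two text file names are made up). With the old lines…

allowNull: false,
primaryKey: true

…saved as old-lines.txt, and the new lines…

autoIncrement: true,
primaryKey: true

…saved as new-lines.txt, the command becomes…

file-line-replacer \
  --search-dir "/Users/flackey/my-project/src/data/models" \
  --backup-dir "/Users/flackey/my-project/_backup" \
  --old-lines-file "./old-lines.txt" \
  --new-lines-file "./new-lines.txt" \
  --overwrite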

In the grand scheme of things, I could have accomplished all of this with a BASH script. However, then I would have had more of a “uni-tasker” and not really gained anything in my developer toolbox.

Overall, I think this is a great lil’ utility. It performs a task that is quite common with developers and IT people while preventing folks from having to remember the complex syntax for outdated commands. It also allows me to personally overcome a speed bump that has been occasionally bothering me for years.

In the end, I hope whoever finds the utility is helped in some way. After all, that is why I love development so much.

Categories
Programming

Bixby Killed Samsung (for Me)

I’ve been writing software now for 36 years and have been focused on mobile apps for 12 of them. And, like many geeks, I’ve had every iPhone since the day it was released. However, a couple of years ago, I switched from iPhone to Android because of Samsung Mobile’s S7 Edge. It was the first Android phone that felt completely… natural in my hand. I also switched from my Apple Watch to the S2 and then the S3 (which I feel are infinitely better than the Apple Watch). When the S8 came out, I was confident that I would never switch phones again. And then came Bixby. I hate Bixby so much that I switched to an LG G6 just to get rid of the Bixby button. I now have THREE brand new S8 units, my S2 watch, my S3 watch, and two Gear VR headsets sitting in a drawer collecting dust. Have I considered the Samsung Galaxy S9? Of course I have. I loved my Galaxy & Frontier devices. However, until I can completely disable Bixby, I will never go back to Samsung.

I’m curious how many feel the same way.

Categories
Programming

Receive SMS Messages Via Email from Flowroute Phone Numbers

In today’s mobile world, people just assume every phone number is a cell phone… even if it’s clearly listed as “office” on your business card. And, in most cases, if the phone number belongs to a corporate phone system, or PBX, any text messages sent to that number are lost forever in the great bitbucket in the sky. Until now, that is! If you happen to be using Flowroute as your back-end trunking provider, you can now receive any SMS text message via email.

Here’s how to do it…

  1. Set Up My Proxy App Using Docker

I’ve whipped up a simple Node app to make life easy for you. In short, it receives all SMS text messages from Flowroute and emails them to you at either a single email address or a custom “wildcard” domain. Assuming you have Docker installed on a public server, install it via the following command:
docker run --name flowroute-proxy -p 3000:3000 \
    -e TO_EMAIL=bruce@batmail.com \
    -e SMTP_PASS=robin4ever \
    -e SMTP_USER=bruce@batcave.com \
    -e SMTP_HOST=smtp.batcave.com \
    fredlackey/flowroute-proxy

The settings are all handled by environment variables. A complete list is on Docker Hub:

https://hub.docker.com/r/fredlackey/flowroute-proxy/

Of course, it will be up to you to ensure your DNS and server settings are set up with a FQDN pointing to that Docker container. You’ll also need an SMTP account for outgoing messages.

  2. Activate the API with Flowroute

Once you have a Flowroute account, head over to their Developer Portal and click on the Get API Access button. This will bounce you over to the Flowroute portal, where you will enter the URL to the Docker container you set up above.
Categories
Programming

Cool Utility – Live Server

I just stumbled across one of those “it’s about time” utilities for front-end app development: Live Server. Long story short, you issue the command live-server from your application’s current directory and… well… that’s it. A browser pops open, your web app is loaded, and the lil’ utility watches for changes. Any changes that are made are instantly pushed to the browser.

Installation is ridiculously easy via NPM:

npm install -g live-server

Of course, there’s a slew of command line switches and parameters to make even the geekiest geek happy:

  • --port=NUMBER – select port to use, default: PORT env var or 8080
  • --host=ADDRESS – select host address to bind to, default: IP env var or 0.0.0.0 (“any address”)
  • --no-browser – suppress automatic web browser launching
  • --browser=BROWSER – specify browser to use instead of system default
  • --quiet | -q – suppress logging
  • --verbose | -V – more logging (logs all requests, shows all listening IPv4 interfaces, etc.)
  • --open=PATH – launch browser to PATH instead of server root
  • --watch=PATH – comma-separated string of paths to exclusively watch for changes (default: watch everything)
  • --ignore=PATH – comma-separated string of paths to ignore (anymatch-compatible definition)
  • --ignorePattern=RGXP – regular expression of files to ignore (i.e. .*.jade) (DEPRECATED in favor of --ignore)
  • --middleware=PATH – path to .js file exporting a middleware function to add; can be a name without path nor extension to reference bundled middlewares in the middleware folder
  • --entry-file=PATH – serve this file (server root relative) in place of missing files (useful for single page apps)
  • --mount=ROUTE:PATH – serve the path’s contents under the defined route (multiple definitions possible)
  • --spa – translate requests from /abc to /#/abc (handy for Single Page Apps)
  • --wait=MILLISECONDS – (default 100ms) wait for all changes before reloading
  • --htpasswd=PATH – enables http-auth expecting htpasswd file located at PATH
  • --cors – enables CORS for any origin (reflects request origin, requests with credentials are supported)
  • --https=PATH – PATH to a HTTPS configuration module
  • --proxy=ROUTE:URL – proxy all requests for ROUTE to URL
  • --help | -h – display terse usage hint and exit
  • --version | -v – display version and exit
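As a quick example combining a few of these (the paths are made up):

live-server --port=3000 --open=/index.html --watch=src,css --quiet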

If you’re building web-based apps, or even if you’re just starting out in web development, this little gem will save you a tonne of time up front.

Enjoy! =)

Categories
Programming

Filter Out Docker Noise

Sometimes the smallest lil’ gem makes you feel great. For me, Docker’s --format option is one such gem. As much as I love Docker, their commands’ output is far too verbose and noisy for my taste. In fact, the net is filled with complaints about this. However, the --format option makes them perfect… or closer to perfect. Even the noisiest command can be transformed…

… from this …

Before Docker Aliases

… to this …

After Docker Aliases

… in just a few extra keystrokes!

It outputs just the right amount of info to be particularly great for “4-up” or “2-up” arrangements…

Docker’s info for the ps command completely sucks and offers no info on this option. In short, you basically use it to tell Docker which columns to display. For example, with ps you have the following columns to choose from:

  • ID
  • Image
  • Command
  • RunningFor
  • Status
  • Ports
  • Names

So, for the example above, the syntax would be:

docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"  

Or, better yet, if you’re on Linux or macOS / OSX, take a few seconds and create aliases for dps and dpsa in your ~/.bash_aliases file by adding these two lines:

alias dps='docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  
alias dpsa='docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.Ports}}"'  

Enjoy… finally! =)

(I’ve added these two aliases to my dotfiles project, if you’re following that project.)