
A Potential Solution to Our Social Network Problem

I haven’t spent much time publicly discussing the idea of alternatives to social networks. Luckily, Manton Reece has, which served as a great jumping-off point for this post. I won’t re-hash too much of Manton’s discussion because it’s entirely worth reading on its own, but he did mention a couple things that I feel are worth mentioning here.

Manton broke down what it would take for us, as a collective, to free ourselves from the monolithic nature of our current social network landscape. He’s been on this train for a while, and with good reason: the microblogging community/platform is his doing and exists because, at the end of the day, we should spend more time crafting our content using less-specific and more ambitious platforms. To those who travel in the own-your-own-content circles, this is known as POSSE.

That’s not the topic of this post, though he does touch upon it. The biggest takeaway I have from his post is that our social networks should be smaller, not larger. It’s a tough idea to wrap one’s head around; when you think about why people end up on the networks they do in the first place, you see a pretty large hurdle.

Looking back on how I joined MySpace, Facebook, Twitter, Instagram, and the like, there was always some sort of motivation based on a friend or other people I knew. I frankly can’t remember how I came to learn about MySpace, but I know I joined Facebook because people I knew were joining Facebook. There wasn’t anything particularly appealing about the platform at all. What I knew was that my friends were there and that’s where I wanted to be. 

A lot of folks do not care about the nuances of social networks and said networks know this all too well. 

When we present the idea of smaller social networks (in contrast to the idea of having just a handful of very large ones), this is the challenge to beat: most folks just won’t care. All they’ll find important is whether the platform can get them access to their friends.

Conversely, some will ignore this notion entirely. My fiancée doesn’t use Twitter, Instagram, Snapchat, etc., even though I’m sure a not-insignificant number of her friends are available there. She’s found what she needs to stay in contact with the right people on Facebook and that’s that.

If I told her that XYZ was the coolest new social network and she should join because they care about things like privacy, ads, etc., all that would happen is the information would enter her head and disappear. She would fall into the category of desiring function over form. 

I land somewhere in the middle. I’m all for more communities of people that mean something, but it’s a very fine line to walk. Twitter and Facebook can become echo chambers quite easily if you’re not careful, and smaller communities are even more prone to it.

Wisely, though, Manton’s overall focus is on content ownership. If Facebook died tomorrow, I’d probably lose a not-insignificant number of photos, but the ones that matter most are already in other places (iCloud backups, for example). Not everyone is so lucky. There could be years of vacation photos that, one way or another, only exist on Facebook.

In his last paragraph, Manton accurately describes blogging more as a solution to social media frustration. I entirely agree. Even if not many people read it, or you only find yourself writing once a week, take the time to write there about topics you care about instead of retweeting everything you see (guilty!). Turn those re-tweets into posts. Expand your thoughts. If nothing else, it’ll help you form a better opinion about why you like that thing you found.

If you’re keeping a close eye, you’ll notice that this post is itself exactly the kind of thing Manton is describing: taking the time to write down thoughts on a platform you control and then sharing them with the world.

Blogging will always be something I enjoy. I’ve had periods of time where I failed to discover anything meaningful I wanted to say–this happens to everyone–but even in down periods, I had topics on the brain. The easy way out is to just write a tweet. The satisfying way out is to blog. 

Medium as a Blogging Platform

All I could do was shake my head when I saw the post from John Gruber mentioning that Medium is no longer allowing folks to use custom domains for their Medium-hosted publications (read: blogs). The alternative solution is to resort to a URL. That sounds fantastic. /s

It seems like only yesterday that I posted about Medium supporting custom domains, though it was really over three years ago.

While old URLs will continue to work, this flies in the face of anyone expecting any kind of long-term stability and consistency from the service and company. This change reminded me of a post from a friend of mine about how self-hosting is the safest and most sustainable option. A couple years later, that’s still the case.

Gruber sums it up well:

There is tremendous strength in independence and decentralization.

and Rafat Ali takes Medium to task (and rightfully so):

Seeing Medium take this course–into blanket depersonalization of authors and content–makes me a bit sad. I have always loved their editor and I knew they were doing something right when WordPress introduced a similarly-behaving editorial experience.

Dave Winer:

The problem isn’t cost, or the tech — it’s users. Medium has momentum, still, as the place-of-record for web writing, much as YouTube is it for video. It’s a shaky foundation. No business model, huge money already invested. Not a good situation.

Without a definite revenue stream (being a Paywall as a Service doesn’t count), I wonder where Medium plans on heading next. Maybe they’ll limit word counts? Maybe most articles will cost money, a la the Wall Street Journal? Or better still… good old-fashioned ads? Who am I kidding, old-fashioned ads aren’t in style anymore; these ads will have a cargo ship’s worth of tracking tech baked in.

I joke… but it’s probably coming.

Google Violations

Dave Winer reminded me why I opted to never use AdSense again:

Emails like this from Google about “violations” on my website are really disturbing, esp since I don’t run any Google ads on my site. I wish they’d STFU about violations, or say what the violations are. I always can use a good laugh. Their email is a violation of my independence. Fuck off.

Yes. 👏

Additionally, I’m told this blog is not fit for Google AdSense, which is fine. The last time I used AdSense was 2012 to 2013, I think. I was kicked out of the program without evidence that I did anything wrong, and I couldn’t argue against it; without knowing what I did, I had no idea what to fix. Google always has the last say when it comes to their services, and while that’s not particularly unique, they do a great job of reminding us from time to time that we’re at their mercy and should consider ourselves grateful that they’re letting us play in their holy sandbox.

I brought the Carbon ad network (under the ownership of BuySellAds) on board last week, an email I was genuinely excited to receive. They’ve been great to work with so far, have only a handful of rules, and the advertisers they allow are top-notch. As far as serving banner ads goes, Carbon is the limit. The rest of the revenue will be made up of affiliate links for stuff I actually believe in (and use) and site memberships. Dave has no ads, and that’s entirely fine–and his prerogative. I think the right network can make a world of difference.

Plus, I’m not subjecting my visitors to unknown levels of ad-tracking. BSA/Carbon does a bit of it, yes, in order to inform their advertisers how their ads performed, but that’s about it. I appreciate clean and simple advertising and wish the rest of the world would get on board. Even a single tear would not be shed if AdSense went away tomorrow. 

I can dream.

Considering a Return to WordPress

Over the last few weeks, I’ve mulled over–several times, in fact–going back to WordPress, for the simple reason that a static site, while fast and relatively secure, is hard to scale. It’s also a bit more difficult than I would like to post on a regular basis. I enjoy Forestry a lot, actually, but I’d rather use tools like MarsEdit, something I can’t do right now.

Updates to posts require multiple steps and as my post count grows, it just gets more out of control.

What’s holding me back at the moment is creating a WordPress theme (again). I didn’t really enjoy doing it the first time around. 😂

Deploying A Jekyll Static Site with Circle CI

One of the primary steps in making each iteration of this site happen is deploying its generated HTML files to my Digital Ocean server. Since these are just static files and there isn’t a CMS backing them up (in a traditional sense), there needs to be an automatic process that takes care of it after I make a change. If I had to manually push or build the site every time I added something, I’d:

  1. never do it
  2. go back to a CMS

This is where Circle CI enters the picture. Used mainly for software development, Circle CI allows those inclined toward the software development realm to build, test, and deploy code. If it can run on a Linux command line, Circle CI can run it.

For the grand starting price of zero and the promise of keeping code open source, I’m offered up to four concurrent builds and 25 hours of build time. As the operator of a static site powered by Jekyll, this is way more than I’ll need, but I’m glad it’s there.


In this post, I’ll:

  • Break down how I make this site happen with Circle CI
  • Take a look at my Circle CI config file and go over my workflow a bit

It used to be way more complicated before I wrote this post, but as I was thinking about what to write, I realized I was doing way more work than I needed to in the build process.

If you’d like to follow along, this entire site is available to browse in GitHub repo form, and the Circle CI config file is here.


The Circle CI File

Starting off first, let’s take a look at the defaults I have set:

defaults: &defaults
  docker:
    - image: circleci/ruby:2.5.1-node-browsers
  working_directory: ~/repo

What you’re looking at here are values that I’ll always need, no matter how many steps, jobs, etc. I’ll end up with. Since I have only one job, this is more of my “do not touch” section in that these values will never change and only new ones will be added. (You can find a more in-depth explanation of the purpose of defaults here.)

version: 2
jobs:
  the_only_job:
    <<: *defaults

Now we’re entering job territory. This is where I specify the actual tasks I need Circle CI to run. I only have one job now–the_only_job–but if I had more, they’d be broken down like this (the extra job names here are just illustrative):

version: 2
jobs:
  first_job:
    <<: *defaults
  second_job:
    <<: *defaults
  third_job:
    <<: *defaults

Each job would call upon the defaults because Circle CI treats each job as a separate build and would need its own container. In multi-job scenarios, having a set of defaults to share across all jobs is truly a no-brainer.


Inside our job, we have a set of steps.

Note: This is a list of tasks that should be performed by the container. Everything in this section is in the context of:

the_only_job:
    <<: *defaults
    steps:

So never mind the lack of full indentation. It saves me from repeating lines a dozen times.

Pre-game Tasks

- add_ssh_keys:
    fingerprints:
      - "69:fe:2c:df:c8:34:c5:e6:3f:6e:18:64:43:97:58:02"

The very first thing I have the container do is add an SSH key using the add_ssh_keys step. I’ve provided Circle CI with a key to the production server as a specific deploy-only user. This adds the key to the container so it can connect to the server later without me needing to provide hardcoded credentials. Doing so would be a massive security breach as my Circle CI builds are open to the public.

- checkout

Once that’s good to go, I have Circle CI check out the latest code from the master branch of the johlym/ repository. Simple enough.

- attach_workspace:
    at: ~/repo

The third step is attach_workspace. This was more relevant when I had multiple jobs but the idea here is that we’re creating a persistent and consistent location within the job container to do all our task work. In this case, I need to make it clear that we’ll be doing all our work in ~/repo from here on out.

Cache Handling, Part 1

- restore_cache:
    keys:
      - v1-bundle-{{ checksum "Gemfile.lock" }}-{{ checksum "package.json" }}

This part is important if there’s even a stretch goal of having a speedy build process. The restore_cache step looks for a cache file that we’ve already built (something we’ll do at the end) to save time with things like bundle and npm. Without this, we could spend a few minutes just installing Rubygems and Node modules. bleh.

The cache key combines checksums of the Gemfile.lock and package.json files. If those files never change, the checksums won’t either, so the cache will remain valid. If I were to update a gem, for example, the cache would be invalidated and Rubygems and Node modules would be reinstalled.
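The idea can be illustrated in plain shell. This is only a stand-in sketch: md5sum substitutes for Circle CI’s {{ checksum ... }} template, and the Gemfile.lock contents below are made up.

```shell
# Build a cache key from a file's checksum; the key only changes when the file does.
printf 'jekyll (3.8.3)\n' > Gemfile.lock
key_before="v1-bundle-$(md5sum Gemfile.lock | cut -d' ' -f1)"
key_same="v1-bundle-$(md5sum Gemfile.lock | cut -d' ' -f1)"

# Simulate updating a gem: the lock file changes, so the key changes too.
printf 'jekyll (3.8.4)\n' > Gemfile.lock
key_after="v1-bundle-$(md5sum Gemfile.lock | cut -d' ' -f1)"
```

Same file, same key; changed file, new key. That’s exactly why updating a gem triggers a fresh install instead of a cache hit.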

One potential spot for improvement here is to break this out into two separate caches, but Circle CI doesn’t handle that well, so this’ll be fine.
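For reference, if the split were attempted anyway, it might look roughly like this sketch (the v1-gems/v1-npm key names are hypothetical):

```yaml
# Hypothetical two-cache layout: gems and Node modules restored and saved separately.
- restore_cache:
    keys:
      - v1-gems-{{ checksum "Gemfile.lock" }}
- restore_cache:
    keys:
      - v1-npm-{{ checksum "package.json" }}
# ... install and build steps ...
- save_cache:
    key: v1-gems-{{ checksum "Gemfile.lock" }}
    paths:
      - ~/repo/vendor/bundle
- save_cache:
    key: v1-npm-{{ checksum "package.json" }}
    paths:
      - ~/repo/node_modules
```

The upside would be that updating a gem no longer invalidates the Node module cache, and vice versa.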


- run: 
    name: Install Rubygems if necessary
    command: |
      bundle install --path vendor/bundle --jobs 4 --retry 3
- run: 
    name: Install Node modules if necessary 
    command: |
      cd ~/repo && npm install
- run: 
    name: Install Rsync
    command: |
      sudo apt install rsync

This part is pretty straightforward. I need to make sure all the required Rubygems, Node modules, and Rsync are installed. In this case, I’m making sure bundle puts everything in vendor/bundle (remember, this is relative to ~/repo since we declared that to be our workspace earlier) when it installs. The Node modules I don’t need to worry about as much: package.json is located in the ~/repo directory since that’s where the code was checked out to, so we hop in there and get to it. It’ll plop its node_modules folder at ~/repo/node_modules as a result, which is totally acceptable. Lastly, we install rsync via apt. Nothing special.

Site building

- run: 
    name: Build site
    command: |
      bundle exec jekyll build --profile --verbose --destination /home/circleci/repo/_site

For those who’ve worked with Jekyll before, this command shouldn’t come as a surprise. We’re asking Jekyll to build out the site and place it at ~/repo/_site. The --profile and --verbose flags are for CI output only, in case there’s an error or my curiosity gets the better of me.

Site tweaking

- run:
    name: Install and run Gulp
    command: |
      cd ~/repo && npx gulp

When thinking about how I wanted to handle minification of HTML and JavaScript, I considered the jekyll-assets plugin but decided against it because of the amount of overhead and work required to implement it in my already moderately-sized site. Instead, I brought in Gulp. I have a simple Gulpfile set up to use a couple Gulp modules to minify all the HTML and local JavaScript. Across the 400-something pages I have, this saves me about 20% on overall site size. Not too shabby.
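My actual Gulpfile isn’t shown here, but a minimal sketch of that kind of setup might look like the following. This assumes gulp 4 with the gulp-htmlmin and gulp-uglify modules installed via npm; the paths are illustrative.

```javascript
// Hypothetical gulpfile.js: minify the HTML and local JavaScript Jekyll emitted into _site.
const gulp = require('gulp');
const htmlmin = require('gulp-htmlmin');
const uglify = require('gulp-uglify');

// Collapse whitespace and strip comments from every generated HTML page.
function minifyHtml() {
  return gulp.src('_site/**/*.html')
    .pipe(htmlmin({ collapseWhitespace: true, removeComments: true }))
    .pipe(gulp.dest('_site'));
}

// Minify the site's own JavaScript in place.
function minifyJs() {
  return gulp.src('_site/assets/js/**/*.js')
    .pipe(uglify())
    .pipe(gulp.dest('_site'));
}

// `npx gulp` runs this default task.
exports.default = gulp.parallel(minifyHtml, minifyJs);
```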

You’ll notice we need to use npx here. For some reason, I was never able to get Gulp to run on its own… it would look for the gulp binary in strange places I could not control. npx allows me to run gulp wherever, so long as it can find the corresponding node_modules folder for reference. Brilliant, eh? Portable Gulp.

Server Push

- run: 
    name: Deploy to prod server if triggered via master branch change
    command: |
      if [ $CIRCLE_BRANCH = 'master' ]; then rsync -e "ssh -o StrictHostKeyChecking=no" -va --delete ~/repo/_site [email protected]:/var/www/; fi

This is pretty straightforward as well, though it can look complicated to the untrained eye. What we’re doing here is first checking whether the branch this build is based on is the master branch; we’ll find that value in the CIRCLE_BRANCH ENV variable. If it is not, we’ll skip this step, but if it is, we’ll run rsync to push the contents of ~/repo/_site over to the production Digital Ocean server. I’m using the IP here because of Cloudflare, though I have a TODO item to use a hostname instead.


For all intents and purposes, the deployment is done, but because of Cloudflare, we have one additional step to make sure everyone’s seeing the freshest code.


(we’re still in the jobs context)

- run: 
    name: Bust Cloudflare cache if triggered via master branch change
    command: |
      if [ $CIRCLE_BRANCH = 'master' ]; then 
        curl -X POST "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/purge_cache" \
        -H "X-Auth-Email: $CLOUDFLARE_API_EMAIL" \
        -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
        -H "Content-Type: application/json" \
        --data '{"purge_everything":true}'; 
      fi

Using the Cloudflare API, we’re submitting a POST request to dump the entire cache for the DNS zone. I’ve provided Circle CI with the necessary information as ENV variables and am calling upon them here. This keeps them safe and the job step functional.

I wouldn’t recommend this for high-volume sites, but because I have Cloudflare caching just about everything combined with the fact that I maybe do this a couple times a week, this feels like the right level of effort and precision.
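For a higher-volume site, the same endpoint also supports purging specific URLs instead of the whole zone. A hedged sketch, reusing the same environment variables; the file list here is made up:

```shell
# Hypothetical selective purge: only the pages that actually changed.
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CLOUDFLARE_ZONE_ID/purge_cache" \
  -H "X-Auth-Email: $CLOUDFLARE_API_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"files":["https://example.com/index.html","https://example.com/feed.xml"]}'
```

The trade-off is that something has to figure out which URLs changed, which is more build-process plumbing than this site warrants.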

Cache Handling, Part 2

- save_cache:
    key: v1-bundle-{{ checksum "Gemfile.lock" }}-{{ checksum "package.json" }}
    paths:
      - ~/repo/vendor/bundle
      - ~/repo/node_modules

Earlier, we called upon the generated cache. Here is where we create it if necessary. This’ll do the same check before acting: if a cache file already exists with the same checksums, we’ll skip creating it, but if it’s missing, we’ll build it out, making sure to capture everything from the ~/repo/vendor/bundle and ~/repo/node_modules folders we populated with Rubygems and NPM.

Workflow Management

workflows:
  version: 2
  build_and_deploy:
    jobs:
      - the_only_job

I used to have multiple jobs running in a breakout-combine pattern, and this is leftover from that. Although I only have one job now, I didn’t want to re-craft everything to avoid Workflows, so I just operate with a one-job Workflow instead. XD

Wrap Up

That about does it for my overview. This process is turning out to work very well for me and I’m glad I took the time to both develop it and explain it for posterity. Over time it’ll morph, I’m sure, but right now this feels like a really good base to work off of.

Thanks for taking the time to read this. Cheers!

The Static Site Book

Over the last few weeks, I’ve spent a not-insignificant amount of time thinking about how I wanted to convey all the information and tips I have to share about building a static site and making it good, making it something you want to maintain, and even going so far as to sprinkle a bit of enjoyment on top.

The best way I can think of to do this in a format that makes the most sense to me and how I communicate, is via text and images in a book. Over the coming weeks, I’ll take a look at what it’ll take to self-publish a book and how I can best offer it to my visitors.

Here’s what I know I want to include for sure:

  • An overview of the concept of static sites
  • Jekyll 101
  • Reviewing common tools
  • More advanced topics like SEO, caching, and asset management
  • A few example sites and how to build them from start to finish using what we’ve just learned.

As far as a release date goes, I can only imagine it’ll land somewhere between the end of 2018 and the end of 2019. I’ve never done this before, so it’ll likely take a lot longer than it would for more seasoned writers.

The working title is simply: The Static Site Book.

If there’s anything in particular you’d like to see, please let me know. I’d love to have the community’s help in making this happen.

A Week of Cloudflare Argo

I recently made a pretty heavy shift over to Cloudflare. The majority of assets and HTML on this site load from Cloudflare’s servers now, instead of my own. The only time this differs is when I push changes up to its GitHub repo. The last step in the build process after deploying to my server is hitting the API and requesting a complete cache dump. I could be more programmatic about it but updates aren’t frequent enough to warrant a more careful approach.

On its own, Cloudflare’s caching and CDN is great, but there’s another piece that makes it even better. Argo is Cloudflare’s smart routing feature that makes real-time choices about the best path to take from the Cloudflare POP (point of presence) to the origin (my server). This can dramatically speed up the requests that do have to make their way to my San Francisco-located server.

It’s $5 a month plus $.10 per GB. Since most of the content on this site is text and a smattering of images, I’m not concerned with the price and $5 is fun money, basically. I saved that moving from DNSimple to Cloudflare for DNS, too, so there’s that.

Looking at some of the stats, now…

As reported by (the service that powers and the uptime percentage at the bottom of the page), the majority of my requests come in in no time at all, with the obvious winner being Los Angeles (closest to San Francisco, my DigitalOcean location). The outlier is France, though I’m not too concerned with it. It seems to be fluctuating.

(Update May 24, 2018: I noticed response times from had dropped to sub-100ms averages thanks to France falling in line with the rest of the countries. The only outlier at this point is Sydney’s lookup taking 3-9x longer than the other locations.)

Moving over to Pingdom

It’s pretty obvious when Argo was enabled. I’m not sure what happened with the spike halfway between the 16th and 17th, but a response time cut in half is amazing, especially after doing zero work on the server itself.

Lastly, the Cloudflare stats…

We see a clear improvement in their response time metrics. Including the entire TLS handshake process, these sub-200ms response times in most cases are wonderful. At scale, Argo has the potential to be mind-numbingly fast.

I’ll continue to use Argo for the foreseeable future. Cloudflare is free and works great for hobbyists like myself. Argo is $5 + $0.10/GB and I’d consider that peanuts. If it’s of any help, I’ve sent maybe 50MB through it since I signed up a week ago (remember, mostly text and a few images).

Lazy-loading Retina Images in a Jekyll Site

Something I’ve wanted to touch on ever since a couple posts ago was lazy loading images. Now that I’m trying to consciously serve 2x images for those with such a pixel density, I’m setting myself up for heavier page loads, and folks may never even make it that far down the page. Since there’s no need to load what one won’t see, I set out to add lazy image loading support to the blog.

This doesn’t really apply specifically to Jekyll; in fact, there’s nothing about this that’s unique to Jekyll. But during my searches, I initially felt I needed to find a Jekyll plugin to solve this problem. Since I had that idea, I’m certain others will, too, so I want to catch as many of those folks as I can and let them know it’s easier than that!

As far as the JavaScript goes, I’m using the vanilla-JavaScript-based lazyload.js. I just need to toss it into the footer (no point in loading it before everything else, which technically means we’re lazy-loading the lazy-loader):

  (function(w, d){
    var b = d.getElementsByTagName('body')[0];
    var s = d.createElement("script"); s.async = true;
    var v = !("IntersectionObserver" in w) ? "8.7.1" : "10.5.2";
    s.src = "https://cdn.jsdelivr.net/npm/vanilla-lazyload@" + v + "/dist/lazyload.min.js";
    w.lazyLoadOptions = {}; // Your options here. See "recipes" for more information about async.
    b.appendChild(s); // inject the script tag so it actually loads
}(window, document));

What we’re doing here (courtesy of the lazyload.js documentation) is selecting the best version of it based on the browser. Version 8.x plays better with everything and since not all browsers support IntersectionObserver, it’s important to be backwards compatible. Version 10.x will load for Firefox, Chrome and Edge, while 8.x will load for pretty much everything else.

Now that our JavaScript is in play, the only other step is to update the image tags. I have a TextExpander snippet to help me out here, since I’m also inserting retina (@2x) images. It looks like this:

[![](){: data-src="%filltext:name=1x image%" data-srcset="%filltext:name=2x field% 2x" data-proofer-ignore}](%filltext:name=full image%){: data-lightbox="%filltext:name=lightbox tag%"}

This creates four fields: the paths to the 1x small image, the 2x small image, and the full-size image, plus the tag needed by lightbox.js. The finished markdown product looks like this (taken from a previous post):

[![](){: data-src="/assets/images/2018/05/07/gtmetrix-0512-sm.jpg" data-srcset="/assets/images/2018/05/07/gtmetrix-0512-sm@2x.jpg 2x"}](/assets/images/2018/05/07/gtmetrix-0512.jpg){: data-lightbox="image-3"}

Pretty slick, eh? With this in place, some bytes will be saved every day.

You might have noticed the data-proofer-ignore attribute. That’s to keep htmlproofer from tripping over these images because they have no src or srcset attributes defined. It’ll essentially skip them during its check. The downside is if I need to do an audit on images, I’ll have to bulk eliminate data-proofer-ignore at least temporarily.
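When that audit day comes, the bulk removal doesn’t need to be fancy. A sketch in plain shell, assuming GNU sed and a scratch copy of the built site (the file and path below are made up):

```shell
# Create a scratch copy standing in for the generated site.
mkdir -p _site_audit
printf '<img data-src="/assets/a.jpg" data-proofer-ignore>\n' > _site_audit/page.html

# Strip the attribute from every HTML file so htmlproofer checks the images again.
find _site_audit -name '*.html' -exec sed -i 's/ data-proofer-ignore//g' {} +
```

Since the generated output is rebuilt on every run, stripping the attribute from a copy is harmless; the source templates keep it.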

Once this is all said and done, let the better test results come pouring in!

Easy Redirect Links Within Jekyll

I am buttoning up the final touches on my new Jekyll-powered blog. One thing I wanted to attempt without having to spin up yet another app or service was short link redirects.

My goal was to be able to take a short /goto/ URL and have it redirect to a full URL of my choice. Jekyll is trigger-happy about creating a page for everything, so I knew I had to find some way to manage the data long-term.

Forestry offers the ability to manage arbitrary data sets within Jekyll. Since I’m using it, I knew it was going to be a breeze so that’s the path I took.

I first started by creating a new file in the _data folder called shortlinks.yml. Anything I plug into this YAML file becomes accessible through the site.data object. My data file has three fields per entry: title, key, and destination. The title and destination are self-explanatory. The key is the short URL keyword; this is what we’ll use after /goto/ in the path.

Having these fields in mind, my shortlinks.yml would look something like this:

- title: A cool page
  key: coolpage
  destination: https://example.com/cool-page
- title: A mediocre page
  key: mehpage
  destination: https://example.com/meh-page
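
As a side benefit, Jekyll exposes this file as site.data.shortlinks, so any page could render a directory of every short link with plain Liquid. A hypothetical sketch:

```liquid
<ul>
  {% for link in site.data.shortlinks %}
    <li><a href="/goto/{{ link.key }}">{{ link.title }}</a></li>
  {% endfor %}
</ul>
```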

This means I can now access my links as the site.data.shortlinks array by iterating over it. Unfortunately, we still have a bit of a roadblock: since everything about Jekyll is static, I can’t create a dynamic page that would access the data. Well, I could with something like PHP, but I don’t want to. Instead, we’ll have to use the data as a base for a set of pages we’ll create that act as redirects.

This is like how Jekyll Collections work, except we don’t want to create the pages ahead of time, only on jekyll build. This is where data_page_generator.rb comes into play. By placing it in _plugins and feeding it a few settings in _config.yml, we can instruct it to build pages based on shortlinks.yml.

That code would look something like this:

page_gen:
  - data: shortlinks
    template: redirect
    name: key
    dir: goto

Seems easy enough. Let’s break it down. data is the data file we want to use (the plugin makes assumptions about the file type). template tells the plugin which base template to use; what that template looks like is in the next section. name is the key from earlier, which becomes the file name created within the dir directory. In this example, the redirect files land as _site/goto/key.html, provided the base directory is _site.

Now that we have the configuration squared away, we need to create an _includes/redirect.html template. Since we’re doing immediate redirects, it doesn’t need to be fancy, nor does it need to have style. This will do:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta http-equiv="refresh" content="0; url={{ page.destination }}" />
    <script type="text/javascript">
      window.location.href = "{{ page.destination }}";
    </script>
  </head>
  <body>
    Redirecting to {{ page.destination }}. If it doesn't load, click <a href="{{ page.destination }}">here</a>.
  </body>
</html>

data_page_generator.rb will take each object in the data file and pump the values into this template. In our example, we’re only interested in destination.

With our template set, there’s one last thing we need to do.

You might need to make a tweak to convince your web server that /goto/something and /goto/something.html are alike and should not return a 404 (dependent on your web server configuration). In my case with Nginx, all I had to do was swap:

location / {
    try_files $uri $uri/ =404;
}

with:

location / {
    try_files $uri $uri.html $uri/ =404;
}

For those following along at home, all I added was $uri.html. We’re instructing Nginx to take the presented URI–in the case of /goto/something, the URI is something–and try it by itself, then as something.html (this is what we’ve added), and finally as something/ before giving up and looking for a 404 page. The first match wins and gets rendered.

Now, we can push everything and give it a go. In my real-world example, I have a couple links already set up as I write this. My new favorite air purifier/filtration system is Molekule so clicking that link will take you straight there.

I’m uncertain about this solution’s long-term scalability. It will be fine on the scale of a few hundred links, as Jekyll can churn through a few hundred pages in a matter of seconds (especially when paired with Ruby >= 2.5.x). How well this works long-term will come down to a couple things:

  • The effort required to manage the data file
  • The tools in place to automate the building of the Jekyll site

For the latter, I use Circle CI so I’m fine with it taking a handful of seconds or even a half-minute longer to update. For the former, I use Forestry. I haven’t pushed it to the point where it has several hundred items in the shortlinks.yml data file. The limits of its capability are unknown in this regard.

My alternative plans were to move to (using a short domain of some sort) or set up Polr on the server. I’ll have to spend some time thinking about how I can track clickthroughs; as I wrapped this up, I pondered plopping Google Analytics on the redirect page, which would let me measure movements into /goto pages as clicks.

I’m happy with how this turned out. Like all things I do, there’ll be persistent tweaking involved.

An Auto-deploying Static Site with Backend

I have some nostalgia for static sites. I first learned to write any kind of code with HTML on a Packard Bell 486. I was hooked.

Fast forward a couple years and I was creating HTML sites in Allaire HomeSite (which became Adobe HomeSite and eventually Dreamweaver… R.I.P. HomeSite). It wasn’t long before I discovered CMSes–PostNuke, PHPNuke, TextPattern, MovableType, WordPress–I loved them all. Over the last half-decade though, WordPress was where I settled. The thought of going back to a static site was intriguing and I even tried it once, but I just couldn’t get it to stick.

Fast forward to now, where I’ve thrown WordPress to the side for a static option. I wrote this post to discuss my motivations behind it, especially after talking about–at length–why I left for WordPress.

Moving Parts

Beyond all else, my primary motivator was complexity. As much as I’d like to think this would change, I don’t blog often, nor do I have a highly complex blog that requires a lot of input from multiple people. Naturally, one might think about going to a hosted solution like or Squarespace. Both were options I toyed with in the past and turned down as serious options. costs more money than I think it’s worth, and Squarespace isn’t blog-centric. It’s a great product and you can build awesome sites that have blogs as a component, but it just doesn’t have that “this is your place to share your thoughts” vibe I’m looking for. Beyond that, I have almost 300 posts (it used to be close to 340, but I purged a bunch that were so outdated and broken they weren’t worth saving) and I didn’t feel like that much imported content is something Squarespace could handle well.

This led me to self-host WordPress. I enjoy setting up and maintaining servers as much as the next person, but my bill was higher than it was worth. I had deployed two DigitalOcean droplets, one for the database and one for the web server. At $20 each (4 GB RAM, 80 GB SSD), it was getting pretty pricey. I could have scaled down and even put both functions into one server, but even then I wouldn’t have felt comfortable spending any less than $20.

I would vouch for DigitalOcean all day long, though. They’re great people to work with–I know at least one of them personally–and their UI is perfect for my needs.

I had to really think about my plan. If I was going to go back to static, I needed to pick a tool that I wouldn’t end up hating six months from now. My answer was actually several tools.

The Toolkit

I came up with three pieces to complete my static site solution:

  • Jekyll: it’s well-known and something I’m very familiar with.
  • Circle CI: makes sure what I’m pushing is legit and lets me auto-deploy (no more git hooks).
  • Forestry: I like editors as much as the next guy, but if I have something I want to write about, taking the extra steps to push it up to GitHub isn’t going to cut it for me.


Jekyll seemed like a no-brainer. Hugo was out of the question because I didn’t want to learn something new, and the others just weren’t full-featured enough. With my passable Ruby and Liquid knowledge, I could bend Jekyll to do what I want without wanting to give up and throw it all away.

Circle CI

Circle CI came late in the game. I was originally going to use Netlify to deploy and host my site, but I didn’t like how little control I had. Yes, it’s free as in beer, but I didn’t have any kind of insight into performance or overall control. Beyond that, deployment took longer than I wanted, primarily because it spends time at the final stages checking the HTML for forms it needs to process and understand. Given that I have over 300 pages–probably more than 400, actually–that was a non-starter. I’d probably use Netlify at work, but not for this project.

By having a CI tool at the ready, I could have it build the Jekyll-based site for me, check the HTML for obvious problems, then push it to my server (in fact, that’s what it does, minus the HTML-checking part; that’s not ready yet).

Forestry

Forestry is a tool I’d never heard of until a few weeks ago. It syncs with the GitHub repo that stores my site and allows me to create posts and pages, manipulate front matter templates, and do some basic media uploading. It’s not perfect, and on rare occasions it trips GitHub’s API rate limiting, but it’s been pretty solid overall. Support is good, and the free tier does exactly what I need it to, nothing more.
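For context, the front matter templates a tool like this manages are just the YAML blocks at the top of each Jekyll post. This is a generic sketch, not my actual template; every field value here is a placeholder:

```yaml
---
layout: post
title: "Post title goes here"
date: 2017-01-01 00:00:00 -0800
categories: blogging
---
```

The editor fills in these fields through a form, so I never have to touch the raw file to publish.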


With these three tools in play, I have what amounts to a static site-CMS hybrid. When I click Publish on this post, Forestry will push it into my GitHub repo. From there, Circle CI will trigger when it sees a change and run my build script. The last step of that script is to rsync the built HTML site over to my server.
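A minimal CircleCI 2.0 config for that pipeline might look something like this. The Ruby image tag, server hostname, and paths are assumptions, and the HTML-checking step is shown commented out since that part isn’t ready:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/ruby:2.4   # assumed image; match your Jekyll Gemfile
    steps:
      - checkout
      - run: bundle install --path vendor/bundle
      - run: bundle exec jekyll build
      # - run: bundle exec htmlproofer ./_site   # HTML checking, once it's ready
      - deploy:
          # Push the built site to the web server; host and path are placeholders.
          command: rsync -az --delete _site/ deploy@example.com:/var/www/html/
```

Every push to the repo (including the ones Forestry makes) kicks off this job, so publishing and deploying become the same action.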

On my server, I have Nginx serving the pages with some effective caching, and it’s smooth. By excising WordPress and its PHP+MySQL clusterfest, I’ve also been able to improve load times, both time-to-first-byte and full-page load.
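The caching side of an Nginx setup like that can be as simple as long expiry headers on static assets. This is an illustrative sketch; the hostname, root path, and durations are placeholders, not my actual config:

```nginx
server {
    listen 80;
    server_name example.com;          # placeholder hostname
    root /var/www/html;               # where rsync drops the built site

    # Static assets rarely change between deploys; cache them aggressively.
    location ~* \.(css|js|png|jpg|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public";
    }

    # HTML changes on every deploy, so keep its cache short.
    location / {
        expires 10m;
        try_files $uri $uri/ =404;
    }
}
```

With no PHP in the path, every request is just Nginx reading a file off disk, which is where the first-byte improvement comes from.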

My PageSpeed and YSlow scores are better than they used to be when running WordPress. There’s room for improvement but overall, it’s turning out pretty well when I consider the fact that I haven’t gone out of my way to make them great.

One thing I really need to work on, though, is my Lighthouse performance. Lighthouse is what I’d call the successor to PageSpeed. It’s really tough on sites, and my performance as it measures it is dismal. Some of that might be the extensions Chrome is running (removing them speeds things up a bit), but I definitely need to front-load my critical CSS and defer the rest.
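One common way to do that front-loading is to inline the critical styles in the head and load the full stylesheet asynchronously (the loadCSS pattern). The styles and file path here are placeholders:

```html
<head>
  <!-- Critical above-the-fold rules inlined directly. -->
  <style>
    body { margin: 0; font-family: sans-serif; }
    .site-header { background: #fff; }
  </style>
  <!-- Load the full stylesheet without blocking the first render. -->
  <link rel="preload" href="/css/main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```

The page can paint with just the inlined rules while the rest of the CSS arrives in the background, which is exactly what Lighthouse’s render-blocking audit looks for.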

Overall, though, I’m happy with how things are operating. Forestry is fast, the whole thing can be done for $5, and I can feel good knowing that my site can’t be taken over thanks to a vulnerability in WordPress.

Johnathan Lyman
Kenmore, WA, United States