Technical Blogging: Where Should I Be Writing?

I’ve had this blog since 2007, when I had a bright idea to make a small mark on the internet. Back then, there weren’t as many platforms where you could easily contribute your technical thoughts in writing as there are now. All you really had were forums and a handful of other sites, the likes of 4GuysFromRolla.com, C# Corner and LearnASP.com (to name a few that come to mind). I could be wrong about the accuracy of this opening statement, as 2007 was a long time ago - back in much simpler times when I was a Junior Web Developer.

We have now come to a point where we’re spoilt for choice. There are multiple mediums that give you the freedom to publish your own technical writing in a more public and accessible way, the main ones being:

  • Medium
  • Dev.to
  • Hashnode.com
  • LinkedIn Articles

At present, I post some of my writing to Medium, whether that is a clone of my own blog content or new material written specifically for the platform. However, I’m now rethinking where I should be publishing my content, as I’m seeing more of my fellow developers on Twitter posting to dev.to where they previously used Medium.

I really like dev.to. I found its approach to curating content for the developer community refreshing, which makes for very addictive reading, and you can really see the passion in the writers. Should I start cross-posting there as well for more exposure? How does this affect things from an SEO standpoint, where I have the same post on my blog as well as Medium and dev.to? All I know is that anything I cross-post from my blog to Medium gets ranked higher in Google search results, which is to be expected.

If I’m being honest with myself, I like posting content where I’m another small cog in a wider community, as there is a higher chance of like-minded individuals finding your content and getting involved by commenting and imparting their knowledge on your written piece. You can’t help but feel rewarded when your article gets a like, clap or comment, which in return makes you want to do the same for other contributors. This doesn’t really happen on a personal website.

When you are posting on another platform, you don’t have to worry about technical issues or hosting. The only thing you need to do is write! But you have to remember that you’re writing on a platform that is not your own, and any future changes will be out of your control.

As great as these other writing platforms are, you are restricted in really seeing a developer’s personality, which is something that speaks volumes when viewing their personal website. They present their content in their own unique way and, most importantly, write freely about things that may not otherwise be within the parameters of a third-party platform.

Final Thoughts

As I’ve noted a shift in the number of technical posts being published to dev.to, I will more than likely do the same and cross-post any relevant content from my personal site. You can’t help but feel it’s the best place to get exposure for programming-related content. Having said this, I still feel there is space for me to also cross-post to Medium. But what I won’t do is cross-post the same content to both - that feels counter-intuitive. Use the platform that has the highest chance of reaching the right readers based on the subject matter in hand.

I don’t think I could ever stop writing within my own site, as I like the freedom of expression - no strings attached. I can write about whatever I want, and if there happens to be a post I’d like to also publish to the likes of either Medium or dev.to, I have the flexibility to do that as well.

Journey To GatsbyJS: We Are Live

If you’re seeing this post, it means I have fully made the transition to a static-generated website architecture using GatsbyJS. I started this process late last December and then began taking it seriously in the new year. It’s been a learning process getting to grips with a new framework, as well as a big jump for me and my site.

Why has it been a big jump?

Everything is static. I have downsized my website footprint considerably. All 250+ blog posts have been migrated into markdown files, so from now on I will be writing in markdown and (with the help of Netlify) pushing new content with a simple git commit. Until now, I have always built websites on server-side frameworks that stored all my posts in a database. It’s quite scary moving to a framework that feels quite unnatural to how I would normally build sites, and the word “static”, when used in relation to a website, reminds me of a bygone era.

Process of Moving To Netlify

I was pleasantly surprised by how easy the transition to Netlify was. There is a vast amount of resources available that make for good reading before making the switch to live. After linking my website’s Bitbucket repository to a new Netlify site, the only things left to do to make it live were the following:

  • Upload a _redirects file, listing out any redirects you require Netlify to handle. For GatsbyJS sites, this will need to be added to the /static directory.
  • Set up environment variables to allow the application to easily switch between development and production states. For example, my robots.txt is set to be indexable only when in production mode.
  • Add a CNAME record to your existing domain that points to your Netlify domain. For example, surindersite.netlify.com.
  • Issue a free Let’s Encrypt SSL certificate, which is easily done within the account Domain settings.
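The environment-variable step above can be sketched as follows (a hypothetical helper - the function name and shape are my own, not a Netlify or Gatsby API):

```javascript
// Hypothetical helper: derive a robots policy from the build environment.
// Gatsby sets NODE_ENV to "production" during a production build.
function robotsPolicy(env) {
  return env === "production"
    ? [{ userAgent: "*", allow: "/" }]     // Production: allow indexing.
    : [{ userAgent: "*", disallow: "/" }]; // Everything else: block crawlers.
}

console.log(robotsPolicy(process.env.NODE_ENV || "development"));
```

The policy object could then be handed to whatever generates your robots.txt at build time.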

After going live, the only thing that stumped me was that the Netlify domain didn’t automatically redirect to my custom domain. This is something I thought Netlify would handle automatically once the domain records were updated. To get around this, an explicit 301 domain redirect needs to be added to your _redirects file.

# Domain Redirect
https://surinderbhomra.netlify.com/*     https://www.surinderbhomra.com/:splat    301!

New Publishing Process

Before making the switchover, I had to carry out some practice runs on how I would be updating my website just to be sure I could live with the new way of adding content. The process is now the following:

  1. Use “content/posts” branch to add a new blog post.
  2. Create a new .md file that consists of the date and slug. In my case, all my markdown files are named "2010-04-02---My-New-Post.md".
  3. Ensure all categories and tags in the markdown frontmatter are named correctly. This is an important step to ensure no unnecessary new categories or tags are created.
  4. Add any images used in the post to the site. The images should reference Imagekit.io.
  5. Check over the post locally.
  6. Push to master branch and let Netlify carry out the rest.

Out of all the steps, I have only found steps 3 and 4 to require a little more effort compared to using a CMS platform, as previously I could select from a predefined list of categories and upload images directly. Not a deal-breaker.
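The filename convention from step 2 can be expressed as a small helper (a hypothetical sketch - not part of my actual publishing workflow):

```javascript
// Hypothetical helper: build a markdown filename from a post date and title,
// matching the "yyyy-MM-dd---Post-Title.md" convention described above.
function postFilename(dateIso, title) {
  const slug = title.trim().replace(/\s+/g, "-"); // Collapse whitespace into hyphens.
  return `${dateIso}---${slug}.md`;
}

console.log(postFilename("2010-04-02", "My New Post")); // 2010-04-02---My-New-Post.md
```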

Next Steps

I had a tight deadline to ensure I made the move to Netlify before my current hosting renewed for another year, and I still have quite a bit of improvement to make. Have you seen my Google Lighthouse score!?! It’s shockingly bad due to using the same HTML markup and CSS from my old site. I focused my efforts on cramming in all the functionality to mimic how my site used to work and on keeping Netlify build times low.

First thing on the list - rebuild website templates from the ground up.

Kentico: Exposing The SQL Generated By DocumentHelper API

Yesterday, I was frantically trying to debug why some documents weren’t getting returned when using the DocumentHelper.GetDocuments() method. Normally when this happens, I need to delve deeper and see what SQL Kentico is generating via the API in order to get a little more information on where the querying could be going wrong. To do this, I perform a little “hacky” approach (purely for debugging) whereby I break the SQL within the API by inserting a random character within the OrderBy or Where condition parameters.

Voila! I can then see the SQL in the error page.

But it was only yesterday that I was shown a much more elegant solution: simply add a GetFullQueryText() call to your GetDocuments() query, which returns the SQL with all the parameters populated for you:

string debugQuery = DocumentHelper.GetDocuments()
                                  .OnSite(SiteContext.CurrentSiteName)
                                  .Types(DocumentTypeHelper.GetClassNames(TreeProvider.ALL_CLASSNAMES))
                                  .Path("/", PathTypeEnum.Children)
                                  .Culture(LocalizationContext.PreferredCultureCode)
                                  .OrderBy("NodeLevel ASC", "NodeOrder ASC")
                                  .GetFullQueryText();

I can’t believe I did not know this after so many years working on Kentico! How embarrassing...

Lazyload and Responsively Serve External Images In GatsbyJs

For the Gatsby version of my website, currently in development, I am serving all my images from Imagekit.io - a global image CDN. The reason for doing this is to have ultimate flexibility in how images are used within my site, which didn’t necessarily fit with what Gatsby has to offer, especially when it came to how I wanted to position images within blog post content served from markdown files.

As I understand it, Gatsby Image has two methods of responsively resizing images:

  1. Fixed: Images that have a fixed width and height.
  2. Fluid: Images that stretch across a fluid container.

In my blog posts, I like to align my images (just take a look at my post about my time in the Maldives) as it helps break the post up a bit. I wouldn’t be able to achieve that look with the options provided in Gatsby - it would all look a little too stacked. The only option is to serve my images from Imagekit.io, which in the grand scheme of things isn’t a bad idea. I get the benefit of being able to transform images on the fly, optimisation (that can be customised through the Imagekit.io dashboard) and fast delivery through its content delivery network.

To meet my image requirements, I decided to develop a custom responsive image component that will perform the following:

  • Lazyload image when visible in viewport.
  • Ability to parse an array of "srcset" sizes.
  • Set a default image width.
  • Render the image on page load in low resolution.

React Visibility Sensor

The component requires the "react-visibility-sensor" plugin to mimic lazy-loading functionality. The plugin notifies you when a component enters and exits the viewport. In our case, we only want the sensor to run once, when an image first enters the viewport. By default, the sensor fires every time a block enters or exits the viewport, causing our image to constantly alternate between the small and large versions - something we don't want.

Thanks to a useful post by Mark Oskon, who provided a solution that extends the react-visibility-sensor plugin and allows us to turn off the sensor after the first reveal. I ported the code from Mark's solution into a newly created component housed in "/core/visibility-sensor.js", which I then reference in my LazyloadImage component:

import React, { Component } from "react";
import PropTypes from "prop-types";
import VSensor from "react-visibility-sensor";

class VisibilitySensor extends Component {
  state = {
    active: true
  };

  render() {
    const { active } = this.state;
    const { once, children, ...theRest } = this.props;
    return (
      <VSensor
        active={active}
        onChange={isVisible =>
          once &&
          isVisible &&
          this.setState({ active: false })
        }
        {...theRest}
      >
        {({ isVisible }) => children({ isVisible })}
      </VSensor>
    );
  }
}

VisibilitySensor.propTypes = {
  once: PropTypes.bool,
  children: PropTypes.func.isRequired
};

VisibilitySensor.defaultProps = {
  once: false
};

export default VisibilitySensor;

LazyloadImage Component

import PropTypes from "prop-types";
import React, { Component } from "react";
import VisibilitySensor from "../core/visibility-sensor"

class LazyloadImage extends Component {
    render() {
      let srcSetAttributeValue = "";
      let sanitiseImageSrc = this.props.src.replace(/ /g, "%20"); // Encode all spaces, not just the first.

      // Iterate through the array of values from the "srcsetSizes" array property.
      if (this.props.srcsetSizes !== undefined && this.props.srcsetSizes.length > 0) {
        for (let i = 0; i < this.props.srcsetSizes.length; i++) {
          srcSetAttributeValue += `${sanitiseImageSrc}?tr=w-${this.props.srcsetSizes[i].imageWidth} ${this.props.srcsetSizes[i].viewPortWidth}w`;

          if (this.props.srcsetSizes.length - 1 !== i) {
            srcSetAttributeValue += ", ";
          }
        }
      }

      return (
          <VisibilitySensor key={sanitiseImageSrc} delayedCall={true} partialVisibility={true} once>
            {({isVisible}) =>
            <>
              {isVisible ? 
                <img src={`${sanitiseImageSrc}?tr=w-${this.props.widthPx}`} 
                      alt={this.props.alt}
                      sizes={this.props.sizes}
                      srcSet={srcSetAttributeValue} /> : 
                <img src={`${sanitiseImageSrc}?tr=w-${this.props.defaultWidthPx}`} 
                      alt={this.props.alt} />}
              </>
            }
          </VisibilitySensor>
      )
    }
}

LazyloadImage.propTypes = {
  alt: PropTypes.string,
  defaultWidthPx: PropTypes.number,
  sizes: PropTypes.string,
  src: PropTypes.string,
  srcsetSizes: PropTypes.arrayOf(
    PropTypes.shape({
      imageWidth: PropTypes.number,
      viewPortWidth: PropTypes.number
    })
  ),
  widthPx: PropTypes.number
}

LazyloadImage.defaultProps = {
  alt: ``,
  defaultWidthPx: 50,
  sizes: `50vw`,
  src: ``,
  widthPx: 50
}

export default LazyloadImage

Component In Use

The example below shows the LazyloadImage component used to serve a logo at three different widths - 400, 300 and 200 pixels - depending on viewport size.

<LazyloadImage 
                src="https://ik.imagekit.io/surinderbhomra/Pages/logo-me.jpg" 
                widthPx={400} 
                srcsetSizes={[{ imageWidth: 400, viewPortWidth: 992 }, { imageWidth: 300, viewPortWidth: 768 }, { imageWidth: 200, viewPortWidth: 500 }]}
                alt="Surinder Logo" />
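To make the srcset logic easier to follow in isolation, the loop inside the component can be expressed as a standalone function (an equivalent sketch, not the component's exact code):

```javascript
// Standalone sketch of the srcset string the LazyloadImage component builds.
function buildSrcSet(src, srcsetSizes) {
  const sanitised = src.replace(/ /g, "%20"); // Encode spaces in the image path.
  return srcsetSizes
    .map(size => `${sanitised}?tr=w-${size.imageWidth} ${size.viewPortWidth}w`)
    .join(", ");
}

console.log(buildSrcSet("https://ik.imagekit.io/surinderbhomra/Pages/logo-me.jpg",
  [{ imageWidth: 400, viewPortWidth: 992 }, { imageWidth: 300, viewPortWidth: 768 }]));
```

For the logo example above, this yields one "?tr=w-..." Imagekit.io transformation URL per declared viewport width.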

Useful Links

  • https://alligator.io/react/components-viewport-react-visibility-sensor/
  • https://imagekit.io/blog/lazy-loading-images-complete-guide/
  • https://www.sitepoint.com/how-to-build-responsive-images-with-srcset/

Journey To GatsbyJS: Beta Site Release v2

It’s taken me a little longer to make more progress, as I’ve been stumped on how I would go about listing blog posts filtered by year and/or month. I’ve put extra effort into ensuring the full date is included in the URL for all my blog posts. In the process of doing this, I had to review and refactor the functions used within gatsby-node.js.

Refactoring

I noticed that I was carrying out build operations inefficiently, and in some cases where they didn’t need to happen at all. For example, I was building individual blog post pages in places where I only needed to list blog posts. Reviewing my build operations had a positive impact and reduced build times on Netlify from 2 minutes 17 seconds to 2 minutes 3 seconds. Where you are able to make build-time savings, why wouldn’t you? By being efficient, you can squeeze more builds into Netlify’s 300-minute monthly limit (on the free tier).

Page Speed Tests

The GatsbyJS build is at a point where I can start carrying out some performance tests using Google Page Insights and Lighthouse. Overall, the tests have proved more favourable when compared against my current site. The Lighthouse analysis shows there is still work to be done; however, the static-site generator architecture sets you off to a good start with minimal effort.

Google Lighthouse Stats - Current Site

Google Lighthouse Stats - Gatsby Site

Current HTML/CSS Quality

I can see the main area of failure is the HTML and CSS build... not my strong suit. The template has inherited performance-lag remnants from my current site and even though I have cleaned it up as well as I can, it’s not ideal. At this moment, I have to focus on function over form.

Site Release Details

This version contains the following:

  • Blog post filtering by year and/or month. For example:
    • /Blog/2019
    • /Blog/2019/12
  • Refactored build functions.
  • Removed unneeded CSS from the old template (still got more to do).

GatsbyJS Beta Site: http://surinderbhomra.netlify.com

GatsbyJS: Automatically Include Date In Blog Post Slug

There will be times where you will want to customise the slug based on fields from your markdown file. In my case, I wanted all my blog post URLs in the following format: /Blog/yyyy/MM/dd/Blog-Post-Title. There are two ways of doing this:

  1. Enter the full slug using a “slug” field within your markdown file.
  2. Use the onCreateNode() function found in the gatsby-node.js file to dynamically generate the slug.

My preference is option 2, as it gives us the flexibility to modify the slug structure in one place when required. If for some reason we had to update our slug structure at a later date, it would be very time-consuming (depending on how many markdown files you have) to update the slug field within each markdown file under option 1.

This post is suited to those who store their content in markdown files. I don’t think you will get much benefit if your Gatsby site is linked to a headless CMS, as the slugs are automatically generated within the platform.

The onCreateNode() Function

This function is called whenever a node is created or updated, which makes it the ideal place to add the functionality we want to perform. It is found in the gatsby-node.js file.

What we need to do is retrieve the fields we would like to form part of our slug by accessing the node's frontmatter. In our case, all we require is two fields:

  1. Post Date
  2. Slug

const moment = require(`moment`); // Required at the top of gatsby-node.js.

exports.onCreateNode = ({ node, actions }) => {
  const { createNodeField } = actions;

  if (node.internal.type === `MarkdownRemark`) {
    const postDate = moment(node.frontmatter.date); // Use moment.js to easily change date format.
    const url = `/Blog/${postDate.format("YYYY/MM/DD")}${node.frontmatter.slug}`;

    createNodeField({
      name: `slug`,
      node,
      value: url,
    });
  }
};

After making this change, you will need to re-run the gatsby develop command.
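As a sanity check on the URL format, the same slug can be reproduced without moment.js (a standalone, hypothetical sketch - not part of the actual gatsby-node.js code):

```javascript
// Hypothetical moment-free check of the slug format produced by onCreateNode.
function blogSlug(dateIso, slug) {
  const d = new Date(dateIso);
  const pad = n => String(n).padStart(2, "0"); // Zero-pad month and day.
  return `/Blog/${d.getUTCFullYear()}/${pad(d.getUTCMonth() + 1)}/${pad(d.getUTCDate())}${slug}`;
}

console.log(blogSlug("2019-09-21T14:51:37Z", "/Maldives-and-Vilamendhoo-Island-Resort"));
// /Blog/2019/09/21/Maldives-and-Vilamendhoo-Island-Resort
```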

Journey To GatsbyJS: Beta Site Release v1

I am surprised at just how much progress I have made in replicating my site using the GatsbyJS framework. I have spent roughly 10-12 days (not full days) getting up to speed on everything GatsbyJS and transitioning what I have learnt over to the GatsbyJS version of my site.

Initially, my progress was slow as I had to get my head around GraphQL, the process of how static pages are generated in the hierarchy I require and export my existing blog content to markdown. Having previous experience in React has definitely helped in making relatively swift progress.

What I would say to new GatsbyJS developers is to use the Gatsby Starter Default package if you really want to understand Gatsby in its entirety. The package gives you enough functionality to understand what’s going on so you can easily make your own customisations. Using other fully functional starter packages caused confusion and left me asking more questions when attempting to make changes. Trust me, it’s not wise to get too ahead of yourself (as admirable as that might be) in the early stages. Start simple and work your way up!

The interesting thing I noticed whilst working with GatsbyJS is that whenever I think I’m stumped from a functionality point of view, I find there is a plugin that does exactly what I require. GatsbyJS offers an array of quality plugins. For example, I had issues ordering my "preconnect" declarations within the <head> block so they resided before any styles or scripts. It seemed GatsbyJS has its own way of ordering <head> elements. Thankfully, as always, there’s a plugin on hand to cure my woes.

Site Release Details

As of today, I have released the first version of my GatsbyJS site to Netlify. It’s by no means perfect and will be a work-in-progress for many iterations to come.

This version contains the following:

  • Implemented styling from the current site. Still rough around the edges and in no way efficiently done.
  • All images are hosted via Imagekit.io to be served efficiently via CDN with responsive capability.
  • Added custom routing for blog posts to include the date. For example, “/Blog/2020/01/01/My-Blog-Post”.
  • Posts can be filtered by Category (unstyled).
  • Posts Archive page (unstyled)
  • Implemented pagination for blog listing.
  • Added the following plugins:

My first publish to Netlify completed in 2 minutes 17 seconds. From an efficiency standpoint, I don’t know if this is good or bad. For me, 2 minutes seems like a long time. I wonder if it could be due to the 250+ markdown files I’m using for my blog posts and the multiple filtering routes. It’s also worth noting that I’m going completely static by not relying on any content management platform.

GatsbyJS Beta Site: http://surinderbhomra.netlify.com

Year In Review - 2019

I am glad to report that this year was a year of new learning - not just from a technical standpoint, but from a personal one. I feel I started the year with a single-track mindset. However, as the year progressed, I became open to new ways of thinking and finally accepted that even though certain personal milestones I set for myself may not have been accomplished, I am content with the lessons learnt from failure. Failure may suck, but it’s progression! It also gives me something to write about. :-)

2019 in Words/Phrases

Kentico 12 MVC, Umbraco, GatsbyJS, Azure DevOps, Maldives, Hiking, Drupal (yes, I had to do that along with a bit of PHP), Cloudflare CDN Configuration, Google Lighthouse score, Headless CMS - strategic asset, Prismic, Netlify, Kontent, CaaS (Content-as-a-Service), Automated backups for personal hosting, iPad for improved productivity, A2 Hosting Issues, Writer's block, New desk and office, Failing MacBook Pro battery, Considering an iPhone 11, Fondness of Port.

Site Offline and Lessons Learnt

I was welcomed with the first bit of failure in April where my website was taken offline (along with many others) for a lengthy period thanks to my previous hosting provider, A2 Hosting. They had no backups, no disaster recovery and no customer support. Their whole operation is a disaster.

Failure = Lesson learnt.

The only benefit of this experience was that it led me through a chain of events to reassess how I host my site and to realise just how important my online presence is to me. Luckily, I was able to get back up and running by moving hosting provider. Thank god I had a recent enough backup to do this.

Popular Posts of The Year

This year I have written 26 posts (including this one). I've delved into my Google Analytics and picked a handful of gems where I believe I have, in my own small way, made an impact:

I think my crowning glory is Google classing my post about “Switching Branches in TortoiseGit" as a featured snippet. So if anyone searches for something along those terms, they will see the following as a top result. I don't know how long this will last, but I'll take it!

Google Featured Snippet - Switch Branches In TortoiseGit

Statistics

My site statistics have increased considerably, which has been amazing. However, I have to remain realistic and grounded in what to expect in future comparisons. I think the figures may plateau over the next year.

The stats posted below are based on organic searches; I haven’t actively posted links on my social media. Maybe this is something I should get back into doing for further exposure.

2018/2019 Comparison:

  • Users: +50%
  • Page Views: +47%
  • New Users: +48%
  • Bounce Rate: +0.8%
  • Search Console Total Clicks: +251%
  • Search Console Impressions: +280%
  • Search Console Page Position: -15%

Syndicut

I am so close to hitting an all-time milestone for length of service compared to any company I’ve worked at previously. In fact, I have already surpassed my previous record three-fold. As of next July, it will be 10 years! Wowsers!

I can see the coming year will be a time to reassess how we approach our technical projects to accommodate new markets, technologies and frameworks. It’s always an exciting time to be a developer at Syndicut, but I am looking forward to sinking my teeth into new challenges ahead!

Greater Emphasis on CaaS (Content-as-a-Service)

Over the last year, I have noticed a shift in how content is managed. Even though I have been busy working away on headless CMSs at Syndicut over the last few years, this seems to be the year where the approach has properly gained global traction and market awareness. You can tell just by the number of events for both developers and clients.

Through this exposure, clients are becoming technically savvy and questioning how and where their data is housed. Content is a strategic asset that should no longer be siloed, but distributed across multiple mediums, for example:

  • Website
  • Mobile Applications
  • Digital Billboards

The key to a successful headless CMS integration is not the development of an application, but the content modelling. Based on what I have seen from other implementations, sufficient content modelling always seems to be missed. Data architecture is key to ensuring data is scalable across all mediums.

I am also a Kontent (previously known as Kentico Cloud) Certified Developer.

iPad and Now iPhone???

This subject matter truly alarms me.

I’ve been considering getting an iPhone 11 after Google released the dismal spec of the Pixel 4 and, on top of that, finding that I am really happy with my iPad Air purchase. This is coming from an Android fan!

I have no regrets about getting an iPad, especially when combined with a keyboard and Apple Pencil. It makes you a productivity powerhouse! We live in a world where quality Android tablets with sufficient accessories are difficult to find.

If I can get my head around being locked into the Apple ecosystem, I might make the move. Why oh why is Google putting me in such a position? :-S

I guess we’ll have to wait till I write my “Year In Review - 2020” on what I ended up doing.

Coffee Tables and Desk Purchased!

In my last year in review, I jokingly added a footnote stating I needed to get a coffee table set and desk. I can mark a massive tick against these two items for a job well done. In fact, I went a step further than purchasing a desk by converting part of a room into a small office with the following additions:

  • Ikea desk chair
  • Corner shelves
  • An array of potted plants
  • Laptop stand
  • Very cool desk lamp
  • Nice grey rug with some pleasant subtle abstract patterns

It’s now a perfect place where I can work and write without any distractions. The room still requires some finishing touches - in my case, it’s always the small jobs that take the longest!

I was surprised at how productive I’ve been by finally having a small office setup. Gone are the days where I would be reclined on my sofa in front of the TV working away on my laptop.

Redeveloping My Site

It seems like I can’t go through a year without looking into redeveloping my site. It’s the curse of being exposed to new technologies and platforms. I like to ensure I am moving with the times too.

I have been considering ditching Kentico as my content management platform and opting for the static-generator route with something like Gatsby, resulting in simplified, platform-agnostic hosting and site architecture, with the added benefit of portability. I am in the middle of replicating my site’s functionality using Gatsby to see if it’s a feasible option.

I will be posting links to my “in progress” site hosted on Netlify in my “Journey To GatsbyJs” Series, where I will be writing about things I’ve learnt trying to replicate my site functionality.

Journey To GatsbyJS: Exporting Kentico Blog Posts To Markdown Files

The first thing that came into my head when testing the waters before starting the move to Gatsby was my blog post content. If I could get my content into a form a Gatsby site accepts, that's half the battle won right there - the theory being it will simplify the build process.

I opted to go down the local storage route, where Gatsby serves markdown files for my blog post content. Everything else, such as the homepage, archive, about and contact pages, can be static. I am hoping this isn’t something I will live to regret, but I like the idea of my content being nicely preserved in source control, where I have full ownership without relying on a third-party platform.

My site is currently built on the .NET framework using Kentico CMS. Exporting data is relatively straightforward, but as I transition to an approach with far less content management, I need to ensure all fields used within my blog posts are transformed appropriately into the core building blocks of my markdown files.

A markdown file can carry additional field information about my post that can be declared at the start of the file, wrapped by triple dashes at the start and end of the block. This is called frontmatter.

Here is a snippet of one of my blog posts exported to a markdown file:

---
title: "Maldives and Vilamendhoo Island Resort"
summary: "At Vilamendhoo Island Resort you are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it."
date: "2019-09-21T14:51:37Z"
draft: false
slug: "/Maldives-and-Vilamendhoo-Island-Resort"
disqusId: "b08afeae-a825-446f-b448-8a9cae16f37a"
teaserImage: "/media/Blog/Travel/VilamendhooSunset.jpg"
socialImage: "/media/Blog/Travel/VilamendhooShoreline.jpg"
categories: ["Surinder's Log"]
tags: ["holiday", "maldives"]
---

Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!) which seem to start when I went to [Bali last year](/Blog/2018/07/06/My-Time-At-Melia-Bali-Hotel). 
I find it's a way to pass the time in airports and flights when making the return journey home. So here's another one...

Everything looks well structured, and from the way I have formatted the date, category and tags fields, it lends itself to be quite accommodating for the needs of future posts. I made the decision to keep the slug value free of any directory structure to give me the flexibility to dynamically create a URL structure.

Kentico Blog Posts to Markdown Exporter

The quickest way to get the content out was to create a console app to carry out the following:

  1. Loop through all blog posts in post date descending.
  2. Update all images paths used as a teaser and within the content.
  3. Convert rich text into markdown.
  4. Construct frontmatter key-value fields.
  5. Output to a text file in the following naming convention: “yyyy-MM-dd---Post-Title.md”.

Tasks 2 and 3 will require the most effort…

When I first started using Kentico, all references to images were made directly via the file path; as I got more familiar with Kentico, this changed to using permanent URLs. Using permanent URLs caused the link to an image to change from "/Surinder/media/Surinder/myimage.jpg" to “/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500”. I needed to create additional checks to find these URLs and transform them into a new path.
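The transformation can be sketched as follows (in JavaScript here for brevity - the actual exporter is a C# console app, and the helper below is my own hypothetical sketch, assuming the original filename survives as the last URL segment):

```javascript
// Hypothetical sketch: rewrite a Kentico permanent URL to a flat /media path,
// dropping the GUID segment and any resizing query string.
function rewriteMediaUrl(url) {
  const match = url.match(/^\/getmedia\/[0-9a-f-]+\/([^?]+)/i);
  return match ? `/media/${match[1]}` : url; // Leave non-permanent URLs untouched.
}

console.log(rewriteMediaUrl("/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500"));
// /media/myimage.jpg
```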

Finding a good .NET markdown converter is imperative. Without this, there is a high chance the rich text content would not be translated to a satisfactory standard, resulting in some form of manual intervention to correct it. Combing through 250 posts manually isn’t my idea of fun! :-)

I found the ReverseMarkdown .NET library offered enough options to deal with the rich text to markdown conversion. I could configure the conversion process to pass through any HTML that couldn’t be transformed, thus preserving content.

Code

using CMS.DataEngine;
using CMS.DocumentEngine;
using CMS.Helpers;
using CMS.MediaLibrary;
using Export.BlogPosts.Models;
using ReverseMarkdown;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

namespace Export.BlogPosts
{
    class Program
    {
        public const string SiteName = "SurinderBhomra";
        public const string MarkdownFilesOutputPath = @"C:\Temp\BlogPosts\";
        public const string NewMediaBaseFolder = "/media";
        public const string CloudImageServiceUrl = "https://xxxx.cloudimg.io";

        static void Main(string[] args)
        {
            CMSApplication.Init();

            List<BlogPost> blogPosts = GetBlogPosts();

            if (blogPosts.Any())
            {
                foreach (BlogPost bp in blogPosts)
                {
                    bool isMDFileGenerated = CreateMDFile(bp);

                    Console.WriteLine($"{bp.PostDate:yyyy-MM-dd} - {bp.Title} - {(isMDFileGenerated ? "EXPORTED" : "FAILED")}");
                }

                Console.ReadLine();
            }
        }

        /// <summary>
        /// Retrieve all blog posts from Kentico.
        /// </summary>
        /// <returns></returns>
        private static List<BlogPost> GetBlogPosts()
        {
            List<BlogPost> posts = new List<BlogPost>();

            InfoDataSet<TreeNode> query = DocumentHelper.GetDocuments()
                                               .OnSite(SiteName)
                                               .Types("SurinderBhomra.BlogPost")
                                               .Path("/Blog", PathTypeEnum.Children)
                                               .Culture("en-GB")
                                               .CombineWithDefaultCulture()
                                               .NestingLevel(-1)
                                               .Published()
                                               .OrderBy("BlogPostDate DESC")
                                               .TypedResult;

            if (!DataHelper.DataSourceIsEmpty(query))
            {
                foreach (TreeNode blogPost in query)
                {
                    posts.Add(new BlogPost
                    {
                        Guid = blogPost.NodeGUID.ToString(),
                        Title = blogPost.GetStringValue("BlogPostTitle", string.Empty),
                        Summary = blogPost.GetStringValue("BlogPostSummary", string.Empty),
                        Body = RichTextToMarkdown(blogPost.GetStringValue("BlogPostBody", string.Empty)),
                        PostDate = blogPost.GetDateTimeValue("BlogPostDate", DateTime.MinValue),
                        Slug = blogPost.NodeAlias,
                        DisqusId = blogPost.NodeGUID.ToString(),
                        Categories = blogPost.Categories.DisplayNames.Select(c => c.Value.ToString()).ToList(),
                        Tags = blogPost.DocumentTags.Replace("\"", string.Empty).Split(',').Select(t => t.Trim(' ')).Where(t => !string.IsNullOrEmpty(t)).ToList(),
                        SocialImage = GetMediaFilePath(blogPost.GetStringValue("ShareImageUrl", string.Empty)),
                        TeaserImage = GetMediaFilePath(blogPost.GetStringValue("BlogPostTeaser", string.Empty))
                    });
                }
            }

            return posts;
        }

        /// <summary>
        /// Creates the markdown content based on Blog Post data.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static string GenerateMDContent(BlogPost bp)
        {
            StringBuilder mdBuilder = new StringBuilder();

            #region Post Attributes

            mdBuilder.Append($"---{Environment.NewLine}");
            mdBuilder.Append($"title: \"{bp.Title.Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"summary: \"{HTMLHelper.HTMLDecode(bp.Summary).Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"date: \"{bp.PostDate.ToString("yyyy-MM-ddTHH:mm:ssZ")}\"{Environment.NewLine}");
            mdBuilder.Append($"draft: {bp.IsDraft.ToString().ToLower()}{Environment.NewLine}");
            mdBuilder.Append($"slug: \"/{bp.Slug}\"{Environment.NewLine}");
            mdBuilder.Append($"disqusId: \"{bp.DisqusId}\"{Environment.NewLine}");
            mdBuilder.Append($"teaserImage: \"{bp.TeaserImage}\"{Environment.NewLine}");
            mdBuilder.Append($"socialImage: \"{bp.SocialImage}\"{Environment.NewLine}");

            #region Categories

            if (bp.Categories?.Count > 0)
            {
                CommaDelimitedStringCollection categoriesCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string categoryName in bp.Categories)
                    categoriesCommaDelimited.Add($"\"{categoryName}\"");

                mdBuilder.Append($"categories: [{categoriesCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            #region Tags

            if (bp.Tags?.Count > 0)
            {
                CommaDelimitedStringCollection tagsCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string tagName in bp.Tags)
                    tagsCommaDelimited.Add($"\"{tagName}\"");

                mdBuilder.Append($"tags: [{tagsCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            mdBuilder.Append($"---{Environment.NewLine}{Environment.NewLine}");

            #endregion

            // Add blog post body content.
            mdBuilder.Append(bp.Body);

            return mdBuilder.ToString();
        }

        /// <summary>
        /// Creates files with a .md extension.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static bool CreateMDFile(BlogPost bp)
        {
            string markdownContents = GenerateMDContent(bp);

            if (string.IsNullOrEmpty(markdownContents))
                return false;

            string fileName = $"{bp.PostDate:yyyy-MM-dd}---{bp.Slug}.md";
            File.WriteAllText($@"{MarkdownFilesOutputPath}{fileName}", markdownContents);

            if (File.Exists($@"{MarkdownFilesOutputPath}{fileName}"))
                return true;

            return false;
        }

        /// <summary>
        /// Gets the full relative path of a file based on its Permanent URL ID.
        /// </summary>
        /// <param name="filePath"></param>
        /// <returns></returns>
        private static string GetMediaFilePath(string filePath)
        {
            if (filePath.Contains("getmedia"))
            {
                // Get GUID from file path.
                Match regexFileMatch = Regex.Match(filePath, @"(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}");

                if (regexFileMatch.Success)
                {
                    MediaFileInfo mediaFile = MediaFileInfoProvider.GetMediaFileInfo(Guid.Parse(regexFileMatch.Value), SiteName);

                    if (mediaFile != null)
                        return $"{NewMediaBaseFolder}/{mediaFile.FilePath}";
                }
            }

            // Return the file path and remove the base file path.
            return filePath.Replace("/SurinderBhomra/media/Surinder", NewMediaBaseFolder);
        }

        /// <summary>
        /// Convert parsed rich text value to markdown.
        /// </summary>
        /// <param name="richText"></param>
        /// <returns></returns>
        public static string RichTextToMarkdown(string richText)
        {
            if (!string.IsNullOrEmpty(richText))
            {
                #region Loop through all images and correct the path

                // Clean up tildes.
                richText = richText.Replace("~/", "/");

                #region Transform Image Url's Using Width Parameter

                Regex regexFileUrlWidth = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})(\?width=([0-9]*))", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexFileUrlWidth.Matches(richText))
                {
                    string width = fileUrl.Groups[4].Success ? fileUrl.Groups[4].Value : string.Empty;
                    string newMediaUrl = $"{CloudImageServiceUrl}/width/{width}/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #region Transform Generic File Url's

                Regex regexGenericFileUrl = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexGenericFileUrl.Matches(richText))
                {
                    // Construct media URL required by image hosting company - CloudImage. 
                    string newMediaUrl = $"{CloudImageServiceUrl}/cdno/n/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #endregion

                Config config = new Config
                {
                    UnknownTags = Config.UnknownTagsOption.PassThrough, // Include the unknown tag completely in the result (default as well)
                    GithubFlavored = true, // generate GitHub flavoured markdown, supported for BR, PRE and table tags
                    RemoveComments = true, // will ignore all comments
                    SmartHrefHandling = true // remove markdown output for links where appropriate
                };

                Converter markdownConverter = new Converter(config);

                return markdownConverter.Convert(richText).Replace(@"[!\", @"[!").Replace(@"\]", @"]");
            }

            return string.Empty;
        }

        /// <summary>
        /// Returns media url without query string values.
        /// </summary>
        /// <param name="mediaUrl"></param>
        /// <returns></returns>
        private static string ClearQueryStrings(string mediaUrl)
        {
            if (mediaUrl == null)
                return string.Empty;

            if (mediaUrl.Contains("?"))
                mediaUrl = mediaUrl.Split('?')[0];

            return mediaUrl.Replace("~", string.Empty);
        }
    }
}

There is a lot going on here, so let's do a quick breakdown:

  1. GetBlogPosts(): Gets all blog posts from Kentico and parses them into a “BlogPost” class object containing all the fields we want to export.
  2. GetMediaFilePath(): Takes the image path and carries out all the transformation required to change it to a new file path. This method is used in the GetBlogPosts() and RichTextToMarkdown() methods.
  3. RichTextToMarkdown(): Takes rich text and goes through a transformation process to relink images in a format that will be accepted by my image hosting provider - CloudImage. In addition, this is where ReverseMarkdown is used to finally convert to markdown.
  4. CreateMDFile(): Creates the .md file based on the blog posts found in Kentico.

Delving Into The World of Gatsby and Static Site Generators

I have been harbouring an interest in static-site generator architecture ever since I read Paul Stamatiou’s enlightening post about how he built his website. I am always intrigued to know what goes on behind the scenes of someone's website, especially bloggers and the technology stack they use.

Paul built his website using Jekyll. In his post, he explains his reasoning for going down this particular avenue, which, to my great surprise, resonated with me. In the past, I always felt the static-site generator architecture was too restrictive; coming from a .NET background, I felt comfortable knowing my website was built using some form of server-side code connected to a database, allowing me infinite possibilities. Building a static site just seemed like a backwards approach to me. Paul’s opening few paragraphs changed my perception:

..having my website use a static site generator for a few reasons...I did not like dealing with a dynamic website that relied on a typical LAMP stack. Having a database meant that MySQL database backups was mission critical.. and testing them too. Losing an entire blog because of a corrupt database is no fun...

...I plan to keep my site online for decades to come. Keeping my articles in static files makes that easy. And if I ever want to move to another static site generator, porting the files over to another templating system won't be as much of a headache as dealing with a database migration.

And then it hit me. It all made perfect sense!

Enter The Static Site Generator Platform

I’ll admit, I’ve come late to the static site party and never gave it enough thought, so I decided to pick up the slack and research different static-site generator frameworks, including:

  • Jekyll
  • Hugo
  • Gatsby

Jekyll runs on Ruby, Hugo on Go (invented by Google) and Gatsby on React. After some tinkering with each, I opted to invest my time in learning Gatsby. I was very tempted by Hugo (even if it meant learning Go), as it is more stable and has faster build times, which is important to consider for larger websites, but it fundamentally lacks an extensive plugin ecosystem.

Static Generator of Choice: Gatsby

Gatsby comes across as a mature platform offering a wide variety of useful plugins and tools to enhance the application build. I’m already familiar with coding in React from some React Native work I did in the past, which I haven’t had much chance to use since. Being built on React, Gatsby gave me an opportunity to dust off the cobwebs and improve both my React and (in the process) JavaScript skillset.


I was surprised by just how quickly I managed to get up and running. There is very little you have to configure, unlike when working with content-management platforms. In fact, I decided to create a Gatsby version of this very site. Within a matter of days, I was able to replicate the following website functionality:

  • Listing blog posts.
  • Pagination.
  • Filtering by category and tag.
  • SEO - managing page titles, description, open-graph tags, etc.
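To give a flavour of how little wiring the listing and pagination needed, here's a rough sketch of the page-path logic that would sit inside Gatsby's createPages API in gatsby-node.js. The helper name is my own and the context fields are illustrative, not Gatsby's:

```javascript
// Work out the paths for a paginated blog listing.
// Page 1 lives at the base path; subsequent pages get /2, /3 and so on.
function buildListingPaths(totalPosts, postsPerPage, basePath) {
  const pageCount = Math.max(1, Math.ceil(totalPosts / postsPerPage));
  const paths = [];
  for (let page = 1; page <= pageCount; page++) {
    paths.push(page === 1 ? basePath : `${basePath}/${page}`);
  }
  return paths;
}

// Inside gatsby-node.js, each path would then drive a createPage call:
// buildListingPaths(posts.length, 10, '/blog').forEach((path, i) =>
//   createPage({ path, component: listingTemplate, context: { skip: i * 10, limit: 10 } })
// );
```

The skip/limit pair passed through page context is what the listing template's GraphQL query uses to pull back just that page's posts.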

There is such a wealth of information and support online to help you along.

I am very tempted to move over to Gatsby.

When to use Static or Dynamic?

Static site generation isn’t an approach suited to all web application scenarios. It’s more suited to small and medium-sized sites where there isn't a requirement for complex integrations, and it works best with static content that doesn’t need to change based on user interaction.

The only thing that comes into question is the build time when your content runs into thousands of pages. Take Gatsby, for example...

I read about one site containing around 6,000 posts that resulted in a build time of 3 minutes. Build time can vary based on the environment Gatsby is running on and how the build is put together. I personally try to ensure the best-case build time by:

  • Using sufficiently spec'd hardware - both laptop and hosting environment.
  • Keeping the application lean by utilising minimal plugins.
  • Writing efficient JavaScript.
  • Reusing similar GraphQL queries where the same data is being requested more than once in different components, pages and views.
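On that last point, Gatsby supports GraphQL fragments, which let you declare a set of fields once and reuse it across queries. The sketch below uses plain template strings to show the idea - the fragment and field names are illustrative (in a real component you'd use Gatsby's graphql tag):

```javascript
// A shared fragment describing the post fields needed by listings,
// category pages and tag pages alike.
const postFields = `
  fragment PostFields on MarkdownRemark {
    frontmatter { title date slug }
    excerpt
  }
`;

// Two different queries reusing the same fragment, rather than
// repeating the field selection in every component.
const listingQuery = `
  query {
    allMarkdownRemark { nodes { ...PostFields } }
  }
  ${postFields}
`;

const tagQuery = `
  query ($tag: String!) {
    allMarkdownRemark(filter: { frontmatter: { tags: { in: [$tag] } } }) {
      nodes { ...PostFields }
    }
  }
  ${postFields}
`;
```

Beyond saving typing, keeping one fragment means a field rename only has to happen in one place.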

We have to accept the more pages a website has, the slower the build time will be. Hugo should get an honourable mention here as the build speed beats its competition hands down.

Static sites have their place in any project as long as you stay within the confines of the framework. If you have a feeling that your next project will at some point (or immediately) require some form of fanciful integration, dynamic is the way to go. Dynamic gives you unlimited possibilities and will always be the safer option - something static will never measure up to.

The main strengths of static sites are that they’re secure and perform well in Lighthouse scoring, which can potentially result in favourable search engine rankings.

Avenues for Adding Content

The very cool thing is that you have the ability to hook into your content via two options:

  1. Markdown files
  2. Headless CMS

Markdown is such a pleasant and efficient way to write content. It’s all just plain text written with the help of a simplified notation that is then transformed into HTML. The crucial benefit of writing in markdown is its portability and clean output. If in the future I choose to jump to a different static framework, it’s just a copy and paste job.

A more client-friendly solution is to integrate with a headless CMS, where more familiar rich-text editing and media storage are available to hand.

You can also create custom-built pages without having to worry about the data layer, for example, landing pages.

Final Thoughts

I love Gatsby and it’s been a very long time since I have been excited by a different approach to developing websites. I am very tempted to make the move as this framework is made for sites like mine, providing I can get solutions to areas in Gatsby where I currently lack knowledge, such as:

  • Making URLs case-insensitive.
  • 301 redirects.
  • Serving different responsive images within the post content. I understand Gatsby does this at templating level, but I cannot currently see a suitable approach for media housed inside content.
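On the first two points, my current understanding (untested, so treat this as an assumption) is that both can be handled in gatsby-node.js: pages can be re-created with lowercased paths in the onCreatePage hook, and Gatsby's createRedirect action can register 301s - though the redirects themselves need support from the hosting provider to actually be served. A rough sketch of the path normalisation, with a hypothetical helper name:

```javascript
// Lowercase a URL path so /Blog/My-Post and /blog/my-post resolve to the same page.
function normalizePath(pagePath) {
  return pagePath.toLowerCase();
}

// Rough shape of the gatsby-node.js wiring (only meaningful inside Gatsby):
// exports.onCreatePage = ({ page, actions }) => {
//   const lower = normalizePath(page.path);
//   if (lower !== page.path) {
//     actions.deletePage(page);
//     actions.createPage({ ...page, path: lower });
//     actions.createRedirect({ fromPath: page.path, toPath: lower, isPermanent: true });
//   }
// };
```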

I’m sure the above points are achievable and, as I have made quite swift progress on replicating my site in Gatsby, if all goes to plan, I could go the whole hog - meaning I won't serve content from any form of content-management system, cementing myself in Gatsby.

At one point, I was planning on moving over to a headless CMS, such as Kontent or Prismic. That plan was swiftly scrapped when there didn’t seem to be an avenue for migrating my existing content unless a Business or Professional plan was purchased, which came at a high cost.

I will be documenting my progress in follow up posts. So watch this space!