Lazyload and Responsively Serve External Images In GatsbyJS

For the Gatsby version of my website, currently in development, I am serving all my images from CloudImage - a global image CDN. The reason for doing this is to have ultimate flexibility in how images are used within my site, which didn’t necessarily fit with what Gatsby has to offer, especially when it came to how I wanted to position images within blog post content served from markdown files.

As I understand it, Gatsby Image has two methods of responsively resizing images:

  1. Fixed: Images that have a fixed width and height.
  2. Fluid: Images that stretch across a fluid container.

In my blog posts, I like to align my images (just take a look at my post about my time in the Maldives) as it helps break the post up a bit. I won’t be able to achieve that look with the options provided in Gatsby - it’ll all look a little too stacked. The only option is to serve my images from CloudImage, which in the grand scheme of things isn’t a bad idea. I get the benefit of being able to transform images on the fly, optimisation (that can be customised through the dashboard) and fast delivery through its content delivery network.

To meet my image requirements, I decided to develop a custom responsive image component that will perform the following:

  • Lazyload the image when visible in the viewport.
  • Ability to parse an array of "srcset" sizes.
  • Set a default image width.
  • Render the image on page load in low resolution.
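Before getting into the component itself, the srcset generation it performs can be sketched as a standalone function. This is a simplified sketch only - `buildSrcSet` is a hypothetical name, while the `?tr=w-` width parameter and the `{ imageWidth, viewPortWidth }` shape mirror what the component below uses:

```javascript
// Build a "srcset" attribute value from an array of
// { imageWidth, viewPortWidth } entries - a sketch of the
// string building the LazyloadImage component carries out.
function buildSrcSet(src, srcsetSizes) {
  // Basic sanitisation of spaces in the image path.
  const sanitisedSrc = src.replace(" ", "%20");

  return srcsetSizes
    .map(s => `${sanitisedSrc}?tr=w-${s.imageWidth} ${s.viewPortWidth}w`)
    .join(", ");
}

console.log(
  buildSrcSet("/media/logo.jpg", [
    { imageWidth: 400, viewPortWidth: 992 },
    { imageWidth: 200, viewPortWidth: 500 }
  ])
);
// "/media/logo.jpg?tr=w-400 992w, /media/logo.jpg?tr=w-200 500w"
```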

React Visibility Sensor

The component requires the "react-visibility-sensor" plugin to mimic lazy loading functionality. The plugin notifies you when a component enters and exits the viewport. In our case, we only want the sensor to run once, when an image first enters the viewport. By default, the sensor fires every time a block enters and exits the viewport, causing our image to constantly alternate between the small and large versions - something we don't want.
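The "fire once" behaviour boils down to a small piece of state logic: the sensor stays active until the first time it reports visible, then deactivates. As a plain-JavaScript sketch (hypothetical function name; the component below implements the same idea with React state):

```javascript
// Given the "once" flag and a visibility change event, return
// whether the sensor should remain active. Deactivates after
// the first reveal so the large image never swaps back out.
function nextActiveState(active, once, isVisible) {
  if (once && isVisible) {
    return false; // first reveal seen - stop listening
  }
  return active;
}

let active = true;
active = nextActiveState(active, true, false); // still hidden -> stays true
active = nextActiveState(active, true, true);  // first reveal -> false
console.log(active); // false
```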

Thanks to a useful post by Mark Oskon, I found a solution that extends the react-visibility-sensor plugin and allows us to turn off the sensor after the first reveal. I ported the code from Mark's solution into a newly created component housed in "/core/visibility-sensor.js", which I then reference in my LazyloadImage component:

import React, { Component } from "react";
import PropTypes from "prop-types";
import VSensor from "react-visibility-sensor";

class VisibilitySensor extends Component {
  state = {
    active: true
  };

  render() {
    const { active } = this.state;
    const { once, children, ...theRest } = this.props;
    return (
      <VSensor
        active={active}
        onChange={isVisible =>
          once &&
          isVisible &&
          this.setState({ active: false })
        }
        {...theRest}
      >
        {({ isVisible }) => children({ isVisible })}
      </VSensor>
    );
  }
}

VisibilitySensor.propTypes = {
  once: PropTypes.bool,
  children: PropTypes.func.isRequired
};

VisibilitySensor.defaultProps = {
  once: false
};

export default VisibilitySensor;

LazyloadImage Component

import PropTypes from "prop-types";
import React, { Component } from "react";
import VisibilitySensor from "../core/visibility-sensor";

class LazyloadImage extends Component {
  render() {
    let srcSetAttributeValue = "";
    let sanitiseImageSrc = this.props.src.replace(" ", "%20");

    // Iterate through the array of values from the "srcsetSizes" array property.
    if (this.props.srcsetSizes !== undefined && this.props.srcsetSizes.length > 0) {
      for (let i = 0; i < this.props.srcsetSizes.length; i++) {
        srcSetAttributeValue += `${sanitiseImageSrc}?tr=w-${this.props.srcsetSizes[i].imageWidth} ${this.props.srcsetSizes[i].viewPortWidth}w`;

        if (this.props.srcsetSizes.length - 1 !== i) {
          srcSetAttributeValue += ", ";
        }
      }
    }

    return (
      <VisibilitySensor key={sanitiseImageSrc} delayedCall={true} partialVisibility={true} once>
        {({ isVisible }) =>
          isVisible ?
            <img src={`${sanitiseImageSrc}?tr=w-${this.props.widthPx}`}
                 sizes={this.props.sizes}
                 srcSet={srcSetAttributeValue}
                 alt={this.props.alt} /> :
            <img src={`${sanitiseImageSrc}?tr=w-${this.props.defaultWidthPx}`}
                 alt={this.props.alt} />
        }
      </VisibilitySensor>
    );
  }
}

LazyloadImage.propTypes = {
  alt: PropTypes.string,
  defaultWidthPx: PropTypes.number,
  sizes: PropTypes.string,
  src: PropTypes.string,
  srcsetSizes: PropTypes.arrayOf(
    PropTypes.shape({
      imageWidth: PropTypes.number,
      viewPortWidth: PropTypes.number
    })
  ),
  widthPx: PropTypes.number
};

LazyloadImage.defaultProps = {
  alt: ``,
  defaultWidthPx: 50,
  sizes: `50vw`,
  src: ``,
  widthPx: 50
};

export default LazyloadImage;

Component In Use

The example below shows the LazyloadImage component being used to serve a logo in three different widths - 400, 300 and 200 pixels - depending on the viewport width. The src value shown here is a placeholder:

<LazyloadImage src="/media/logo.jpg"
                srcsetSizes={[{ imageWidth: 400, viewPortWidth: 992 }, { imageWidth: 300, viewPortWidth: 768 }, { imageWidth: 200, viewPortWidth: 500 }]}
                alt="Surinder Logo" />

Useful Links

Journey To GatsbyJS: Beta Site Release v2

It’s taken me a little longer to make more progress as I’ve been stumped on how I would go about listing blog posts filtered by year and/or month. I’ve put extra effort into ensuring the full date is included in the URL of all my blog posts. In the process of doing this, I had to review and refactor the functions used within gatsby-node.js.


I noticed that I was carrying out build operations inefficiently and, in some cases, where they didn’t need to happen at all. For example, I was building individual blog post pages all over the place, thinking I was required to do this in areas where I was listing blog posts. Reviewing my build operations had a positive impact and reduced build times on Netlify from 2 minutes 17 seconds to 2 minutes 3 seconds. Where you are able to make build-time savings, why wouldn’t you want to do so? By being efficient, you can squeeze more builds into Netlify’s 300-minute monthly limit (based on the free tier).
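To put the saving into context, a quick back-of-the-envelope calculation (assuming every build takes the same time and the full 300-minute budget is used):

```javascript
// Rough builds-per-month maths against Netlify's 300-minute free tier.
const monthlyBudgetSeconds = 300 * 60; // 18000 seconds
const beforeSeconds = 2 * 60 + 17;     // 137s per build
const afterSeconds = 2 * 60 + 3;       // 123s per build

console.log(Math.floor(monthlyBudgetSeconds / beforeSeconds)); // 131 builds
console.log(Math.floor(monthlyBudgetSeconds / afterSeconds));  // 146 builds
```

A 14-second saving per build works out to roughly 15 extra builds a month.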

Page Speed Tests

The GatsbyJS build is at a point where I can start carrying out some performance tests using Google Page Insights and Lighthouse. Overall, the tests have proved more favourable when compared against my current site. The Lighthouse analysis still shows there is work to be done, however, the static-site generator architecture sets you off to a good start with minimal effort.

Google Lighthouse Stats - Current Site

Google Lighthouse Stats - Gatsby Site

Current HTML/CSS Quality

I can see the main area of failure is the HTML and CSS build... not my strong suit. The template has inherited performance-lag remnants from my current site and even though I have cleaned it up as well as I can, it’s not ideal. At this moment, I have to focus on function over form.

Site Release Details

This version contains the following:

  • Blog post filtering by year and/or month. For example:
    • /Blog/2019
    • /Blog/2019/12
  • Refactored build functions.
  • Removed unneeded CSS from the old template (still got more to do).

GatsbyJS Beta Site:

GatsbyJS: Automatically Include Date In Blog Post Slug

There will be times where you will want to customise the slug based on fields from your markdown file. In my case, I wanted all my blog post URLs in the following format: /Blog/yyyy/MM/dd/Blog-Post-Title. There are two ways of doing this:

  1. Enter the full slug using a “slug” field within your markdown file.
  2. Use the onCreateNode() function found in the gatsby-node.js file to dynamically generate the slug.

My preference would be option 2, as it gives us the flexibility to modify the slug structure in one place when required. If for some reason we had to update our slug structure at a later date, it would be very time-consuming (depending on how many markdown files you have) to update the slug field within each markdown file if we went ahead with option 1.
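As a sketch of what option 2 produces, the target slug format can be expressed as a small helper (`buildPostSlug` is a hypothetical name for illustration; the real work happens in onCreateNode, shown further down):

```javascript
// Build the target slug format: /Blog/yyyy/MM/dd/Blog-Post-Title
// from a post date and a slug stored without any directory structure.
function buildPostSlug(isoDate, slug) {
  const d = new Date(isoDate);
  const pad = n => String(n).padStart(2, "0");

  return `/Blog/${d.getUTCFullYear()}/${pad(d.getUTCMonth() + 1)}/${pad(d.getUTCDate())}${slug}`;
}

console.log(buildPostSlug("2019-09-21T14:51:37Z", "/Maldives-and-Vilamendhoo-Island-Resort"));
// "/Blog/2019/09/21/Maldives-and-Vilamendhoo-Island-Resort"
```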

This post is suited for those who are storing their content using markdown files. I don’t think you will get much benefit if your Gatsby site is linked to a headless CMS, as the slugs are automatically generated within the platform.

The onCreateNode() Function

This function is called whenever a node is created or updated, making it the ideal place to add the functionality we want to perform. It can be found in the gatsby-node.js file.

What we need to do is retrieve the fields we would like to form part of our slug by accessing the node's frontmatter. In our case, all we require is two fields:

  1. Post Date
  2. Slug

const { createFilePath } = require(`gatsby-source-filesystem`)
const moment = require(`moment`)

exports.onCreateNode = ({ node, actions, getNode }) => {
    const { createNodeField } = actions

    if (node.internal.type === `MarkdownRemark`) {
      const relativeFilePath = createFilePath({ node, getNode, trailingSlash: false });
      const postDate = moment(node.frontmatter.date); // Use moment.js to easily change date format.
      const url = `/Blog/${postDate.format("YYYY/MM/DD")}${node.frontmatter.slug}`;

      createNodeField({
        node,
        name: `slug`,
        value: url,
      });
    }
}

After making this change, you will need to re-run the gatsby develop command.

Journey To GatsbyJS: Beta Site Release v1

I am surprised at just how much progress I have made in replicating my site using the GatsbyJS framework. I have roughly spent around 10-12 days (not full days) getting up to speed on everything GatsbyJS and transitioning what I have learnt over to the GatsbyJS version of my site.

Initially, my progress was slow as I had to get my head around GraphQL, the process of how static pages are generated in the hierarchy I require and export my existing blog content to markdown. Having previous experience in React has definitely helped in making relatively swift progress.

What I would say to new GatsbyJS developers is to use the Gatsby Starter Default package - if you really want to understand Gatsby in its entirety. The package gives you enough functionality to understand what’s going on so you can easily make your own customisations. Using other fully functional starter packages caused confusion and led to me asking more questions when attempting to make changes. Trust me, it’s not wise to get too ahead of yourself (as admirable as that might be) in the early stages. Start simple and work your way up!

The interesting thing I noticed whilst working with GatsbyJS is that when I think I am stumped from a functionality point of view, I find there is a plugin that does exactly what I require. GatsbyJS offers an array of quality plugins. For example, I had issues in ordering my "preconnect" declarations within the <head> block so they resided before any styles or scripts. It seemed GatsbyJS has its own way of ordering the <head> elements. Thankfully, as always, there’s a plugin on hand to cure my woes.

Site Release Details

As of today, I have released the first version of my GatsbyJS site to Netlify. It’s by no means perfect and will be a work-in-progress for many iterations to come.

This version contains the following:

  • Implemented styling from the current site. Still rough around the edges and in no way efficiently done.
  • All images are hosted via CloudImage to be served efficiently via CDN with responsive capability.
  • Added custom routing for blog posts to include the date. For example, “/Blog/2020/01/01/My-Blog-Post”.
  • Posts can be filtered by Category (unstyled).
  • Posts Archive page (unstyled)
  • Implemented pagination for blog listing.
  • Added the following plugins:

Making my first publish to Netlify was completed in 2 minutes 17 seconds. From an efficiency standpoint, I don’t know if this is good or bad. For me, 2 minutes seems like a long time. I wonder if it could be due to the 250+ markdown files I’m using for my blog posts and the multiple filtering routes. It’s also worth noting that I’m going completely static by not relying on any content management platform.

GatsbyJS Beta Site:

Year In Review - 2019

I am glad to report that this year was a year of new learning. Not just about things from a technical standpoint but from a personal standpoint. I feel I started the year with a single-track mindset. However, as the year progressed I have become open to new ways of thinking and finally accepting that even though certain personal milestones I set for myself may not have been accomplished, I am content on lessons learnt from failure. Failure may suck, but it’s progression! It also gives me something to write about. :-)

2019 in Words/Phrases

Kentico 12 MVC, Umbraco, GatsbyJs, Azure Dev Ops, Maldives, Hiking, Drupal (yes I had to do that along with a bit of PHP), Cloudflare CDN Configuration, Google Lighthouse score, Headless CMS - strategic asset, Prismic, Netlify, Kontent, CaaS (Content-as-a-Service), Automated backups for personal hosting, iPad for improved productivity, A2 Hosting Issues, Writers block, New desk and office, Failing Macbook Pro battery, Considering an iPhone 11, Fondness of Port.

Site Offline and Lessons Learnt

I was welcomed with the first bit of failure in April where my website was taken offline (along with many others) for a lengthy period thanks to my previous hosting provider, A2 Hosting. They had no backups, no disaster recovery and no customer support. Their whole operation is a disaster.

Failure = Lesson learnt.

The only benefit of this experience was that it led to a chain of events that made me reassess how I host my site, and the realisation of just how important my online presence is to me. Luckily, I was able to get back up and running by moving hosting provider. Thank god I had a recent enough backup to do this.

Popular Posts of The Year

This year I have written 26 posts (including this one). I've delved into my Google Analytics and picked a handful of gems that I believe have, in my own small way, made an impact:

I think my crowning glory is Google classing my post about "Switching Branches in TortoiseGit" as a featured snippet. So if anyone searches for something along those search terms, they will see my post at the top. I don't know how long this will last, but I'll take it!

Google Featured Snippet - Switch Branches In TortoiseGit


My site statistics have increased considerably, which has been amazing. However, I have to remain realistic and grounded in what to expect in future comparisons. I think the figures may plateau over the next year.

The stats I post below are based on organic searches alone - I haven’t actively posted links on social media. Maybe this is something I should get back into for further exposure.

2018/2019 Comparison:

  • Users: +50%
  • Page Views: +47%
  • New Users: +48%
  • Bounce Rate: +0.8%
  • Search Console Total Clicks: +251%
  • Search Console Impressions: +280%
  • Search Console Page Position: -15%


I am so close to hitting the all-time milestone for length of service compared to any company I’ve worked at previously. In fact, I have already surpassed any previous record three-fold. As of next July, it will be 10 years! Wowsers!

I can see the coming year will be a time to reassess how we approach our technical projects to accommodate new markets, technologies and frameworks. It’s always an exciting time to be a developer at Syndicut, but I am looking forward to sinking my teeth into new challenges ahead!

Greater Emphasis on CaaS (Content-as-a-Service)

Over the last year, I have noticed a shift in how content is managed. Even though I have been busy working away on headless CMSs at Syndicut over the last few years, this seems to be the year where it’s properly been given global traction and market awareness. You can tell just by the number of events held for both developers and clients.

Through this exposure, clients are becoming technically savvy and questioning how and where their data is housed. Content is a strategic asset that should no longer be siloed, but distributed across multiple mediums, for example:

  • Website
  • Mobile Applications
  • Digital Billboards

The key to a successful Headless CMS integration is not the development of an application, but the content-modelling. Based on what I have seen from other implementations, sufficient content-modelling always seems to be missed. Data-architecture is key to ensure data is scalable across all mediums.

I am also a Kontent (previously known as Kentico Cloud) Certified Developer.

iPad and Now iPhone???

This subject matter truly alarms me.

I’ve been considering getting an iPhone 11 after Google released their dismal spec of the Pixel 4 and on top of that, finding that I am really happy with my iPad Air purchase. This is coming from an Android fan!

I have no regrets in getting an iPad, especially when combined with a keyboard and Apple Pencil. It makes you a productivity powerhouse! We live in a world where quality Android tablets with sufficient accessories are difficult to find.

If I can get my head around being locked into the Apple eco-system, I might make the move. Why oh why is Google putting me in such a position? :-S

I guess we’ll have to wait till I write my “Year In Review - 2020” on what I ended up doing.

Coffee Tables and Desk Purchased!

In my last year in review, I jokingly added a footnote stating I needed to get a coffee table set and desk. I can mark a massive tick against these two items for a job well done. In fact, I went a step further with purchasing a desk by converting a part of a room into a small office with the following additions:

  • Ikea desk chair
  • Corner shelves
  • An array of potted plants
  • Laptop stand
  • Very cool desk lamp
  • Nice grey rug with some pleasant subtle abstract patterns

It’s now a perfect place where I can work and write without any distractions. The room still requires some finishing touches - in my case, it’s always the small jobs that take the longest!

I was surprised at how productive I’ve been by finally having a small office setup. Gone are the days where I would be reclined on my sofa in front of the TV working away on my laptop.

Redeveloping My Site

It seems like I can’t go through a year without looking into redeveloping my site. It’s the curse when being exposed to working with new technologies and platforms. I like to ensure I am moving with the times too.

I have been considering ditching Kentico as my content management platform and opting for the static-site generator route, such as Gatsby. This would result in simplified, platform-agnostic hosting and site architecture, with the added benefit of portability. I am in the middle of replicating my site functionality using Gatsby to see if it’s a feasible option.

I will be posting links to my “in progress” site hosted on Netlify in my “Journey To GatsbyJs” Series, where I will be writing about things I’ve learnt trying to replicate my site functionality.

Journey To GatsbyJS: Exporting Kentico Blog Posts To Markdown Files

The first thing that came into my head when testing the waters for a move to Gatsby was my blog post content. If I could get my content into a form a Gatsby site accepts, then that's half the battle won right there - the theory being it would simplify the build process.

I opted to go down the local storage route, where Gatsby would serve markdown files for my blog post content. Everything else, such as the homepage, archive, about and contact pages, can be static. I am hoping this isn’t something I will live to regret, but I like the idea of my content being nicely preserved in source control, where I have full ownership without relying on a third-party platform.

My site is currently built on the .NET framework using Kentico CMS. Exporting data is relatively straightforward, but as I transition away from a content-managed approach, I need to ensure all fields used within my blog posts are transformed appropriately into the core building blocks of my markdown files.

A markdown file can carry additional field information about my post, declared at the start of the file in a block wrapped by triple dashes at the start and end. This is called frontmatter.

Here is a snippet of one of my blog posts exported to a markdown file:

---
title: "Maldives and Vilamendhoo Island Resort"
summary: "At Vilamendhoo Island Resort you are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it."
date: "2019-09-21T14:51:37Z"
draft: false
slug: "/Maldives-and-Vilamendhoo-Island-Resort"
disqusId: "b08afeae-a825-446f-b448-8a9cae16f37a"
teaserImage: "/media/Blog/Travel/VilamendhooSunset.jpg"
socialImage: "/media/Blog/Travel/VilamendhooShoreline.jpg"
categories: ["Surinder's Log"]
tags: ["holiday", "maldives"]
---

Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!) which seem to start when I went to [Bali last year](/Blog/2018/07/06/My-Time-At-Melia-Bali-Hotel). 
I find it's a way to pass the time in airports and flights when making the return journey home. So here's another one...

Everything looks well structured, and from the way I have formatted the date, category and tags fields, it will lend itself to being quite accommodating for the needs of future posts. I made the decision to keep the slug value void of any directory structure, to give me flexibility in dynamically creating the URL structure.
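Emitting a block like the one above is mechanical string building. A minimal JavaScript sketch (hypothetical `toFrontmatter` helper; the exporter below does the same thing in C# with a StringBuilder):

```javascript
// Serialise a key/value object into a frontmatter block wrapped in
// triple dashes - strings are quoted (with inner quotes escaped),
// arrays become quoted lists, other values are written as-is.
function toFrontmatter(fields) {
  const lines = Object.entries(fields).map(([key, value]) => {
    if (typeof value === "string") {
      return `${key}: "${value.replace(/"/g, '\\"')}"`;
    }
    if (Array.isArray(value)) {
      return `${key}: [${value.map(v => `"${v}"`).join(", ")}]`;
    }
    return `${key}: ${value}`;
  });

  return `---\n${lines.join("\n")}\n---\n`;
}

console.log(toFrontmatter({
  title: "Maldives and Vilamendhoo Island Resort",
  draft: false,
  tags: ["holiday", "maldives"]
}));
```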

Kentico Blog Posts to Markdown Exporter

The quickest way to get the content out was to create a console app to carry out the following:

  1. Loop through all blog posts in post date descending order.
  2. Update all image paths used as a teaser and within the content.
  3. Convert rich text into markdown.
  4. Construct frontmatter key-value fields.
  5. Output to a text file with the following naming convention: “yyyy-MM-dd---Blog-Post-Title.md”.

Tasks 2 and 3 will require the most effort…

When I first started using Kentico, all references to images were made directly via the file path, and as I got more familiar with Kentico, this changed to use permanent URLs. Using permanent URLs caused the link to an image to change from "/Surinder/media/Surinder/myimage.jpg" to “/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500”. I needed to create additional checks to find these URLs and transform them into a new path.
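The permanent URL check boils down to pulling the GUID out of a getmedia path with a regex. A simplified JavaScript sketch (hypothetical `extractMediaGuid` helper; the real exporter, written in C#, also looks the GUID up in Kentico's media library to resolve the actual file path):

```javascript
// Extract the media file GUID from a Kentico permanent URL, e.g.
// "/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500".
const GUID_PATTERN = /[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/;

function extractMediaGuid(filePath) {
  if (!filePath.includes("getmedia")) {
    return null; // plain file-path reference - nothing to resolve
  }
  const match = filePath.match(GUID_PATTERN);
  return match ? match[0] : null;
}

console.log(extractMediaGuid("/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500"));
// "27b68146-9f25-49c4-aced-ba378f33b4df"
```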

Finding a good .NET markdown converter is imperative. Without this, there is a high chance the rich text content would not be translated to a satisfactory standard, resulting in some form of manual intervention to carry out corrections. Combing through 250 posts manually isn’t my idea of fun! :-)

I found the ReverseMarkdown .NET library allowed for enough options to deal with rich text to Markdown conversion. I could set the conversion process to ignore HTML that couldn’t be transformed, thus preserving content.


using CMS.DataEngine;
using CMS.DocumentEngine;
using CMS.Helpers;
using CMS.MediaLibrary;
using Export.BlogPosts.Models;
using ReverseMarkdown;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

namespace Export.BlogPosts
{
    class Program
    {
        public const string SiteName = "SurinderBhomra";
        public const string MarkdownFilesOutputPath = @"C:\Temp\BlogPosts\";
        public const string NewMediaBaseFolder = "/media";
        public const string CloudImageServiceUrl = "";

        static void Main(string[] args)
        {
            List<BlogPost> blogPosts = GetBlogPosts();

            if (blogPosts.Any())
            {
                foreach (BlogPost bp in blogPosts)
                {
                    bool isMDFileGenerated = CreateMDFile(bp);

                    Console.WriteLine($"{bp.PostDate:yyyy-MM-dd} - {bp.Title} - {(isMDFileGenerated ? "EXPORTED" : "FAILED")}");
                }
            }
        }

        /// <summary>
        /// Retrieve all blog posts from Kentico.
        /// </summary>
        /// <returns></returns>
        private static List<BlogPost> GetBlogPosts()
        {
            List<BlogPost> posts = new List<BlogPost>();

            InfoDataSet<TreeNode> query = DocumentHelper.GetDocuments()
                                               .Path("/Blog", PathTypeEnum.Children)
                                               .OrderBy("BlogPostDate DESC")
                                               .TypedResult;

            if (!DataHelper.DataSourceIsEmpty(query))
            {
                foreach (TreeNode blogPost in query)
                {
                    posts.Add(new BlogPost
                    {
                        Guid = blogPost.NodeGUID.ToString(),
                        Title = blogPost.GetStringValue("BlogPostTitle", string.Empty),
                        Summary = blogPost.GetStringValue("BlogPostSummary", string.Empty),
                        Body = RichTextToMarkdown(blogPost.GetStringValue("BlogPostBody", string.Empty)),
                        PostDate = blogPost.GetDateTimeValue("BlogPostDate", DateTime.MinValue),
                        Slug = blogPost.NodeAlias,
                        DisqusId = blogPost.NodeGUID.ToString(),
                        Categories = blogPost.Categories.DisplayNames.Select(c => c.Value.ToString()).ToList(),
                        Tags = blogPost.DocumentTags.Replace("\"", string.Empty).Split(',').Select(t => t.Trim(' ')).Where(t => !string.IsNullOrEmpty(t)).ToList(),
                        SocialImage = GetMediaFilePath(blogPost.GetStringValue("ShareImageUrl", string.Empty)),
                        TeaserImage = GetMediaFilePath(blogPost.GetStringValue("BlogPostTeaser", string.Empty))
                    });
                }
            }

            return posts;
        }

        /// <summary>
        /// Creates the markdown content based on Blog Post data.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static string GenerateMDContent(BlogPost bp)
        {
            StringBuilder mdBuilder = new StringBuilder();

            #region Post Attributes

            mdBuilder.Append($"---{Environment.NewLine}");
            mdBuilder.Append($"title: \"{bp.Title.Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"summary: \"{HTMLHelper.HTMLDecode(bp.Summary).Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"date: \"{bp.PostDate.ToString("yyyy-MM-ddTHH:mm:ssZ")}\"{Environment.NewLine}");
            mdBuilder.Append($"draft: {bp.IsDraft.ToString().ToLower()}{Environment.NewLine}");
            mdBuilder.Append($"slug: \"/{bp.Slug}\"{Environment.NewLine}");
            mdBuilder.Append($"disqusId: \"{bp.DisqusId}\"{Environment.NewLine}");
            mdBuilder.Append($"teaserImage: \"{bp.TeaserImage}\"{Environment.NewLine}");
            mdBuilder.Append($"socialImage: \"{bp.SocialImage}\"{Environment.NewLine}");

            #region Categories

            if (bp.Categories?.Count > 0)
            {
                CommaDelimitedStringCollection categoriesCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string categoryName in bp.Categories)
                    categoriesCommaDelimited.Add($"\"{categoryName}\"");

                mdBuilder.Append($"categories: [{categoriesCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            #region Tags

            if (bp.Tags?.Count > 0)
            {
                CommaDelimitedStringCollection tagsCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string tagName in bp.Tags)
                    tagsCommaDelimited.Add($"\"{tagName}\"");

                mdBuilder.Append($"tags: [{tagsCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            mdBuilder.Append($"---{Environment.NewLine}{Environment.NewLine}");

            #endregion

            // Add blog post body content.
            mdBuilder.Append(bp.Body);

            return mdBuilder.ToString();
        }

        /// <summary>
        /// Creates files with a .md extension.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static bool CreateMDFile(BlogPost bp)
        {
            string markdownContents = GenerateMDContent(bp);

            if (string.IsNullOrEmpty(markdownContents))
                return false;

            string fileName = $"{bp.PostDate:yyyy-MM-dd}---{bp.Slug}.md";
            File.WriteAllText($@"{MarkdownFilesOutputPath}{fileName}", markdownContents);

            if (File.Exists($@"{MarkdownFilesOutputPath}{fileName}"))
                return true;

            return false;
        }

        /// <summary>
        /// Gets the full relative path of a file based on its Permanent URL ID.
        /// </summary>
        /// <param name="filePath"></param>
        /// <returns></returns>
        private static string GetMediaFilePath(string filePath)
        {
            if (filePath.Contains("getmedia"))
            {
                // Get GUID from file path.
                Match regexFileMatch = Regex.Match(filePath, @"(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}");

                if (regexFileMatch.Success)
                {
                    MediaFileInfo mediaFile = MediaFileInfoProvider.GetMediaFileInfo(Guid.Parse(regexFileMatch.Value), SiteName);

                    if (mediaFile != null)
                        return $"{NewMediaBaseFolder}/{mediaFile.FilePath}";
                }
            }

            // Return the file path and remove the base file path.
            return filePath.Replace("/SurinderBhomra/media/Surinder", NewMediaBaseFolder);
        }

        /// <summary>
        /// Convert parsed rich text value to markdown.
        /// </summary>
        /// <param name="richText"></param>
        /// <returns></returns>
        public static string RichTextToMarkdown(string richText)
        {
            if (!string.IsNullOrEmpty(richText))
            {
                #region Loop through all images and correct the path

                // Clean up tildes.
                richText = richText.Replace("~/", "/");

                #region Transform Image Urls Using Width Parameter

                Regex regexFileUrlWidth = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})(\?width=([0-9]*))", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexFileUrlWidth.Matches(richText))
                {
                    string width = fileUrl.Groups[4] != null ? fileUrl.Groups[4].Value : string.Empty;
                    string newMediaUrl = $"{CloudImageServiceUrl}/width/{width}/n/{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #region Transform Generic File Urls

                Regex regexGenericFileUrl = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexGenericFileUrl.Matches(richText))
                {
                    // Construct media URL required by image hosting company - CloudImage.
                    string newMediaUrl = $"{CloudImageServiceUrl}/cdno/n/n/{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #endregion

                Config config = new Config
                {
                    UnknownTags = Config.UnknownTagsOption.PassThrough, // Include the unknown tag completely in the result (default as well)
                    GithubFlavored = true, // generate GitHub flavoured markdown, supported for BR, PRE and table tags
                    RemoveComments = true, // will ignore all comments
                    SmartHrefHandling = true // remove markdown output for links where appropriate
                };

                Converter markdownConverter = new Converter(config);

                return markdownConverter.Convert(richText).Replace(@"[!\", @"[!").Replace(@"\]", @"]");
            }

            return string.Empty;
        }

        /// <summary>
        /// Returns media url without query string values.
        /// </summary>
        /// <param name="mediaUrl"></param>
        /// <returns></returns>
        private static string ClearQueryStrings(string mediaUrl)
        {
            if (mediaUrl == null)
                return string.Empty;

            if (mediaUrl.Contains("?"))
                mediaUrl = mediaUrl.Split('?').ToList()[0];

            return mediaUrl.Replace("~", string.Empty);
        }
    }
}
There is a lot going on here, so let's do a quick breakdown:

  1. GetBlogPosts(): Gets all blog posts from Kentico and maps them to a “BlogPost” class object containing all the fields we want to export.
  2. GetMediaFilePath(): Takes the image path and carries out the transformations required to produce the new file path. This method is used in the GetBlogPosts() and RichTextToMarkdown() methods.
  3. RichTextToMarkdown(): Takes rich text and goes through a transformation process to relink images in a format that will be accepted by my image hosting provider - CloudImage. In addition, this is where ReverseMarkdown is used to finally convert to markdown.
  4. CreateMDFile(): Creates the .md file based on the blog posts found in Kentico.

Delving Into The World of Gatsby and Static Site Generators

I have been garnering interest in a static-site generator architecture ever since I read Paul Stamatiou’s enlightening post about how he built his website. I am always intrigued to know what goes on behind the scenes of someone's website, especially bloggers and the technology stack they use.

Paul built his website using Jekyll. In his post, he explains his reasoning as to why he decided to go down this particular avenue, which, to my great surprise, resonated with me. In the past, I always felt the static-site generator architecture was too restrictive. Coming from a .NET background, I felt comfortable knowing my website was built using some form of server-side code connected to a database, allowing me infinite possibilities. Building a static site just seemed like a backwards approach to me. Paul’s opening few paragraphs changed my perception:

..having my website use a static site generator for a few reasons...I did not like dealing with a dynamic website that relied on a typical LAMP stack. Having a database meant that MySQL database backups was mission critical.. and testing them too. Losing an entire blog because of a corrupt database is no fun...

...I plan to keep my site online for decades to come. Keeping my articles in static files makes that easy. And if I ever want to move to another static site generator, porting the files over to another templating system won't be as much of a headache as dealing with a database migration.

And then it hit me. It all made perfect sense!

Enter The Static Site Generator Platform

I’ll admit, I’ve come late to the static site party and never gave it enough thought, so I decided to pick up the slack and researched different static-site generator frameworks, including:

  • Jekyll
  • Hugo
  • Gatsby

Jekyll runs on Ruby, Hugo on Go (invented by Google) and Gatsby on React. After some tinkering with each, I opted to invest my time in learning Gatsby. I was very tempted by Hugo (even if it meant learning Go), as it is more stable and has faster build times, which is important to consider for larger websites, but it fundamentally lacks an extensive plugin ecosystem.

Static Generator of Choice: Gatsby

Gatsby comes across as a mature platform offering a wide variety of useful plugins and tools to enhance the application build. I’m already familiar with coding in React from some React Native work I did in the past, which I haven’t had much chance to use since. Being built on React, Gatsby gave me an opportunity to dust off the cobwebs and improve both my React and (in the process) JavaScript skillset.

I was surprised by just how quickly I managed to get up and running. Unlike content-management platforms, there is very little you have to configure. In fact, I decided to create a Gatsby version of this very site. Within a matter of days, I was able to replicate the following website functionality:

  • Listing blog posts.
  • Pagination.
  • Filtering by category and tag.
  • SEO - managing page titles, description, open-graph tags, etc.

There is such a wealth of information and support online to help you along.

I am very tempted to move over to Gatsby.

When to use Static or Dynamic?

A static-site generator isn’t suited to every web application scenario. It’s more suited to small and medium-sized sites where there isn’t a requirement for complex integrations. It works best with static content that doesn’t need to change based on user interaction.

The only thing that comes into question is the build time where you have pages of content in their thousands. Take Gatsby, for example...

I read about one site containing around 6000 posts that resulted in a build time of 3 minutes. The build time can vary based on the environment Gatsby is running on and the quality of the build. I personally try to ensure the best-case build time by:

  • Using sufficiently specced hardware - both laptop and hosting environment.
  • Keeping the application lean by utilising minimal plugins.
  • Writing efficient JavaScript.
  • Reusing similar GraphQL queries where the same data is being requested more than once in different components, pages and views.
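On that last point, Gatsby’s GraphQL fragments are the usual mechanism for sharing one query shape across components. The field names below are illustrative rather than taken from my build, but the pattern is standard: define the fragment once, then spread it wherever that data is needed.

```graphql
# A shared fragment (illustrative fields) that any page or component
# querying blog post data can spread with ...BlogPostFields, keeping
# the requested shape consistent across the site.
fragment BlogPostFields on MarkdownRemark {
  excerpt
  fields {
    slug
  }
  frontmatter {
    title
    date(formatString: "DD MMM YYYY")
  }
}
```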

We have to accept the more pages a website has, the slower the build time will be. Hugo should get an honourable mention here as the build speed beats its competition hands down.

Static sites have their place in any project as long as you conform to the confines of the framework. If you have a feeling that your next project will at some point (or immediately) require some form of fanciful integration, dynamic is the way to go. Dynamic gives you unlimited possibilities and will always be the safer option, something static will never measure up to.

The main strengths of static sites are that they’re secure and perform well in Lighthouse scoring, which can potentially result in favourable search-engine rankings.

Avenues for Adding Content

The very cool thing is that you can hook up your content via two options:

  1. Markdown files
  2. Headless CMS

Markdown is such a pleasant and efficient way to write content. It’s all just plain text written with the help of a simplified notation that is then transformed into HTML. The crucial benefit of writing in markdown is its portability and clean output. If in the future I choose to jump to a different static framework, it’s just a copy and paste job.

A more acceptable client solution is to integrate with a headless CMS, where more familiar rich-text content editing and media storage are available to hand.

You can also create custom-built pages without having to worry about the data layer, for example, landing pages.

Final Thoughts

I love Gatsby and it’s been a very long time since I have been excited by a different approach to developing websites. I am very tempted to make the move as this framework is made for sites like mine, providing I can get solutions to areas in Gatsby where I currently lack knowledge, such as:

  • Making URLs case-insensitive.
  • 301 redirects.
  • Serving different responsive images within the post content. I understand Gatsby does this at templating-level but cannot currently see a suitable approach for media housed inside content.

I’m sure the above points are achievable, and as I have made quite swift progress replicating my site in Gatsby, if all goes to plan I could go the whole hog, meaning I wouldn’t serve content from any form of content-management system and would cement myself in Gatsby.

At one point I was planning on moving over to a headless CMS, such as Kontent or Prismic. That plan was swiftly scrapped when there didn’t seem to be an avenue for migrating my existing content unless a Business or Professional plan was purchased, which came at a high cost.

I will be documenting my progress in follow up posts. So watch this space!

WebMarkupMin: Configuring Minification Level

When WebMarkupMin is first added to a web project, the minification level is set very high by default. I found that it caused my pages to no longer be considered valid HTML and, worse, made things look slightly broken.

WebMarkupMin minified things that I didn’t even think required minification, and the following got stripped out of the page:

  • End HTML tags.
  • Quotes.
  • Protocols from attributes.
  • Form input type attribute.

The good thing is, the level of minification can be controlled by creating a configuration file inside the App_Start directory of your MVC project. I thought it would be useful to post a copy of my WebMarkupMin configuration file for reference when working on future MVC projects, and it might prove useful for others as well.

public class WebMarkupMinConfig
{
    public static void Configure(WebMarkupMinConfiguration configuration)
    {
        configuration.AllowMinificationInDebugMode = false;
        configuration.AllowCompressionInDebugMode = false;
        configuration.DisablePoweredByHttpHeaders = true;

        DefaultLogger.Current = new ThrowExceptionLogger();

        IHtmlMinificationManager htmlMinificationManager = HtmlMinificationManager.Current;
        HtmlMinificationSettings htmlMinificationSettings = htmlMinificationManager.MinificationSettings;
        htmlMinificationSettings.RemoveRedundantAttributes = true;
        htmlMinificationSettings.RemoveHttpProtocolFromAttributes = false;
        htmlMinificationSettings.RemoveHttpsProtocolFromAttributes = false;
        htmlMinificationSettings.AttributeQuotesRemovalMode = HtmlAttributeQuotesRemovalMode.KeepQuotes;
        htmlMinificationSettings.RemoveOptionalEndTags = false;
        htmlMinificationSettings.RemoveEmptyAttributes = false;
        htmlMinificationSettings.PreservableAttributeList = "input[type]";

        IXhtmlMinificationManager xhtmlMinificationManager = XhtmlMinificationManager.Current;
        XhtmlMinificationSettings xhtmlMinificationSettings = xhtmlMinificationManager.MinificationSettings;
        xhtmlMinificationSettings.RemoveRedundantAttributes = true;
        xhtmlMinificationSettings.RemoveHttpProtocolFromAttributes = false;
        xhtmlMinificationSettings.RemoveHttpsProtocolFromAttributes = false;
        xhtmlMinificationSettings.RemoveEmptyAttributes = false;

        IXmlMinificationManager xmlMinificationManager = XmlMinificationManager.Current;
        XmlMinificationSettings xmlMinificationSettings = xmlMinificationManager.MinificationSettings;
        xmlMinificationSettings.CollapseTagsWithoutContent = true;

        IHttpCompressionManager httpCompressionManager = HttpCompressionManager.Current;
        httpCompressionManager.CompressorFactories = new List<ICompressorFactory>
        {
            new DeflateCompressorFactory(),
            new GZipCompressorFactory()
        };
    }
}
Once the configuration file is added to your project, the last thing you need to do is add a reference in the Global.asax file.

protected void Application_Start()
{
    // Compression.
    WebMarkupMinConfig.Configure(WebMarkupMinConfiguration.Instance);
}

Thoughts On Moving To A Headless CMS

I’ll get right to it. Should I be making the move to a headless content-management platform? I am no stranger to the headless CMS sector after many years of using different providers on client projects, so I am well-versed enough in the technology to make a judgement. But any form of judgement gets thrown out the window when making a consideration from a personal perspective.

Making the move to a headless CMS is something I’ve been thinking about for quite some time now, as it would streamline my website development considerably. I can see my web application build footprint being smaller than it is at the moment running on Kentico 12.

This website has been running on Kentico CMS for around 6 years ever since I was first introduced to the Kentico platform, which gave me a very good reason to move from BlogEngine. I wanted my web presence to be more than just a blog that would give me the flexibility to be something more. I do not like the idea of being restricted to just one feature-base.

As great as it is running my website on Kentico CMS, it’s too big an application for my needs. After all, I am just using the content-management functionality and none of the other great features the platform offers, so it’s a good time to start thinking about downsizing and reducing running costs. Headless seems the most suitable option, right?

I won’t be going into detail on what headless is. The internet already covers the subject in a more digestible manner, suitable for varied levels of technical expertise. “Headless CMS” is the industry buzzword that clients are aware of. You can also take a read of a Medium post I wrote last year about one type of headless platform - Kentico Cloud (now named Kontent) - and the market.

So why haven’t I made the move to Headless CMS? I think it comes down to following factors:

  • Pricing
  • Infrastructure and stability
  • Platform changes
  • Trust


Pricing

First and foremost, it’s the price. I am aware that all headless CMS providers have a free or starter tier, each with their own defined limitations, whether that be the number of API requests or content limits. I like to look into the future and see where my online presence may take me, and at some point I would need to consider the cost of a paid tier. How does that fit into my yearly hosting costs?

At the moment, I am paying £80 yearly. If I were to jump onto headless, the cheapest price I’ve seen equates to £66 a year and I haven’t factored in hosting costs yet. I could get away with low-cost hosting as my web build will be on a smaller scale and I plan my next build using the .NET Core framework.

If I had my own company or product where I was looking for ways to deliver content across multiple channels, I would use headless in a heartbeat. I could justify the cost as I know I would be getting my money’s worth, and if I were to find myself exceeding a tier’s limit, I could just move up to the next.

Infrastructure and Stability

Infrastructure and stability of a headless service all come down to how much you’re willing to pay. API uptime is the most important part after the platform features. I’ve noticed that some starter and free tiers do not state an uptime, for example, 99.9% or 99.5%. Depending on the technology stack, this might not be an issue where a constant connection to the API isn’t required - with Gatsby, for example, the API is only called at build time.

One area where a headless CMS does win is the failover and backup procedures in place. They would more than likely surpass the infrastructure of a personally hosted and managed site.

Platform Changes

It’s natural for improvements and changes to be made throughout the lifespan of a product. The thing with headless is that you don’t have a choice over whether you want those changes, as what works for one person may not necessarily work for another. You are locked into the release cycle.

I remember back in the early days when headless CMSs started to gain traction, releases were being made with such a quick turnaround, at the expense of the editors who had to quickly adapt to the subtle changes in features. The good thing now is that the dust has settled and the platforms have matured.

The one area I still have difficulty getting over is the rich-text area. Each headless CMS provider seems to have their restrictions and you never really get full control over HTML markup unless a normal text area is used. There are ways around this but some restrictions still do exist.

Where do you as an individual fit into this lifecycle? That’s the million-dollar question. However, there is one headless platform that is very involved with feedback from their users, Kentico Kontent, where all ideas are put under consideration, voted on and (if successful) submitted into the roadmap. I haven’t seen this approach offered by other Headless CMS platforms and maybe this is something they should also do.


Trust

There is a trust aspect to an external provider storing your content. Data is your most valuable asset. Is there any chance of the service being discontinued at some point? If I am being totally honest with myself, I don’t think this is a valid concern as long as the chosen platform has proven its worth and cemented itself over a lengthy period of time. Choose the big players (in no particular order), such as:

  • Kontent
  • Contentful
  • Prismic
  • DatoCMS
  • ButterCMS

There is also another aspect to trust that draws upon a further point I made in the previous section regarding platform changes. In the past, I’ve seen content features getting deprecated. This doesn’t break your current build, just causes you to rethink things when updating to the newer version API interface.


Conclusion

I think moving to a headless CMS requires a bigger leap than I thought. I say this purely from a personal perspective. The main piece of work would be to carry out content modelling for pages, migrate all my site content and media into the new platform and apply page redirects - all before I have made a start on developing the new site.

I will always be in two minds on whether I should use a Headless CMS. If I wasn’t such a control-freak when it comes to every aspect of my application and content, I think I could make the move. Maybe I just need to learn to let go.

Diagnosing SQL71562 Error When Deploying An Azure SQL Database

When using the “Deploy to Azure Database” option in Microsoft SQL Management Studio to move a database to Azure, you may sometimes come across the following error:

Error SQL71562: Error validating element [dbo].[f]: Synonym: [dbo].[f] has an unresolved reference to object [server].[database_name].[table_name]. External references are not supported when creating a package from this platform.

These types of errors are generated because you cannot set up a linked server in Azure, and queries using four-part [server].[database].[schema].[table] references are not supported. I’ve come across a SQL71562 error in the past, but this one was different. Generally, the error details are a lot more helpful and relate to stored procedures or views where a table path contains the database name:

Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]

Easy enough to resolve. The error I was getting this time threw me, as it didn’t point me to any object in the database where the conflict resided, meaning I would have to look through all possible database objects. This would be easy enough to do manually on a small database, but not on a large one consisting of over 50 stored procedures and 30 views. Thankfully, SQL to the rescue...

To search across all stored procedures and views, you can use the LIKE operator against the definitions of the database’s offending system objects, based on the details gathered from the error message. Note that square brackets act as a wildcard character class inside a LIKE pattern, so search for the plain database name rather than a bracketed one:

-- Stored Procedures
SELECT name
FROM sys.procedures
WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%';

-- Views
SELECT name
FROM sys.views
WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%';
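To widen the net beyond procedures and views (scalar functions and triggers included), the same search can be run once against sys.sql_modules, which exposes the definition of every programmable object. A sketch along the same lines:

```sql
-- Search every programmable object's definition in one pass.
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id) AS ObjectName
FROM sys.sql_modules
WHERE definition LIKE '%database_name%';
```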