Blog

Blogging on programming and life in general.

  • The first thing that came into my head when testing the waters before starting the move over to Gatsby was my blog post content. If I could get my content into a form a Gatsby site accepts, that’s half the battle won right there, the theory being it would simplify the build process.

    I opted to go down the local storage route, where Gatsby would serve markdown files for my blog post content. Everything else, such as the homepage, archive, about and contact pages, can be static. I am hoping this isn’t something I will live to regret, but I like the idea of my content being nicely preserved in source control, where I have full ownership without relying on a third-party platform.

    My site is currently built on the .NET framework using Kentico CMS. Exporting the data is relatively straightforward, but as I transition to an approach without a content-management system, I need to ensure all fields used within my blog posts are transformed appropriately into the core building blocks of my markdown files.

    A markdown file can carry additional field information about my post that can be declared at the start of the file, wrapped by triple dashes at the start and end of the block. This is called frontmatter.

    Here is a snippet of one of my blog posts exported to a markdown file:

    ---
    title: "Maldives and Vilamendhoo Island Resort"
    summary: "At Vilamendhoo Island Resort you are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it."
    date: "2019-09-21T14:51:37Z"
    draft: false
    slug: "/Maldives-and-Vilamendhoo-Island-Resort"
    disqusId: "b08afeae-a825-446f-b448-8a9cae16f37a"
    teaserImage: "/media/Blog/Travel/VilamendhooSunset.jpg"
    socialImage: "/media/Blog/Travel/VilamendhooShoreline.jpg"
    categories: ["Surinders-Log"]
    tags: ["holiday", "maldives"]
    ---
    
    Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!), which seems to have started when I went to [Bali last year](/Blog/2018/07/06/My-Time-At-Melia-Bali-Hotel). 
    I find it's a way to pass the time in airports and on flights when making the return journey home. So here's another one...
    

    Everything looks well structured, and from the way I have formatted the date, category and tags fields, it will lend itself to being quite accommodating for the needs of future posts. I made the decision to keep the slug value void of any directory structure to give me the flexibility to dynamically create a URL structure.

    Kentico Blog Posts to Markdown Exporter

    The quickest way to get the content out was to create a console app to carry out the following:

    1. Loop through all blog posts in descending post date order.
    2. Update all image paths used as a teaser and within the content.
    3. Convert rich text into markdown.
    4. Construct frontmatter key-value fields.
    5. Output to a text file using the following naming convention: “yyyy-MM-dd---Post-Title.md”.

    Tasks 2 and 3 will require the most effort…

    When I first started using Kentico, all references to images were made directly via the file path, and as I got more familiar with Kentico, this was changed to use permanent URLs. Using permanent URLs caused the link to an image to change from "/Surinder/media/Surinder/myimage.jpg" to "/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500". I need to create additional checks to find these URLs and transform them into a new path.

    Finding a good .NET markdown converter is imperative. Without one, there is a high chance the rich text content would not be translated to a satisfactory standard, resulting in some form of manual intervention to carry out corrections. Combing through 250 posts manually isn’t my idea of fun! :-)

    I found the ReverseMarkdown .NET library allowed for enough options to deal with the rich text to markdown conversion. I could configure the conversion process to ignore HTML that couldn’t be transformed, thus preserving content.

    Code

    using CMS.DataEngine;
    using CMS.DocumentEngine;
    using CMS.Helpers;
    using CMS.MediaLibrary;
    using Export.BlogPosts.Models;
    using ReverseMarkdown;
    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.IO;
    using System.Linq;
    using System.Text;
    using System.Text.RegularExpressions;
    
    namespace Export.BlogPosts
    {
        class Program
        {
            public const string SiteName = "SurinderBhomra";
            public const string MarkdownFilesOutputPath = @"C:\Temp\BlogPosts\";
            public const string NewMediaBaseFolder = "/media";
            public const string CloudImageServiceUrl = "https://xxxx.cloudimg.io";
    
            static void Main(string[] args)
            {
                CMSApplication.Init();
    
                List<BlogPost> blogPosts = GetBlogPosts();
    
                if (blogPosts.Any())
                {
                    foreach (BlogPost bp in blogPosts)
                    {
                        bool isMDFileGenerated = CreateMDFile(bp);
    
                        Console.WriteLine($"{bp.PostDate:yyyy-MM-dd} - {bp.Title} - {(isMDFileGenerated ? "EXPORTED" : "FAILED")}");
                    }
    
                    Console.ReadLine();
                }
            }
    
            /// <summary>
            /// Retrieve all blog posts from Kentico.
            /// </summary>
            /// <returns></returns>
            private static List<BlogPost> GetBlogPosts()
            {
                List<BlogPost> posts = new List<BlogPost>();
    
                InfoDataSet<TreeNode> query = DocumentHelper.GetDocuments()
                                                   .OnSite(SiteName)
                                                   .Types("SurinderBhomra.BlogPost")
                                                   .Path("/Blog", PathTypeEnum.Children)
                                                   .Culture("en-GB")
                                                   .CombineWithDefaultCulture()
                                                   .NestingLevel(-1)
                                                   .Published()
                                                   .OrderBy("BlogPostDate DESC")
                                                   .TypedResult;
    
                if (!DataHelper.DataSourceIsEmpty(query))
                {
                    foreach (TreeNode blogPost in query)
                    {
                        posts.Add(new BlogPost
                        {
                            Guid = blogPost.NodeGUID.ToString(),
                            Title = blogPost.GetStringValue("BlogPostTitle", string.Empty),
                            Summary = blogPost.GetStringValue("BlogPostSummary", string.Empty),
                            Body = RichTextToMarkdown(blogPost.GetStringValue("BlogPostBody", string.Empty)),
                            PostDate = blogPost.GetDateTimeValue("BlogPostDate", DateTime.MinValue),
                            Slug = blogPost.NodeAlias,
                            DisqusId = blogPost.NodeGUID.ToString(),
                            Categories = blogPost.Categories.DisplayNames.Select(c => c.Value.ToString()).ToList(),
                            Tags = blogPost.DocumentTags.Replace("\"", string.Empty).Split(',').Select(t => t.Trim(' ')).Where(t => !string.IsNullOrEmpty(t)).ToList(),
                            SocialImage = GetMediaFilePath(blogPost.GetStringValue("ShareImageUrl", string.Empty)),
                            TeaserImage = GetMediaFilePath(blogPost.GetStringValue("BlogPostTeaser", string.Empty))
                        });
                    }
                }
    
                return posts;
            }
    
            /// <summary>
            /// Creates the markdown content based on Blog Post data.
            /// </summary>
            /// <param name="bp"></param>
            /// <returns></returns>
            private static string GenerateMDContent(BlogPost bp)
            {
                StringBuilder mdBuilder = new StringBuilder();
    
                #region Post Attributes
    
                mdBuilder.Append($"---{Environment.NewLine}");
                mdBuilder.Append($"title: \"{bp.Title.Replace("\"", "\\\"")}\"{Environment.NewLine}");
                mdBuilder.Append($"summary: \"{HTMLHelper.HTMLDecode(bp.Summary).Replace("\"", "\\\"")}\"{Environment.NewLine}");
                mdBuilder.Append($"date: \"{bp.PostDate.ToString("yyyy-MM-ddTHH:mm:ssZ")}\"{Environment.NewLine}");
                mdBuilder.Append($"draft: {bp.IsDraft.ToString().ToLower()}{Environment.NewLine}");
                mdBuilder.Append($"slug: \"/{bp.Slug}\"{Environment.NewLine}");
                mdBuilder.Append($"disqusId: \"{bp.DisqusId}\"{Environment.NewLine}");
                mdBuilder.Append($"teaserImage: \"{bp.TeaserImage}\"{Environment.NewLine}");
                mdBuilder.Append($"socialImage: \"{bp.SocialImage}\"{Environment.NewLine}");
    
                #region Categories
    
                if (bp.Categories?.Count > 0)
                {
                    CommaDelimitedStringCollection categoriesCommaDelimited = new CommaDelimitedStringCollection();
    
                    foreach (string categoryName in bp.Categories)
                        categoriesCommaDelimited.Add($"\"{categoryName}\"");
    
                    mdBuilder.Append($"categories: [{categoriesCommaDelimited.ToString()}]{Environment.NewLine}");
                }
    
                #endregion
    
                #region Tags
    
                if (bp.Tags?.Count > 0)
                {
                    CommaDelimitedStringCollection tagsCommaDelimited = new CommaDelimitedStringCollection();
    
                    foreach (string tagName in bp.Tags)
                        tagsCommaDelimited.Add($"\"{tagName}\"");
    
                    mdBuilder.Append($"tags: [{tagsCommaDelimited.ToString()}]{Environment.NewLine}");
                }
    
                #endregion
    
                mdBuilder.Append($"---{Environment.NewLine}{Environment.NewLine}");
    
                #endregion
    
                // Add blog post body content.
                mdBuilder.Append(bp.Body);
    
                return mdBuilder.ToString();
            }
    
            /// <summary>
            /// Creates files with a .md extension.
            /// </summary>
            /// <param name="bp"></param>
            /// <returns></returns>
            private static bool CreateMDFile(BlogPost bp)
            {
                string markdownContents = GenerateMDContent(bp);
    
                if (string.IsNullOrEmpty(markdownContents))
                    return false;
    
                string fileName = $"{bp.PostDate:yyyy-MM-dd}---{bp.Slug}.md";
                File.WriteAllText($@"{MarkdownFilesOutputPath}{fileName}", markdownContents);
    
                if (File.Exists($@"{MarkdownFilesOutputPath}{fileName}"))
                    return true;
    
                return false;
            }
    
            /// <summary>
            /// Gets the full relative path of a file based on its Permanent URL ID.
            /// </summary>
            /// <param name="filePath"></param>
            /// <returns></returns>
            private static string GetMediaFilePath(string filePath)
            {
                if (filePath.Contains("getmedia"))
                {
                    // Get GUID from file path.
                    Match regexFileMatch = Regex.Match(filePath, @"(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}");
    
                    if (regexFileMatch.Success)
                    {
                        MediaFileInfo mediaFile = MediaFileInfoProvider.GetMediaFileInfo(Guid.Parse(regexFileMatch.Value), SiteName);
    
                        if (mediaFile != null)
                            return $"{NewMediaBaseFolder}/{mediaFile.FilePath}";
                    }
                }
    
                // Return the file path and remove the base file path.
                return filePath.Replace("/SurinderBhomra/media/Surinder", NewMediaBaseFolder);
            }
    
            /// <summary>
            /// Convert parsed rich text value to markdown.
            /// </summary>
            /// <param name="richText"></param>
            /// <returns></returns>
            public static string RichTextToMarkdown(string richText)
            {
                if (!string.IsNullOrEmpty(richText))
                {
                    #region Loop through all images and correct the path
    
                    // Clean up tildes.
                    richText = richText.Replace("~/", "/");
    
                    #region Transform Image Url's Using Width Parameter
    
                    Regex regexFileUrlWidth = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})(\?width=([0-9]*))", RegexOptions.Multiline | RegexOptions.IgnoreCase);
    
                    foreach (Match fileUrl in regexFileUrlWidth.Matches(richText))
                    {
                        string width = fileUrl.Groups[4].Success ? fileUrl.Groups[4].Value : string.Empty;
                        string newMediaUrl = $"{CloudImageServiceUrl}/width/{width}/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";
    
                        if (newMediaUrl != string.Empty)
                            richText = richText.Replace(fileUrl.Value, newMediaUrl);
                    }
    
                    #endregion
    
                    #region Transform Generic File Url's
    
                    Regex regexGenericFileUrl = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})", RegexOptions.Multiline | RegexOptions.IgnoreCase);
    
                    foreach (Match fileUrl in regexGenericFileUrl.Matches(richText))
                    {
                        // Construct media URL required by image hosting company - CloudImage. 
                        string newMediaUrl = $"{CloudImageServiceUrl}/cdno/n/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";
    
                        if (newMediaUrl != string.Empty)
                            richText = richText.Replace(fileUrl.Value, newMediaUrl);
                    }
    
                    #endregion
    
                    #endregion
    
                    Config config = new Config
                    {
                        UnknownTags = Config.UnknownTagsOption.PassThrough, // Include the unknown tag completely in the result (default as well)
                        GithubFlavored = true, // generate GitHub flavoured markdown, supported for BR, PRE and table tags
                        RemoveComments = true, // will ignore all comments
                        SmartHrefHandling = true // remove markdown output for links where appropriate
                    };
    
                    Converter markdownConverter = new Converter(config);
    
                    // Strip the stray backslash escaping the converter adds around image markdown.
                    return markdownConverter.Convert(richText).Replace(@"[!\", @"[!").Replace(@"\]", @"]");
                }
    
                return string.Empty;
            }
    
            /// <summary>
            /// Returns media url without query string values.
            /// </summary>
            /// <param name="mediaUrl"></param>
            /// <returns></returns>
            private static string ClearQueryStrings(string mediaUrl)
            {
                if (mediaUrl == null)
                    return string.Empty;
    
                if (mediaUrl.Contains("?"))
                    mediaUrl = mediaUrl.Split('?').ToList()[0];
    
                return mediaUrl.Replace("~", string.Empty);
            }
        }
    }
    

    There is a lot going on here, so let's do a quick breakdown:

    1. GetBlogPosts(): Gets all blog posts from Kentico and maps them to a “BlogPost” class object containing all the fields we want to export (a sketch of this model follows below).
    2. GetMediaFilePath(): Takes the image path and carries out all the transformation required to change it to a new file path. This method is used in the GetBlogPosts() and RichTextToMarkdown() methods.
    3. RichTextToMarkdown(): Takes rich text and goes through a transformation process to relink images in a format that will be accepted by my image hosting provider - CloudImage. In addition, this is where ReverseMarkdown is used to finally convert to markdown.
    4. CreateMDFile(): Creates the .md file based on the blog posts found in Kentico.
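
    For reference, the BlogPost model in the Export.BlogPosts.Models namespace isn’t shown above. A minimal sketch of the class, inferred purely from the fields the exporter populates (so not necessarily the original), would look something like this:

    using System;
    using System.Collections.Generic;
    
    namespace Export.BlogPosts.Models
    {
        /// <summary>
        /// Holds the exported blog post fields. Inferred sketch, not the original class.
        /// </summary>
        public class BlogPost
        {
            public string Guid { get; set; }
            public string Title { get; set; }
            public string Summary { get; set; }
            public string Body { get; set; }
            public DateTime PostDate { get; set; }
            public bool IsDraft { get; set; }
            public string Slug { get; set; }
            public string DisqusId { get; set; }
            public List<string> Categories { get; set; }
            public List<string> Tags { get; set; }
            public string SocialImage { get; set; }
            public string TeaserImage { get; set; }
        }
    }
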
  • I have been growing an interest in static-site generator architecture ever since I read Paul Stamatiou’s enlightening post about how he built his website. I am always intrigued to know what goes on behind the scenes of someone’s website, especially a blogger’s, and the technology stack they use.

    Paul built his website using Jekyll. In his post, he explains his reasoning for going down this particular avenue, which, to my great surprise, resonated with me. In the past, I always felt the static-site generator architecture was too restrictive; coming from a .NET background, I felt comfortable knowing my website was built using some form of server-side code connected to a database, allowing me infinite possibilities. Building a static site just seemed like a backwards approach to me. Paul’s opening few paragraphs changed my perception:

    ..having my website use a static site generator for a few reasons...I did not like dealing with a dynamic website that relied on a typical LAMP stack. Having a database meant that MySQL database backups was mission critical.. and testing them too. Losing an entire blog because of a corrupt database is no fun...

    ...I plan to keep my site online for decades to come. Keeping my articles in static files makes that easy. And if I ever want to move to another static site generator, porting the files over to another templating system won't be as much of a headache as dealing with a database migration.

    And then it hit me. It all made perfect sense!

    Enter The Static Site Generator Platform

    I’ll admit, I’ve come late to the static site party and never gave it enough thought, so I decided to pick up the slack and researched different static-site generator frameworks, including:

    • Jekyll
    • Hugo
    • Gatsby

    Jekyll runs on the Ruby language, Hugo on Go (invented by Google) and Gatsby on React. After some tinkering with each, I opted to invest my time in learning Gatsby. I was very tempted by Hugo (even if it meant learning Go), as it is more stable and requires less build time, which is important to consider for larger websites, but it fundamentally lacks an extensive plugin ecosystem.

    Static Generator of Choice: Gatsby

    Gatsby comes across as a mature platform offering a wide variety of useful plugins and tools to enhance the application build. I’m already familiar with coding in React from some React Native work I did in the past, which I haven’t had much chance to use since. Being built on React, Gatsby gave me an opportunity to dust off the cobwebs and improve both my React and (in the process) JavaScript skillset.


    I was surprised by just how quickly I managed to get up and running. There is nothing you have to configure, unlike when working with content-management platforms. In fact, I decided to create a Gatsby version of this very site. Within a matter of days, I was able to replicate the following website functionality:

    • Listing blog posts.
    • Pagination.
    • Filtering by category and tag.
    • SEO - managing page titles, description, open-graph tags, etc.

    There is such a wealth of information and support online to help you along.

    I am very tempted to move over to Gatsby.

    When to use Static or Dynamic?

    A static site generator isn’t a framework suited to all web application scenarios. It’s more suited to small and medium-sized sites where there isn’t a requirement for complex integrations. It works best with static content that doesn’t require changes to occur based on user interaction.

    The only thing that comes into question is the build time when you have pages of content in their thousands. Take Gatsby, for example...

    I read about one site containing around 6000 posts that resulted in a build time of 3 minutes. The build time can vary based on the environment Gatsby is running on and the quality of the build. I personally try to ensure the best-case build time by:

    • Using sufficiently spec'd hardware - laptop and hosting environment.
    • Keeping the application lean by utilising minimal plugins.
    • Writing efficient JavaScript.
    • Reusing similar GraphQL queries where the same data is being requested more than once in different components, pages and views.

    We have to accept the more pages a website has, the slower the build time will be. Hugo should get an honourable mention here as the build speed beats its competition hands down.

    Static sites have their place in any project, as long as you conform to the confines of the framework. If you have a feeling that your next project will at some point (or immediately) require some form of fanciful integration, dynamic is the way to go. Dynamic gives you unlimited possibilities and will always be the safer option, something static will never measure up to.

    The main strengths of static sites are that they’re secure and perform well in Lighthouse scoring, which can potentially result in favourable search engine rankings.

    Avenues for Adding Content

    The very cool thing is that you have the ability to hook up your content via two options:

    1. Markdown files
    2. Headless CMS

    Markdown is such a pleasant and efficient way to write content. It’s all just plain text written with the help of a simplified notation that is then transformed into HTML. The crucial benefit of writing in markdown is its portability and clean output. If in the future I choose to jump to a different static framework, it’s just a copy and paste job.

    A more acceptable client solution is to integrate with a headless CMS, where more familiar rich-text content editing and media storage are available to hand.

    You can also create custom-built pages without having to worry about the data layer, for example, landing pages.

    Final Thoughts

    I love Gatsby and it’s been a very long time since I have been excited by a different approach to developing websites. I am very tempted to make the move as this framework is made for sites like mine, providing I can get solutions to areas in Gatsby where I currently lack knowledge, such as:

    • Making URLs case-insensitive.
    • 301 redirects.
    • Serving different responsive images within the post content. I understand Gatsby does this at the templating level, but I cannot currently see a suitable approach for media housed inside content.

    I’m sure the above points are achievable, and as I have made quite swift progress on replicating my site in Gatsby, if all goes to plan, I could go the whole hog, meaning I won’t serve content from any form of content-management system and will cement myself in Gatsby.

    At one point I was planning on moving over to a headless CMS, such as Kontent or Prismic. That plan was swiftly scrapped when there didn’t seem to be an avenue for migrating my existing content unless a Business or Professional plan was purchased, which came at a high cost.

    I will be documenting my progress in follow up posts. So watch this space!

  • When WebMarkupMin is first added to a web project, the minification is set very high by default, and I found that it caused my pages not to be considered valid HTML and, worse, things to look slightly broken.

    WebMarkupMin minified things that I didn’t even think required minification, and the following got stripped out of the page:

    • End HTML tags.
    • Quotes.
    • Protocols from attributes.
    • Form input type attribute.

    The good thing is, the level of minification can be controlled by creating a configuration file inside the App_Start directory of your MVC project. I thought it would be useful to post a copy of my WebMarkupMin configuration file for reference when working on future MVC projects; it might also prove useful for others.

    // Namespaces below come from the WebMarkupMin.AspNet4.Mvc NuGet package and its dependencies.
    using System.Collections.Generic;
    using WebMarkupMin.AspNet.Common.Compressors;
    using WebMarkupMin.AspNet4.Common;
    using WebMarkupMin.Core;
    using WebMarkupMin.Core.Loggers;
    
    public class WebMarkupMinConfig
    {
        public static void Configure(WebMarkupMinConfiguration configuration)
        {
            configuration.AllowMinificationInDebugMode = false;
            configuration.AllowCompressionInDebugMode = false;
            configuration.DisablePoweredByHttpHeaders = true;
    
            DefaultLogger.Current = new ThrowExceptionLogger();
    
            IHtmlMinificationManager htmlMinificationManager = HtmlMinificationManager.Current;
            HtmlMinificationSettings htmlMinificationSettings = htmlMinificationManager.MinificationSettings;
            htmlMinificationSettings.RemoveRedundantAttributes = true;
            htmlMinificationSettings.RemoveHttpProtocolFromAttributes = false;
            htmlMinificationSettings.RemoveHttpsProtocolFromAttributes = false;
            htmlMinificationSettings.AttributeQuotesRemovalMode = HtmlAttributeQuotesRemovalMode.KeepQuotes;
            htmlMinificationSettings.RemoveOptionalEndTags = false;
            htmlMinificationSettings.RemoveEmptyAttributes = false;
            htmlMinificationSettings.PreservableAttributeList = "input[type]";
    
            IXhtmlMinificationManager xhtmlMinificationManager = XhtmlMinificationManager.Current;
            XhtmlMinificationSettings xhtmlMinificationSettings = xhtmlMinificationManager.MinificationSettings;
            xhtmlMinificationSettings.RemoveRedundantAttributes = true;
            xhtmlMinificationSettings.RemoveHttpProtocolFromAttributes = false;
            xhtmlMinificationSettings.RemoveHttpsProtocolFromAttributes = false;
            xhtmlMinificationSettings.RemoveEmptyAttributes = false;
    
            IXmlMinificationManager xmlMinificationManager = XmlMinificationManager.Current;
            XmlMinificationSettings xmlMinificationSettings = xmlMinificationManager.MinificationSettings;
            xmlMinificationSettings.CollapseTagsWithoutContent = true;
    
            IHttpCompressionManager httpCompressionManager = HttpCompressionManager.Current;
            httpCompressionManager.CompressorFactories = new List<ICompressorFactory>
            {
                new DeflateCompressorFactory(),
                new GZipCompressorFactory()
            };
        }
    }
    

    Once the configuration file is added to your project, the last thing you need to do is add a reference in the Global.asax file.

    protected void Application_Start()
    {
        // Configure WebMarkupMin minification and compression.
        WebMarkupMinConfig.Configure(WebMarkupMinConfiguration.Instance);
    }
    
  • I’ll get right to it. Should I be making the move to a headless content management platform? I am no stranger to the headless CMS sector after many years of using different providers for client-based projects, so I am well-versed enough in the technology to make a judgement. But any form of judgement gets thrown out the window when making a consideration from a personal perspective.

    Making the move to a headless CMS is something I’ve been thinking about for quite some time now, as it would streamline my website development considerably. I can see my web application build footprint being smaller compared to how it is at the moment running on Kentico 12.

    This website has been running on Kentico CMS for around 6 years, ever since I was first introduced to the Kentico platform, which gave me a very good reason to move from BlogEngine. I wanted my web presence to be more than just a blog, giving me the flexibility to be something more. I do not like the idea of being restricted to just one feature-base.

    As great as it is running my website on Kentico CMS, it’s too big an application for my needs. After all, I am just using the content-management functionality and none of the other great features the platform offers, so it’s a good time to start thinking of downsizing and reducing running costs. Headless seems the most suitable option, right?

    I won’t be going into detail on what headless is. The internet contains information on the subject, detailed over the years in a digestible manner for varied levels of technical expertise. “Headless CMS” is the industry buzzword that clients are aware of. You can also take a read of a Medium post I wrote last year about one type of headless platform - Kentico Cloud (now named Kontent) - and the market.

    So why haven’t I made the move to a headless CMS? I think it comes down to the following factors:

    • Pricing
    • Infrastructure and stability
    • Platform changes
    • Trust

    Pricing

    First and foremost, it’s the price. I am aware that all headless CMS providers have a free or starter tier, each with their own defined limitations, whether that be the number of API requests or content limits. I like to look into the future and see where my online presence may take me, and at some point I would need to consider the cost of a paid tier. How does that fit into my yearly hosting costs?

    At the moment, I am paying £80 yearly. If I were to jump onto headless, the cheapest price I’ve seen equates to £66 a year, and I haven’t factored in hosting costs yet. I could get away with low-cost hosting, as my web build will be on a smaller scale, and I plan on making my next build using the .NET Core framework.

    If I had my own company or product where I was looking for ways to deliver content across multiple channels, I would use headless in a heartbeat. I could justify the cost, as I know I would be getting my money’s worth, and if I were to find myself exceeding a tier’s limit, I could just move onto the next.

    Infrastructure and Stability

    Infrastructure and stability of a headless service all come down to how much you’re willing to pay. API uptime is the most important part after the platform features. I’ve noticed that some starter and free tiers do not state an uptime, for example, 99.9% or 99.5%. Depending on the technology stack, this might not be an issue where a constant connection to the API isn’t required - Gatsby, for example, only needs the API at build time.

    One area where I do think headless CMS wins is in the failover and backup procedures in place. These would more than likely surpass the infrastructure of a personally hosted and managed site.

    Platform Changes

    It’s natural for improvements and changes to be made throughout the lifespan of a product. The only thing with headless is that you don’t have a choice on whether you want those changes, as what works for one person may not necessarily work for another. You are locked into the release cycle.

    I remember back in the early days when headless CMSs started to gain traction, releases were being made with such a quick turnaround, at the expense of the editors who had to quickly adapt to the subtle changes in features. The good thing is the dust has now settled, as the platforms have reached the point of maturity.

    The one area I still have difficulty getting over is the rich-text area. Each headless CMS provider seems to have their own restrictions, and you never really get full control over HTML markup unless a normal text area is used. There are ways around this, but some restrictions still exist.

    Where do you as an individual fit into this lifecycle? That’s the million-dollar question. However, there is one headless platform that is very involved with feedback from its users: Kentico Kontent, where all ideas are put under consideration, voted on and (if successful) added to the roadmap. I haven’t seen this approach offered by other headless CMS platforms, and maybe it is something they should also do.

    Trust

    There is a trust aspect to an external provider storing your content. Data is your most valuable asset. Is there any chance of the service being discontinued at some point? If I am being totally honest with myself, I don’t think this is a valid point as long as the chosen platform has proven its worth and cemented itself over a lengthy period of time. Choose the big players (in no particular order), such as:

    • Kontent
    • Contentful
    • Prismic
    • DatoCMS
    • ButterCMS

    There is also another aspect to trust that draws upon a point I made in the previous section regarding platform changes. In the past, I’ve seen content features get deprecated. This doesn’t break your current build; it just causes you to rethink things when updating to the newer version of the API interface.

    Conclusion

    I think moving to a Headless CMS requires a bigger leap than I thought. I say this purely from a personal perspective. The main piece of work would be to carry out content modelling for pages, migrate all my site content and media into the new platform and apply page redirects. This is before I have made a start in developing the new site.

    I will always be in two minds on whether I should use a Headless CMS. If I wasn’t such a control-freak when it comes to every aspect of my application and content, I think I could make the move. Maybe I just need to learn to let go.

  • When using the “Deploy to Azure Database” option in Microsoft SQL Management Studio to move a database to Azure, you may sometimes come across the following error:

    Error SQL71562: Error validating element [dbo].[f]: Synonym: [dbo].[f] has an unresolved reference to object [server].[database_name].[table_name]. External references are not supported when creating a package from this platform.

    These types of errors are generated because you cannot set up a linked server in Azure, and queries using four-part [server].[database].[schema].[table] references are not supported. I’ve come across a SQL71562 error in the past, but this one was different. Generally, the error details are a lot more helpful and relate to stored procedures or views where a table path contains the database name:

    Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]

    Easy enough to resolve. The error I was getting this time threw me, as it didn’t point me to any object in the database where the conflict resides, which would require me to look through all possible database objects. That would be easy enough to do manually on a small database, but not on a large database consisting of over 50 stored procedures and 30 views. Thankfully, SQL to the rescue...

    To search across all stored procedures and views, you can use the LIKE operator against the database’s system objects, based on the details you can gather from the error message:

    -- Stored Procedures
    -- Replace database_name below with the name from the error message. Note that
    -- square brackets act as wildcard character classes in LIKE, so search without them.
    SELECT OBJECT_NAME(object_id),
           OBJECT_DEFINITION(object_id)
    FROM sys.procedures
    WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%'
    
    -- Views
    SELECT OBJECT_NAME(object_id),
           OBJECT_DEFINITION(object_id)
    FROM sys.views
    WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%'
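
    As an aside (a variation of my own, not from the original fix), the same search can be widened to cover functions and triggers in a single pass via sys.sql_modules, which holds the definition of every SQL-defined object:

    -- All SQL-defined objects: procedures, views, functions and triggers.
    SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
           OBJECT_NAME(object_id) AS ObjectName
    FROM sys.sql_modules
    WHERE definition LIKE '%database_name%'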
    
  • I have some websites in a production environment that need to run from within a subdirectory, and in order to carry out proper testing during development, I need to ensure all references to CSS, JS and image files work. By default, when a .NET Core site is run from Visual Studio, it will always start from the root, resulting in a broken-looking page.

    From .NET Core 2.0, within your Startup.cs file, you can set a subdirectory using the UsePathBase extension within the Configure method:

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
            app.UsePathBase("/mydirectory");
    }
    

    Now when the site runs, it’ll be accessible from /mydirectory. In my code example, I only want to set the path base when in development mode. When released to production, the path will be configured at IIS level.

    The only annoyance is when you run the site in Visual Studio, it will still start at the root and not at your newly declared subdirectory. I was surprised to see that the site is still accessible at the root, when you would expect the root path to be disabled or even greeted with a 404 response.

    At first glance, I thought there was a bug in my path base declaration and that perhaps I had missed something. After viewing a closed GitHub issue raised back in 2017, it was stated that this is in fact the intended functionality. This is a minor bugbear I can live with.
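
    If serving the site at the root during development really bothers you, one workaround (a sketch of my own, not an officially documented fix) is to short-circuit any request that did not match the path base, since UsePathBase leaves Request.PathBase empty for those requests:

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UsePathBase("/mydirectory");
    
            // UsePathBase lets unmatched requests continue through the pipeline,
            // so return a 404 for anything that didn't carry the path base.
            app.Use(async (context, next) =>
            {
                if (!context.Request.PathBase.HasValue)
                {
                    context.Response.StatusCode = 404;
                    return;
                }
    
                await next();
            });
        }
    }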

  • Maldives and Vilamendhoo Island Resort

    Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!), which seems to have started when I went to Bali last year. I find it's a way to pass the time in airports and on flights when making the return journey home. So here's another one...

    Sun, sea, sand and a beach-facing hut with an open-roofed bathroom... Yes, I have arrived at the Vilamendhoo Resort, tucked away amongst the many other beautiful islands of the Maldives.

    I’m not supposed to be here...

    Vilamendhoo Island Resort - Entrance

    If I told you our holiday location and resort was in fact booked by mistake, you’d probably think I was lying. But that’s exactly what happened. Originally, the holiday was supposed to be at the Preskill Resort in Mauritius. I can almost understand the travel agent getting “Mauritius” and “Maldives” mixed up, but how can the resort name “Preskill” get misconstrued as “Vilamendhoo”? Yes, I haven’t quite figured out this conundrum myself. Obviously, the travel agent had other plans and my words fell on deaf ears. In fact, I have a tendency to fall deaf on hearing two words: “All inclusive”. :-)

    It was only upon being emailed the confirmation and itinerary of the holiday that I realised I did not get what I asked for. But being the type of person who is a great believer that there are no mistakes and everything happens for a reason through the twists and turns in the grand journey we call life, I carried out some research on my new destination and was indeed impressed by what I saw.

    The Maldives

    I’ve seen the Maldives through the visual haze of TV, the internet and brochures - all of which never truly sold the destination to me, as everything always seemed too perfect. I think we have all experienced times in the past where promotional pictures look nothing like the real thing. Thus, I never really planned on making a visit any time soon.

    Vilamendhoo Island Resort - Shoreline

    This could not be further from the truth.

    The Maldives is one of those god-given gems that needs to be witnessed in reality. This only becomes apparent when you hop onto the propeller-powered plane from Male airport and soar over the infinite blue seas and islands naturally formed in all shapes and sizes. Your eyes are in for a visual treat and will salivate in glee!

    The only variable is the accommodation you come to choose and Vilamendhoo did not disappoint.

    Getting there

    I was flying from the UK which encompassed:

    • Flight from Gatwick to Doha.
    • Connecting flight from Doha to Male International Airport.
    • A Maldivian domestic flight.
    • Small bus ride to the coast.
    • Speed boat ride that has multiple drop-off points to other island resorts depending on the number of people you’re with.

    Due to my lack of geographic knowledge and overall preparedness for this trip alone, I was quite naive in comprehending the effort involved. Getting to Vilamendhoo could have been a lot more straightforward if I had opted for a seaplane transfer from Male, something that only came to light after speaking to other holiday-goers. Paying extra for the seaplane transfer alone saves you around an hour (excluding waiting time) of additional travel time.

    Villa Air Flight

    Male airport is the central hub for providing the various transport for the final leg of your trip, depending on where your final destination resides. Trying to find the connecting transport was a little confusing and lacked clarity, but the people there are more than happy to guide and help with queries.

    It’s a no-frills airport that lacks free Wi-Fi (a standard staple by today’s standards), where you’ll need to make your own entertainment, so make sure you download some films and books beforehand to pass the time. Or, you could get your first glimpse of what is in store by sitting outside and watching the seaplanes and yachts pass by as they float over the blue sea.

    Due to the effort alone to get here, I would recommend you plan on staying in the Maldives for a minimum of 10 days to make it more worthwhile. Next time I plan on staying longer... if my work lets me. ;-)

    Greetings - Vilamendhoo Style!

    Upon arriving at the dock by speed boat, you hear the sounds of a beating drum as you are greeted by one of the representatives and make your short walk onto the island. After some quick form filling and an introduction over a refreshing mocktail at reception, we were good to head on over to our accommodation.

    No Shoes, No News

    What I found immediately different at Vilamendhoo compared to the other resorts I’ve stayed at before is their motto: no shoes, no news, which is evident in both the workers and holiday-goers. You can literally walk around the island and enter reception and restaurants in bare feet! All floor surfaces (excluding the spa and your hut) have a thick layer of sand. It’s just like walking on a beach everywhere you go! Being the Indian I am, I still walked around with flip-flops, as some areas were a little too rough for my sensitive feet.

    Accommodation - Beach Villa

    Vilamendhoo has varied accommodation based on where you’d like to be situated on the island and, most importantly, price. There are four options to choose from (ordered from cheapest):

    1. Garden Room
    2. Beach Villa
    3. Jacuzzi Beach Villa
    4. Jacuzzi Water Villa

    The Garden Room and Beach Villa offer the same amenities, with the main difference being the location, as highlighted by the name. If you can afford to pay a little extra, definitely go for the Beach Villa; you won’t regret it. How can you afford not to? You basically have a little slice of your own beach-front paradise metres away from the sea. I found this to be the most exceptional location if you’re fond of snorkelling.

    When I think back to last year’s holiday in Bali, where accessing the beach was a 5-10 minute walk, the Beach Villa’s close proximity is a priceless gift that takes the headache out of a casual swim in the sea. It’s also worth noting at this point that you won’t be hassled by anyone trying to sell you anything whilst you lounge on the beach, which was a regular occurrence in Bali.

    Vilamendhoo Island Resort - Jacuzzi Water Villa

    Talking about the Water Villas would cause me too much mental anguish, as I have seen how truly amazing they are, situated in a prime location over the water in the lagoon. The Beach Villas might be close to the sea, but the Water Villas take things to the next level with a private sundeck and stairs into the sea. Who knows... maybe next time?

    I stayed in one of the many Beach Villas dotted around the coast of the island. My Beach Villa faced the south side, where you will see much boat activity, and if you gaze further into the distance you’ll see Dhangethi island (it’s the one with a massive satellite aerial sticking up). The upside of this location is its closeness to the reception, restaurant and bar. However, I would recommend the north side, with its calmer waters due to little to no boat activity and its slightly lusher sands (I swear it feels different!).

    The villa is clean, simple and very comfortable with its tropical decor, controllable air conditioning and king-size bed. At 55 sqm, you have a lot of space to move around, and at no time do you feel cramped. TV is an important factor wherever I go on holiday, for times when I just want to do nothing. Luckily, you are provided with a 32-inch cable TV with access to a wide range of channels. I found myself watching a film every night before bed. Now that’s what I call luxury!

    One thing you’ll find very different is the bathroom. It’s slightly outdoors, or “open air” as Vilamendhoo like to call it, so you’ll find yourself looking up at the sky, the surrounding trees or the mini garden that forms part of the bathroom during a shower or toilet time. It feels very liberating! You’re literally at one with nature as nature calls.

    Vilamendhoo Island Resort - Bathroom Garden

    The included shower gels, shampoos and creams are exceptional. I’m an absolute snob when it comes to resort/hotel-provided washes, as they are usually just dirt cheap, but I was pleasantly surprised by the naturally derived Healing Earth collection. So feel free to leave your own toiletries behind, as (like myself) you more than likely will not be using them.

    Dining - Food Glorious Food!

    If you’re an “all inclusive” customer you will not go hungry. It’s worth every penny - I’ll be talking more about this further down in this post.

    The main dining area is the Funama Restaurant housed on the south-side at the centre of the island, making it easily accessible wherever your hut is.

    On your first day entering the dining area, you will be given a table that will be your assigned seating throughout your stay, which I originally thought would be quite restrictive. I was wrong. It’s efficient, and you get a sense of familiarity with the staff who serve you. It’s at this point I have to give a shout-out to our waiter, Ilham. He provided first-class service, gave me a plaster for my cut toe (yes!), shared his wealth of knowledge of the resort and made recommendations throughout our stay. Miss that guy! He deserves a promotion!

    The variety of food on offer is astounding, spanning breakfast, lunch and dinner. It’s a buffet each day. Every evening the food on offer is based on a theme, such as (to name a few):

    • Maldivian
    • Indian
    • Italian
    • Chinese

    If you’re not too keen on an evening’s theme, there are still other dishes on offer that might be to your liking. It’s impossible not to be content with what’s on offer.

    The food is outstanding across all meal times, with a lot of variety. We were tempted to dine at the Asian Wok, but the quality of the buffet suited us perfectly. All the chefs really do deserve a lot of gratitude, just due to the sheer quality and quantity of the food. The service runs like clockwork. As soon as one dish is all consumed in the buffet area, it’s quickly refilled.

    The desserts are pure bite-sized art forms.

    Vilamendhoo Island Resort - Desserts

    From looking at a handful of other reviews, some have commented on the ambience of the dining hall and the dim lighting. Ignore the naysayers. I quite like it!

    Excursions

    Being my first time visiting Vilamendhoo Island and the Maldives, I didn’t actually partake in many excursions for the sole reason of feeling an overwhelming sense of contentment. You are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it.

    The handful of excursions I did get involved in consisted of:

    • Kayaking
    • Tour of Dhangethi Island
    • Spa (can I class this as an excursion?)
    • Sunset Punch Cruise

    Kayaking

    If you’re not 100% confident in going further out to sea or in extreme water sports, kayaking is the way to go. You can take your time and go as far out to sea as you feel able within the 45-minute session. Vilamendhoo Island is deceivingly small, and if (like myself) you decide to kayak your way around the island, this can be done in around 30 minutes, with time to spare for a little messing around.

    Tour of Dhangethi Island

    Dhangethi - Boat Hull

    Dhangethi is a 20-30 minute boat ride away - the island with the massive phone antenna you can see over the horizon from the south side of Vilamendhoo.

    The island is home to a small population of Maldivians, with its primary industries being fishing, construction, boat building and tourism. You will have the chance to see some of these areas for yourself during the guided tour of the island, along with the local schools and hospital. For me, the highlight was viewing the under-construction hull of a ship. An image doesn’t do justice to the sheer scale.

    At the time of the tour, I wasn’t very impressed with the island visit, probably because I was expecting something different, but my opinion has somewhat changed as I look back and adjust my expectations. It was a fascinating insight into the culture and history of the locals.

    The only thing I would say is the trip was 30 minutes too long. Myself and others found ourselves lingering near the dock waiting to leave.

    I wasn’t too interested in the souvenir shops, as there wasn’t anything of interest to me; however, I heard the goods were a lot cheaper compared to Vilamendhoo’s own shop.

    Sunset Punch Cruise

    The Sunset Punch Cruise is a late-afternoon boat trip that takes you further out to sea to immerse yourself in the tropical sunset with a glass of Vilamendhoo’s “special (non-alcoholic) punch”. You may even encounter some dolphins along the way. It was uncertain whether we’d actually see any dolphins, as the cruise from the previous day had no such encounter, so our expectations were kept to a minimum.

    I am glad to report that luck was on our side, and what we experienced wasn’t just a brief encounter, it was a close encounter of the third kind! Not only did we see a shoal of dolphins, but they were also swimming up close, interweaving around the bow of the ship for quite some time in all their splendour, to the amazement and awe of all on the cruise. Sometimes I question whether I really did see those dolphins and wasn’t on some form of hallucinogenic trip from the “special punch”. Thankfully, I can confirm I was of sound mental state, as I have photographic evidence that such an experience did exist. :-)

    Even though the likelihood of seeing the dolphins is a little hit or miss, I would still recommend this excursion just to have a chance of seeing these wonderful creatures in close proximity.

    Go All Inclusive!

    Vilamendhoo Island Resort - Asian Wok Restaurant At Night

    If you are able to get an “All Inclusive” package at a good price, just go for it! It’s freedom! You know from a cost perspective you will not have to spend a penny more, as long as you stay within the confines of the inclusive package. All in all, it’s good value, as you will also get some excursions free or at a discounted rate.

    As humans, we have a natural instinct of wanting more, and there is always a hint of disappointment when you find some red tape; in this case, “All Inclusive” has some limitations. I would list the limitations out, but they are so minor they aren’t really worth mentioning.

    It would be nice if the Asian Wok was part of the inclusive package or offered a few meals, but I can understand why they don’t offer this, as it lacks the table capacity of the Funama Restaurant.

    Final Thoughts

    Going to the Maldives is what I can only call a proper holiday. You have an excuse to just relax and take life at the pace you wish. You could spend all your days being very active: scuba diving, windsurfing and snorkelling through the crystal-clear waters. Or just sit back on the sun lounger with a couple of drinks to hand, letting the days pass by. Generally, when on holiday I find myself feeling the need to explore the local surroundings to make the most of it - not here. While away your days guilt-free!

    Vilamendhoo is a small island of paradise that puts a spell on you from the very first moment you step onto the dock to the sound of the welcoming beat of the drum.

  • There are times when what I want to express does not form into words, which is very much unlike me if I look back at my journey through blogging. I’m noticing more than ever that writer’s block is becoming a regular occurrence, leaving me lacking the energy to write my thoughts on subjects of interest.

    One blogs not just for others, but for themselves!

    I sometimes question if a post is worth the time it takes to write as it might not even be of interest to anyone. This in itself is not the right attitude. One blogs not just for others, but most importantly for themselves! This is what I have to keep telling myself during times of self doubt. I have always had the opinion that if I manage to help just one person from one of my posts, then it's truly a job well done!

    Every blogger has a process they go through before publishing a post. I have the problem of wanting to get a post out as quickly as humanly possible just to see the end result, to the detriment of quality. Over the last few years, this small site of mine has gained traction from both readers and search engines (the stats speak for themselves), and it is during this time I constantly fight to rein myself in to ensure the content I put out is up to the mark. Maybe I am putting too much pressure on the numbers (Google Analytics, AdSense, etc.) rather than the words.

    I look at my blogging heroes like Scott Hanselman, Troy Hunt, Mosh Hamedani and Iris Classon (to name a few) and at times ponder whether I will have the ability to churn out great posts on a regular basis with such ease and critical acclaim as they do. Do they even experience writer’s block? What is their writing process?

    As for my process, it’s changed somewhat. Writing has become more of a special event rather than an ad-hoc task; I now schedule time within the comfort of my new office setup (still need to blog about that!) a couple of times a week to write and plan future posts. In addition, before getting into the nitty-gritty detail, I’ve learnt to create a skeletal structure first to outline a post’s milestones. I’ve ended up doing this across the initial stages of many posts as I gather my thoughts.

    The new approach has also made writing less daunting and more manageable as I am not just focusing all my efforts on producing a single post alone. I literally have an Evernote notebook created specifically with a collection of post ideas. Some bear fruit, some don’t.

    I’d like to end this post on a positive note. The upside of this situation is that I know deep down writing is a release for me, and it’s not something I could ever grow tired of. Yes, it can be frustrating at times, but I will continue to write when I can and even more so when I can’t. It’ll show progress and how far I’ve come.

  • Making the transition of photos from physical to digital form can be quite an undertaking, depending on the volume of photos you have to work with. Traditional flat-bed scanning and Photoshop combinations aren’t really up to the task if you want a process that requires minimal manual intervention. It can all be quite cumbersome, from placing the photo correctly on the scanner to then carrying out any photo enhancements, cropping and exporting. Yes, you get a fantastic digital print, but it comes at a cost - time.

    If you are really serious about digitising a bulk load of photos, there are a couple of viable options:

    1. A photo-scanning service, where you post all the photos you wish to digitise. The costs can be relatively low (around 1p per photo), which is good if you have a set number of photos to digitise.
    2. Purchasing a photo scanner, where photos are scanned manually in a document-feeding process, which makes for a less intensive job.

    Due to the large number of photos that have accumulated over the years, I preferred to purchase a photo scanner. Sending off photos to a photo-scanning service didn’t seem viable and could prove quite costly. I also feared sending photos via post for which I do not have the original negatives. They could be lost in transit or handled incorrectly by the photo-scanning service. Not a risk I was willing to take. Photos are precious memories - a snapshot of history.

    The ideal photo scanner for an undertaking of this size needs to be sheet-fed, where the photos are fed through a scanning mechanism. There are quite a number of scanners of this type, but most are document scanners, which isn't what you want. From personal experience, I found document scanners lack the resolution required and their feeding mechanism can be quite rough on photos.

    I decided to go for the Plustek ePhoto Z300 as it seemed to fit the bill at a really good price (£170 at the time of writing).

    Initial Impressions

    The Plustek scanner doesn't look like any scanner you've seen before and almost looks otherworldly. Due to its upright position, it requires very little real estate on your desk compared to a flat-bed scanner.

    All functions are performed from the software you can download from the Plustek site or via the CD provided in the box. Once the software is installed and the scanner calibrated, you're good to go.

    Software

    I'm generally very reluctant to install software provided directly by hardware manufacturers, as it tends to encompass some form of bloatware, and prefer a minimal install of just the drivers. The software provided by Plustek is very minimal and does exactly what it says on the tin - no frills!

    Just to be sure you’re running the most up-to-date software, head over to the Plustek site.

    When your photos are scanned you’ll be presented with thumbnails in the interface where you can export a single or group selection of images to the following formats:

    • JPG
    • PDF
    • PNG
    • TIFF
    • Bitmap

    I exported all my scans to JPEG in high quality.

    There is a slight bugbear with the Mac OS version of the software: it doesn't seem to be as stable as its Windows counterpart. This only became apparent after installing the software on my Dad's computer running Windows. I noticed that once you have collected quite a few scans, the Mac OS version starts to lag and crash randomly, something that doesn't seem to occur on a Windows machine. This is very annoying after you've been scanning over 100 photos.

    The hardware specifications of both machines are high - both running i7 processors with 16GB of RAM - so the only anomaly is the software itself. A more stable Mac OS version of the scanning software would be welcome. In the meantime, I would recommend Mac users regularly save small batches of their scans.

    The Scanning Process

    The speed of scanning varies depending on the resolution set within the software, where you have either 300 or 600 dpi to choose from. I scanned all my prints at 600 dpi, which took around 15 seconds for each 4x6 photo, whereas a 300 dpi scan completed in a matter of seconds. I wanted the best resolution for my digitised photos and thought it was worth the extra scanning time to opt for 600 dpi.

    Even though the Plustek ePhoto Z300 is a manually fed scanner, I was concerned that I would have to carry out some form of post-editing in the software. By enabling "Auto crop and auto deskew" and "Apply quick fix" within the scan settings, all my photos were auto-corrected very well, even when I accidentally fed in a photo that wasn't quite level.

    To save time correcting the rotation of your images post-scan, always ensure you feed the photos in top first.

    Conclusion

    The Plustek scanner delivers very well on both price and performance. I have been pretty happy with the quality when scanning photos in either black and white or colour.

    The only thing that didn't come to mind at the time of purchase is that scanning is a very manual process, especially when churning through hundreds of photos. It would be great if Plustek made another version of the Z300 that encompassed an automatic feeding mechanism. There were times when I would feed in the next photo before the current photo had finished scanning, resulting in two photos scanned as one. This stopped being a regular occurrence once I got into the flow of the scanning process.

    Not having an automatic feeding mechanism is not at all a deal breaker at this price. You get a more than adequate photo scanner that makes the tedious job of digitising batches of photos somewhat surmountable.

  • Generate Code Name For Tags In Kentico

    With every Kentico release that goes by, I am always hopeful that they will somehow add code name support to Tags, where a unique text-based identifier is created, just like Categories have (via the CategoryName field). I find code names very useful when used in URLs as wildcards to filter a list of records, such as blog posts.

    On a blog listing page, you'll normally have the ability to filter by either category or tag, and to make things nice for SEO, we include them in our URLs, for example:

    • /Blog/Category/Kentico
    • /Blog/Tag/Kentico-Cloud

    This is easy to carry out when dealing with categories, as every category you create has a "CategoryName" field that strips out any special characters and is unique, fit for use in slug form within a URL! We're not so lucky when it comes to tags. In the past, to allow the user to filter my blog posts by tag, the URL was formatted to look something like this: /Blog/Tag/185-Kentico-Cloud, where the number denotes the Tag ID to be parsed by my code for querying.

    Not the nicest form.
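
    For reference, extracting the Tag ID from that URL segment looked something along these lines (an illustrative sketch, not the exact code from my site):

    // Illustrative only: pull the Tag ID out of a "185-Kentico-Cloud" URL segment.
    string tagSegment = "185-Kentico-Cloud";
    int tagId = int.Parse(tagSegment.Split('-')[0]); // 185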

    The only way around this was to customise how Kentico stores its tags on creation and update, without impacting the out-of-the-box functionality. This could be done by creating a new table that stores newly created tags in code name form and links back to Kentico's CMS_Tag table.

    Tag Code Name Table

    How you create your table is up to you: it could be created directly in the database, as a custom table, or within a module. I opted to create a new class under one of my existing custom modules that groups all site-wide functionality. I called the table SurinderBhomra_SiteTag.

    The SurinderBhomra_SiteTag table consists of the following columns:

    • SiteTagID (int)
    • SiteTagGuid (uniqueidentifier)
    • SiteTagLastModified (datetime)
    • TagID (int)
    • TagCodeName (nvarchar(200))

    If you create your table through Kentico, the first three columns will automatically be generated. The "TagID" column is our link back to the CMS_Tag table.

    Object and Document Events

    Whenever a tag is inserted or updated, we want to populate our new SiteTag table with this information. This can be done through ObjectEvents.

    using CMS;
    using CMS.DataEngine;
    using System.Linq;

    [assembly: RegisterModule(typeof(ObjectGlobalEvents))]

    public class ObjectGlobalEvents : Module
    {
        // Module class constructor, the system registers the module under the name "ObjectGlobalEvents"
        public ObjectGlobalEvents() : base("ObjectGlobalEvents")
        {
        }

        // Contains initialization code that is executed when the application starts
        protected override void OnInit()
        {
            base.OnInit();

            // Assigns custom handlers to the tag insert and update events
            ObjectEvents.Insert.After += ObjectEvents_Insert_After;
            ObjectEvents.Update.After += ObjectEvents_Update_After;
        }

        private void ObjectEvents_Insert_After(object sender, ObjectEventArgs e)
        {
            if (e.Object.TypeInfo.ObjectClassName.ClassNameEqualTo("cms.tag"))
            {
                SetSiteTag(e.Object.GetIntegerValue("TagID", 0), e.Object.GetStringValue("TagName", string.Empty));
            }
        }

        private void ObjectEvents_Update_After(object sender, ObjectEventArgs e)
        {
            if (e.Object.TypeInfo.ObjectClassName.ClassNameEqualTo("cms.tag"))
            {
                SetSiteTag(e.Object.GetIntegerValue("TagID", 0), e.Object.GetStringValue("TagName", string.Empty));
            }
        }

        /// <summary>
        /// Adds a new site tag, if it doesn't already exist.
        /// </summary>
        /// <param name="tagId">ID of the tag in CMS_Tag.</param>
        /// <param name="tagName">Display name of the tag.</param>
        private static void SetSiteTag(int tagId, string tagName)
        {
            SiteTagInfo siteTag = SiteTagInfoProvider.GetSiteTags()
                                                     .WhereEquals("TagID", tagId)
                                                     .TopN(1)
                                                     .FirstOrDefault();

            if (siteTag == null)
            {
                siteTag = new SiteTagInfo
                {
                    TagID = tagId,
                    TagCodeName = tagName.ToSlug() // The .ToSlug() extension method strips out all special characters via regex.
                };

                SiteTagInfoProvider.SetSiteTagInfo(siteTag);
            }
        }
    }
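
    The ToSlug() extension isn't included in the snippet above. As a rough sketch of what it looks like (the exact implementation may vary slightly), a regex strips out special characters and runs of whitespace are collapsed into hyphens:

    using System.Text.RegularExpressions;

    public static class StringExtensions
    {
        /// <summary>
        /// Converts text into a URL-friendly slug, e.g. "Kentico Cloud!" becomes "Kentico-Cloud".
        /// </summary>
        public static string ToSlug(this string text)
        {
            if (string.IsNullOrWhiteSpace(text))
            {
                return string.Empty;
            }

            // Remove every character that isn't alphanumeric, whitespace or a hyphen.
            string slug = Regex.Replace(text.Trim(), @"[^A-Za-z0-9\s-]", string.Empty);

            // Collapse runs of whitespace and hyphens into a single hyphen.
            return Regex.Replace(slug, @"[\s-]+", "-");
        }
    }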
    

    We also need to consider what happens when a document is deleted and carry out some cleanup to ensure tags no longer assigned to any document are removed from our new table:

    using CMS;
    using CMS.DataEngine;
    using CMS.DocumentEngine;
    using CMS.Taxonomy;
    using System.Linq;

    [assembly: RegisterModule(typeof(DocumentGlobalEvents))]

    public class DocumentGlobalEvents : Module
    {
        // Module class constructor, the system registers the module under the name "DocumentGlobalEvents"
        public DocumentGlobalEvents() : base("DocumentGlobalEvents")
        {
        }

        // Contains initialization code that is executed when the application starts
        protected override void OnInit()
        {
            base.OnInit();

            // Assigns a custom handler to the document delete event
            DocumentEvents.Delete.After += Document_Delete_After;
        }

        private void Document_Delete_After(object sender, DocumentEventArgs e)
        {
            DeleteSiteTags(e.Node);
        }

        /// <summary>
        /// Deletes Site Tags no longer linked to CMS_Tag.
        /// </summary>
        /// <param name="tnDoc">The deleted document.</param>
        private static void DeleteSiteTags(TreeNode tnDoc)
        {
            string docTags = tnDoc.GetStringValue("DocumentTags", string.Empty);

            if (!string.IsNullOrEmpty(docTags))
            {
                // DocumentTags is a comma-separated list, so trim surrounding whitespace from each tag.
                foreach (string tag in docTags.Split(',').Select(t => t.Trim()))
                {
                    TagInfo cmsTag = TagInfoProvider.GetTags()
                                                    .WhereEquals("TagName", tag)
                                                    .Column("TagCount")
                                                    .FirstOrDefault();

                    // If the tag is no longer stored, we can delete it from the SiteTag table.
                    if (cmsTag?.TagCount == null)
                    {
                        var siteTags = SiteTagInfoProvider.GetSiteTags()
                                                          .WhereEquals("TagCodeName", tag.ToSlug())
                                                          .TypedResult
                                                          .ToList();

                        foreach (SiteTagInfo siteTag in siteTags)
                        {
                            SiteTagInfoProvider.DeleteSiteTagInfo(siteTag);
                        }
                    }
                }
            }
        }
    }
    

    Displaying Tags In Page

    To return all tags linked to a page by its "DocumentID", a few SQL joins are needed to traverse the following tables:

    1. CMS_DocumentTag
    2. CMS_Tag
    3. SurinderBhomra_SiteTag

    Nothing Kentico's Object Query API can't handle.

    /// <summary>
    /// Gets all tags for a document.
    /// </summary>
    /// <param name="documentId">ID of the document.</param>
    /// <returns>A DataSet containing tag names and code names, or null if the document has none.</returns>
    public static DataSet GetDocumentTags(int documentId)
    {
        DataSet tags = DocumentTagInfoProvider.GetDocumentTags()
                                              .WhereID("DocumentID", documentId)
                                              .Source(src => src.Join<TagInfo>("CMS_Tag.TagID", "CMS_DocumentTag.TagID"))
                                              .Source(src => src.Join<SiteTagInfo>("SurinderBhomra_SiteTag.TagID", "CMS_DocumentTag.TagID"))
                                              .Columns("TagName", "TagCodeName")
                                              .Result;

        if (!DataHelper.DataSourceIsEmpty(tags))
        {
            return tags;
        }

        return null;
    }
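
    To put this to use, here's a rough usage sketch (variable names and markup are purely illustrative) that loops over the returned DataSet to build tag links in the /Blog/Tag/{TagCodeName} format:

    // Assumes: using System.Data; using System.Collections.Generic; using CMS.Helpers;
    DataSet tags = GetDocumentTags(documentId);
    List<string> tagLinks = new List<string>();

    if (!DataHelper.DataSourceIsEmpty(tags))
    {
        foreach (DataRow row in tags.Tables[0].Rows)
        {
            // "TagName" is the display text; "TagCodeName" is the sanitised slug used in the URL.
            string tagName = ValidationHelper.GetString(row["TagName"], string.Empty);
            string tagCodeName = ValidationHelper.GetString(row["TagCodeName"], string.Empty);

            tagLinks.Add($"<a href=\"/Blog/Tag/{tagCodeName}\">{tagName}</a>");
        }
    }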
    

    Conclusion

    We now have our tags working much like categories, with a display name field (CMS_Tag.TagName) and a code name (SurinderBhomra_SiteTag.TagCodeName). Going forward, any new tags that contain spaces or special characters will be sanitised and nicely presented when used in a URL. My blog demonstrates this functionality in use.