Hubspot CMS for Marketers Certified

Since around September last year, I've been involved in a lot of Hubspot projects at my place of work - Syndicut. It's the latest addition to the numerous other platforms we offer to clients.

The approach to developing websites in Hubspot is not something I'm used to coming from a programming background where you build everything custom using some form of server-side language. But I was surprised by what you can achieve within the platform.

Having spent months building sites using the Hubspot Markup Language (HubL), utilising a lot of the powerful marketing features and using the API to build a custom .NET Hubspot Connector, I thought it was time to attempt a certification focusing on the CMS aspect of Hubspot.

There are two CMS certifications:

  1. Hubspot CMS for Marketers
  2. Hubspot CMS for Developers

I decided to tackle the "CMS for Marketers" certification first as it mostly covers the theory of how you use Hubspot to create a user-friendly, high-performing website and leverage it alongside the Hubspot CRM. These are areas you can be quite shielded from if you're purely developing pages and modules. I thought it would be beneficial to expose myself to the marketing standpoint and gain an insight into how my development forms part of the bigger picture.

I'm happy to report I am now Hubspot CMS for Marketers certified.

Hubspot CMS for Marketers Certification

Adding Security Headers In Netlify

I normally like my last blog post of the year to be a year in review. In light of being in Tier 4 local restrictions, there isn't much to do during the festive period, unlike previous years. So I have decided to use this time to tinker with various tech stacks and work on my own site to keep me busy.

Whilst making some efficiency improvements under-the-hood to optimise my site's build and loading times, I randomly decided to check the security headers on securityheaders.com and to my surprise received a grade 'D'. When my site previously ran on the .NET Framework, I managed to lock things down enough to be graded an 'A'. I guess one of my misconceptions on moving to a statically-generated site was that there isn't a need for them. How wrong I was.

A dev.to post by Matt Nield explains why static sites need basic security headers in place:

As you add external services for customer reviews, contact forms, and eCommerce integration etc., we increase the number of possible vulnerabilities of the application. It may be true that your core data is only accessed when you rebuild your application, but all of those other features added can leave you, your customers, and your organisation exposed. Being frank, even if you don't add external services there is a risk. This risk is easily reduced using some basic security headers.

Setting security headers on a Netlify-hosted site couldn't be simpler. If, like me, your site is built using GatsbyJS, you simply need to add a _headers file in the /static directory containing the following header rules (note the rules are indented beneath the path they apply to):

/*
  X-Frame-Options: DENY
  X-XSS-Protection: 1; mode=block
  Referrer-Policy: no-referrer
  X-Content-Type-Options: nosniff
  Content-Security-Policy: base-uri 'self'; default-src 'self' https: ; script-src 'self' 'unsafe-inline' https: ; style-src 'self' 'unsafe-inline' https: blob: ; object-src 'none'; form-action 'self' https://*.twitter.com; font-src 'self' data: https: ; connect-src 'self' https: ; img-src 'self' data: https: ;
  Feature-Policy: geolocation 'self'; midi 'self'; sync-xhr 'self'; microphone 'self'; camera 'self'; magnetometer 'self'; gyroscope 'self'; fullscreen 'self'; payment 'self'

When adding a "Content-Security-Policy" header, be sure to thoroughly re-check your site as you may need to whitelist resources that are loaded from a different origin. For example, I had to make some tweaks specifically to the "Content-Security-Policy" to allow embedded Tweets to render correctly.
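To illustrate the general idea (this is illustrative only - the directives and origins you need will depend on your own policy and what the embed actually loads), whitelisting an embed usually means adding its origins to the relevant directives, e.g. for Twitter's widget:

  script-src 'self' 'unsafe-inline' https://platform.twitter.com ;
  frame-src 'self' https://platform.twitter.com https://syndication.twitter.com ;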

My site is now back to its 'A' grade glory!


My New Process For Dynamically Generating Social Share Images

I'll be the first to admit that I very rarely (if ever!) assign a nice pretty share image to any post that gets shared on social networks. Maybe it's because I hardly post what I write to social media in the first place! :-) Nevertheless, this isn't the right attitude. If I am really going to do this, then the whole process needs to be quick and render a share image that sets the tone and will hopefully entice a potential reader to click on my post.

I started delving into how my favourite developer site, dev.to, manages to create these really simple text-based share images dynamically. They have a pretty good setup, as they've managed to generate a share image that neatly contains all the relevant post-related information, such as:

  • Post title
  • Date
  • Author
  • Related Tech Stack Icons

For those who are as nosey as I am and want to know how dev.to achieves this, they have kindly written the following post - How dev.to dynamically generates social images.

Since my website is built using the Gatsby framework, I preferred to use a local process to dynamically generate a social image without relying on another third-party service. What's the point in using a third-party service to do everything for you when it's more fun to build something yourself?

I envisaged a process that would allow me to pass the URL of one of my blog posts to a script, which in turn would render a social image containing basic information about that post.

Intro Into Puppeteer

Whilst doing some Googling, one tool kept cropping up in different forms and uses - Puppeteer. Puppeteer is a Node.js library maintained by Google Chrome's development team that enables us to control any Chrome DevTools-based browser through scripts. These scripts can programmatically execute a variety of actions that you would generally carry out manually in a browser.

To give you a bit of an insight into the actions Puppeteer can carry out, check out this Github repo. Here you can see Puppeteer being used for testing, scraping and automating tasks on web pages - a very useful tool. The part I spent most of my time on was its webpage screenshot feature.

To use Puppeteer, you will first need to install the library package, of which two options are available:

  • Puppeteer Core
  • Puppeteer

Puppeteer Core is the lighter-weight package that can interact with any DevTools-based browser you already have installed.

npm install puppeteer-core

You then have the full package that also installs the most recent version of Chromium within the node_modules directory of your project.

npm install puppeteer

I opted for the full package just to ensure I have the most compatible version of Chromium for running Puppeteer.
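For completeness, if you do opt for Puppeteer Core, you need to point it at a browser you already have installed, since nothing gets downloaded for you. A minimal sketch (the executable path is just an example and will differ per machine):

const puppeteer = require('puppeteer-core');

(async () => {
  // puppeteer-core does not bundle Chromium, so tell it where to find a browser.
  const browser = await puppeteer.launch({
    executablePath: 'C:/Program Files/Google/Chrome/Application/chrome.exe'
  });

  const page = await browser.newPage();
  await page.goto('https://www.google.com');
  await browser.close();
})();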

Puppeteer Webpage Screenshot Script

Now that we have Puppeteer installed, I wrote a script and added it to the root of my Gatsby site. The script carries out the following:

  • Accepts a single argument containing the URL of a webpage. This will be the page containing information about my blog post in a share format - all will become clear in the next section.
  • Screenshots a cropped version of the webpage - in this case 840px x 420px, the exact size of my share image.
  • Uses the page name in the URL as the image file name.
  • Stores the screenshot in my "Social Share" media directory.

const puppeteer = require('puppeteer');

// If an argument is not provided containing a website URL, end the task.
if (process.argv.length !== 3) {
  console.log("Please provide a single argument containing a website URL.");
  return;
}

const pageUrl = process.argv[2];

const options = {
  // Save the screenshot using the last segment of the URL as the file name.
  path: `./static/media/Blog/Social Share/${pageUrl.substring(pageUrl.lastIndexOf('/') + 1)}.jpg`,
  fullPage: false,
  clip: {
    x: 0,
    y: 0,
    width: 840,
    height: 420
  }
};

(async () => {
  // Launch a visible browser window (set headless: true to run it in the background).
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800, deviceScaleFactor: 1.5 });
  await page.goto(pageUrl);
  await page.screenshot(options);
  await browser.close();
})();

The script can be run as so:

node puppeteer-screenshot.js http://localhost:8000/socialcard/Blog/2020/07/25/Using-Instagram-API-To-Output-Profile-Photos-In-ASPNET-2020-Edition

I made an addition to my Gatsby project that generates a social share page for every blog post, where the URL path is prefixed with /socialcard. These share pages are only generated when running in development mode.
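For anyone curious, the gist of that addition looks something like the snippet below. This is a rough sketch rather than my exact implementation - the GraphQL fields and template path are placeholders and will differ depending on how your posts are sourced:

// gatsby-node.js (sketch)
const path = require("path");

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  // Only generate the /socialcard pages when running in development mode.
  if (process.env.NODE_ENV !== "development") {
    return;
  }

  const result = await graphql(`
    {
      allMarkdownRemark {
        nodes {
          fields {
            slug
          }
        }
      }
    }
  `);

  result.data.allMarkdownRemark.nodes.forEach((node) => {
    createPage({
      path: `/socialcard${node.fields.slug}`,
      component: path.resolve("./src/templates/social-card.js"),
      context: { slug: node.fields.slug },
    });
  });
};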

Social Share Page

Now that we have our Puppeteer script, all that's left is to create a nice looking visual for Puppeteer to convert into an image. I wanted some form of automation where blog post information is automatically populated.

I’m starting off with a very simple layout taking some inspiration from dev.to and outputting the following information:

  • Title
  • Date
  • Tags
  • Read time

Working with HTML and CSS isn’t exactly my forte. Luckily for me, I just needed to do enough to make the share image look presentable.

Social Card Page

You can view the HTML and CSS on JSFiddle. Feel free to update and make it better! If you do make any improvements, update the JSFiddle and let me know!

Next Steps

I plan on adding some additional functionality allowing a blog post teaser image (if one is added) to be used as a background and make things look a little more interesting. At the moment the share image is very plain. As you can tell, I keep things really simple as design isn’t my strongest area. :-)

If all goes to plan, when I share this post to Twitter you should see my newly generated share image.

Delving Into The World of Gatsby and Static Site Generators

I have had a growing interest in static-site generator architecture ever since I read Paul Stamatiou's enlightening post about how he built his website. I am always intrigued to know what goes on behind the scenes of someone's website, especially bloggers and the technology stack they use.

Paul built his website using Jekyll. In his post, he explains his reasoning as to why he decided to go down this particular avenue, which, to my great surprise, resonated with me. In the past, I always felt the static-site generator architecture was too restrictive and, coming from a .NET background, I felt comfortable knowing my website was built using some form of server-side code connected to a database, allowing me infinite possibilities. Building a static site just seemed like a backwards approach to me. Paul's opening few paragraphs changed my perception:

..having my website use a static site generator for a few reasons...I did not like dealing with a dynamic website that relied on a typical LAMP stack. Having a database meant that MySQL database backups was mission critical.. and testing them too. Losing an entire blog because of a corrupt database is no fun...

...I plan to keep my site online for decades to come. Keeping my articles in static files makes that easy. And if I ever want to move to another static site generator, porting the files over to another templating system won't be as much of a headache as dealing with a database migration.

And then it hit me. It all made perfect sense!

Enter The Static Site Generator Platform

I'll admit, I've come late to the static site party and never gave it enough thought, so I decided to pick up the slack and research different static-site generator frameworks, including:

  • Jekyll
  • Hugo
  • Gatsby

Jekyll runs on Ruby, Hugo on Go (created by Google) and Gatsby on React. After some tinkering with each, I opted to invest my time in learning Gatsby. I was very tempted by Hugo (even if it meant learning Go) as it is more stable and requires less build time, which is important to consider for larger websites, but it fundamentally lacks an extensive plugin ecosystem.

Static Generator of Choice: Gatsby

Gatsby comes across as a mature platform offering a wide variety of useful plugins and tools to enhance the application build. I'm already familiar with coding in React from some React Native work I did in the past, which I haven't had much chance to use since. Being built on React, Gatsby gave me an opportunity to dust off the cobwebs and improve both my React and (in the process) JavaScript skillset.


I was surprised by just how quickly I managed to get up and running. There is very little to configure compared to working with content-management platforms. In fact, I decided to create a Gatsby version of this very site. Within a matter of days, I was able to replicate the following website functionality:

  • Listing blog posts.
  • Pagination.
  • Filtering by category and tag.
  • SEO - managing page titles, description, open-graph tags, etc.

There is such a wealth of information and support online to help you along.

I am very tempted to move over to Gatsby.

When to use Static or Dynamic?

A static site generator isn't suited to every web application scenario. It's more suited to small/medium-sized sites where there isn't a requirement for complex integrations. It works best with static content that doesn't need to change based on user interaction.

The only thing that comes into question is the build time when you have pages of content in their thousands. Take Gatsby, for example...

I read about one site containing around 6,000 posts that resulted in a build time of 3 minutes. The build time can vary based on the environment Gatsby is running on and the quality of the build. I personally try to ensure the best-case build time by:

  • Using sufficiently spec'd hardware - both my laptop and the hosting environment.
  • Keeping the application lean by utilising minimal plugins.
  • Writing efficient JavaScript.
  • Reusing similar GraphQL queries where the same data is being requested more than once in different components, pages and views (see the fragment sketch below).
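On the GraphQL point, Gatsby fragments are a handy way to avoid repeating the same field selections across queries. A small sketch (the type and field names are placeholders and depend on how your content is sourced):

import { graphql } from "gatsby";

// Exporting a fragment makes it available to every GraphQL query in the project.
export const postFields = graphql`
  fragment PostFields on MarkdownRemark {
    fields {
      slug
    }
    frontmatter {
      title
      date
    }
  }
`;

// Any page query can then reuse it:
// allMarkdownRemark { nodes { ...PostFields } }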

We have to accept that the more pages a website has, the slower the build time will be. Hugo deserves an honourable mention here, as its build speed beats the competition hands down.

Static sites have their place in any project as long as you work within the confines of the framework. If you have a feeling that your next project will at some point (or immediately) require some form of fanciful integration, dynamic is the way to go. Dynamic gives you unlimited possibilities and will always be the safer option - something static will never measure up to.

The main strengths of static sites are that they're secure and perform well in Lighthouse scoring, which can potentially result in favourable search engine rankings.

Avenues for Adding Content

The very cool thing is that you have the ability to hook up your content via two options:

  1. Markdown files
  2. Headless CMS

Markdown is such a pleasant and efficient way to write content. It’s all just plain text written with the help of a simplified notation that is then transformed into HTML. The crucial benefit of writing in markdown is its portability and clean output. If in the future I choose to jump to a different static framework, it’s just a copy and paste job.

A more acceptable solution for clients is to integrate with a headless CMS, where more familiar rich-text content editing and media storage are available to hand.

You can also create custom-built pages without having to worry about the data layer, for example, landing pages.

Final Thoughts

I love Gatsby and it's been a very long time since I have been this excited by a different approach to developing websites. I am very tempted to make the move as this framework is made for sites like mine, providing I can find solutions to the areas of Gatsby where I currently lack knowledge, such as:

  • Making URLs case-insensitive.
  • 301 redirects.
  • Serving different responsive images within the post content. I understand Gatsby does this at templating-level but cannot currently see a suitable approach for media housed inside content.

I'm sure the above points are achievable and, as I have made quite swift progress on replicating my site in Gatsby, if all goes to plan I could go the whole hog - meaning I wouldn't serve content from any form of content-management system and would cement myself in Gatsby.

At one point I was planning on moving over to a headless CMS, such as Kontent or Prismic. That plan was swiftly scrapped when there didn't seem to be an avenue for migrating my existing content unless a Business or Professional plan was purchased, which came at a high cost.

I will be documenting my progress in follow up posts. So watch this space!

Export Changes Between Two Git Commits In SourceTree

I should start off by saying how much I love TortoiseGit and how it has always been my reliable source control medium, even though it's a bit of a nightmare to set up initially to work alongside Bitbucket. But due to a new development environment for an external project, I am kinda forced to use preinstalled Git programs:

  • SourceTree
  • Git Bash

I am more inclined to use a GUI when interacting with my repositories and use the command line when necessary.

One thing that has been missing from SourceTree ever since it was released is the ability to export changes over multiple commits. I was hoping that after many years this feature would have been incorporated. Alas, no. After Googling around, I came across a StackOverflow post showing that the only way to export changes in Sourcetree based on multiple commits is by using a combination of the git archive and git diff commands:

git archive --output=archived_changes.zip HEAD $(git diff --diff-filter=ACMRTUXB --name-only hash1 hash2)

This can be run directly in the Terminal window for a repository in Sourcetree. The "hash1" and "hash2" values are the long 40-character commit IDs.

The StackOverflow post helped me achieve what I needed, but as a learning exercise I want to take things a step further and understand what the archive command is actually doing. So let's dissect the command into manageable chunks.

Part 1

git archive --output=archived_changes.zip HEAD

This creates an archive of the whole repository as a zip file. The next parts narrow it down to only the files that changed between the commits we need.

Part 2

git diff --diff-filter=ACMRTUXB

The git diff command shows changes between commits. The filter option gives us more flexibility to select only the files that are:

  • A - Added
  • C - Copied
  • D - Deleted
  • M - Modified
  • R - Renamed
  • T - Type (mode) changed
  • U - Unmerged
  • X - Unknown
  • B - Pairing broken

Part 3

--name-only hash1 hash2

The second part of the git diff command uses the "name-only" option, which just lists the files that have changed over multiple commits based on the hash values entered.

Part 4

The git diff command needs to be wrapped in $(...) so that its output - the list of changed files - is passed as a parameter to the git archive command.

Duplicate Content: The Impact of Canonical URLs

Being a web developer, I am trying to become savvier when it comes to factoring in additional SEO practices, which (in my view) should be considered compulsory.

Ever since Google updated its Search Console (formerly known as Webmaster Tools), it has opened my eyes to how my site is performing in greater detail, especially the pages Google deems not worthy of indexing. I started becoming more aware of this last August, when I wrote a post about attempting to reduce the number of "Crawled - Currently not indexed" pages on my site. Through trial and error, I managed to find a way to reduce the number of excluded page links.

The area I have now become fixated on is the sheer number of pages being classed as "Duplicate without user-selected canonical". Google describes these pages as:

This page has duplicates, none of which is marked canonical. We think this page is not the canonical one. You should explicitly mark the canonical for this page. Inspecting this URL should show the Google-selected canonical URL.

In simplistic terms, Google has detected pages that can be accessed by different URLs with either the same or similar content. In my case, this is the result of many years of unintentional neglect whilst migrating my site through different platforms and URL structures during the infancy of my online presence.

Google Search Console has marked around 240 links as duplicates due to the following two reasons:

  1. Pages can be accessed with or without a ".aspx" extension.
  2. Paginated content.

I was surprised to see paginated content classed as duplicate content, as I was always under the impression that this would never be the case. After all, the listed content is different and I have ensured that the page titles differ when content is filtered by either category or tag. However, if a site consists of duplicate or similar content, it is considered a negative in the eyes of a search engine.

Two weeks ago I added canonical tagging across my site, as I was intrigued to see if there would be any considerable change to how Google crawls my site. Would it make my site easier to crawl and aid Google in understanding the page structure?
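For context, a canonical tag is just a link element placed in the head of a page that points to the preferred version of its URL, for example (URL purely illustrative):

<link rel="canonical" href="https://www.example.com/blog/my-post" />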

Surprising Outcome

I think I was quite naive about how my Search Console Coverage statistics would shift post-canonicalisation. I was just expecting the number of pages classed as "Duplicate without user-selected canonical" to decrease, which was the case; I wasn't expecting anything more. On further investigation, it was interesting to see an overall positive change across all other coverage areas.

Here's the full breakdown:

  • Duplicate without user-selected canonical: Reduced by 10 pages
  • Crawled - Currently not indexed: Reduced by 65 pages
  • Crawl anomaly: Reduced by 20 pages
  • Valid: Increased by 60 pages

The change in figures may not look that impressive, but we have to remember this report covers only the two weeks since implementing canonical tags. All positive so far, and I'm expecting to see further improvements over the coming weeks.

Conclusion

Canonical markup can often be overlooked, both in its implementation and in its importance when it comes to SEO. After all, I still see sites that don't use it, as the emphasis is placed on other areas that require more effort to meet Google's search criteria, such as building for mobile, structured data and performance. So it's understandable why canonical tags can be missed.

If you are in a similar position to me, where you are adding canonical markup to an existing site, it's really important to spend the time to set the original source page URL correctly the first time, as an incorrect implementation can lead to issues.

Even though my Search Console stats have improved, the jury's still out as to whether this translates into better site visibility across search engines. But anything that helps search engines and visitors understand your content source can only be beneficial.

Switch Branches In TortoiseGit

My day-to-day version control system is Bitbucket. I never got on with their own Git GUI offering - Sourcetree. I always found TortoiseGit much more intuitive and a more flexible way to interact with my git repositories. If anyone can change my opinion on this, I am all ears!

I work with large projects that are around a couple of hundred megabytes in size, and if I were to clone the same project over different branches, it would use up quite a bit of hard disk space. I like to quickly switch to my master branch after carrying out a merge for testing before carrying out a release.

Luckily TortoiseGit makes switching branches a cinch in just a few clicks:

  • Right-click in your repository
  • Go to TortoiseGit context menu
  • Click Switch/Checkout
  • Select the branch you wish to switch to and select "Overwrite working tree changes (force)"

TortoiseGit Switch Branches

Selecting the "Overwrite working tree changes (force)" tick box is important to ensure all files in your working directory is overwritten with the files directly from your branch. We do not want remnants of files left from the branch we had previously switched from kicking around.

Reducing The Number of 'Crawled - Currently not indexed' Pages

Every few weeks, I check over the health of my site through Google Search Console (aka Webmaster Tools) and Analytics to see how Google is indexing my site and look into potential issues that could affect the click-through rate.

Over the years the content of my site has grown steadily and, as it stands, it consists of 250 published blog posts. When you take into consideration the other pages Google indexes - consisting of filter URLs based on grouping posts by tag or category - the number of links on my site increases considerably. It's at the discretion of Google's search algorithm whether it includes these links for indexing.

Last month, I decided to scrutinise the Search Console Index Coverage report in great detail just to see if there were any improvements I could make to alleviate some minor issues. What I wasn't expecting to see was the large volume of links marked as "Crawled - Currently not indexed".

Crawled Currently Not Indexed - 225 Pages

Wow! 225 affected pages! What does "Crawled - Currently not indexed" mean? According to Google:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

Pretty self-explanatory, but not much guidance on how to lessen the number of links that aren't indexed. From my experience, the best place to start is to look at the list of links being excluded and form a judgement based on the page content of those links. Unfortunately, there isn't an exact science. It's a process of trial and error.

Let's take a look at the links from my own 225 excluded pages:

Crawled Currently Not Indexed - Non Indexed Links

On initial inspection, I could see that the majority of the URLs consisted of links where users can filter posts by either category or tag. I could see nothing content-wise when inspecting these pages that gave a conclusive reason for index exclusion. However, what I did notice is that these links were automatically found by Google when the site was spidered. The sitemap I submitted in the Search Console only lists blog posts and content pages.

This led me to believe a possible solution would be to create a separate sitemap consisting purely of links for these categories and tags. I called it metasitemap.xml. Whenever I add a post, the sitemap's "lastmod" date gets updated, just like the pages listed in the default sitemap.
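The structure is nothing more than the standard sitemap protocol, something along these lines (URLs illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/category/development</loc>
    <lastmod>2019-08-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/tag/seo</loc>
    <lastmod>2019-08-01</lastmod>
  </url>
</urlset>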

I created and submitted this new sitemap around mid-July, and it wasn't until four days ago that the improvement was reported within the Search Console. The number of non-indexed pages was reduced to 58. That's a 74% reduction!

Crawled Currently Not Indexed - 58 Pages

Conclusion

As I stated above, there isn't an exact science for reducing the number of non-indexed pages as every site is different. Supplementing my site with an additional sitemap just happened to alleviate my issue. But that is not to say copying this approach won't help you. Just ensure you look into the list of excluded links for any patterns.

I still have some work to do and the next thing on my list is to implement canonical tags on all my pages, since I have become aware I have duplicate content on different URLs - remnants from when I moved blogging platforms.

If anyone has any other suggestions or solutions that worked for them, please leave a comment.

My First Butter CMS JavaScript Implementation

For one of my side projects, I was asked to use Butter CMS to allow for basic blog integration using JavaScript. I had never heard of or used Butter CMS before and was intrigued to know more about the platform.

Butter CMS is another headless CMS variant that allows a developer to utilise API endpoints to push content to an application via a range of approaches. So nothing new here. Just like any headless CMS, the proof is in the pudding when it comes to the following factors:

  • Quality of features
  • Ease of integration
  • Price points
  • Quality of documentation

I haven't had a chance to properly look into everything Butter CMS has to offer, but from what I have seen while working on the requirements for this side project, I was pleasantly surprised. I found it really easy to get set up with a minimal amount of fuss! For this project I used Butter CMS's Blog Engine package, which does exactly what it says on the tin. All the fields you need for writing blog posts are already provided.

JavaScript Code

My JavaScript implementation is pretty basic and provides the following functionality:

  • Outputs a list of posts consisting of title, date and summary text
  • Pagination
  • Outputs a single blog post

All key functionality is derived from the "BEButterCMS" JavaScript object:

/*****************************************************/
/*                    Butter CMS                     */
/*****************************************************/
var BEButterCMS =
{
    ButterCmsObj: null,

    "Init": function () {
        // Initiate Butter CMS.
        this.ButterCmsObj = new ButterCmsBlogData();
        this.ButterCmsObj.Init();
    },
    "GetBlogPosts": function () {
        BEButterCMS.ButterCmsObj.GetBlogPosts(1);
    },
    "GetSinglePost": function (slug) {
        BEButterCMS.ButterCmsObj.GetSinglePost(slug);
    }
};

/*****************************************************/
/*                 Butter CMS Data                   */
/*****************************************************/
function ButterCmsBlogData() {
    var apiKey = "<Enter API Key>",
        baseUrl = "/",
        butterInstance = null,
        $blogListingContainer = $("#posts"),
        $blogPostContainer = $("#post-individual"),
        pageSize = 10;

    // Initialise of the ButterCMSData object get the data.
    this.Init = function () {
        getCMSInstance();
    };

    // Returns a list of blog posts.
    this.GetBlogPosts = function (pageNo) {
        // The blog listing container needs to be cleared before any new markup is pushed.
        // For example when the next page of data is requested.
        $blogListingContainer.empty();

        // Request blog posts.
        butterInstance.post.list({ page: pageNo, page_size: pageSize }).then(function (resp) {
            var body = resp.data,
                blogPostData = {
                    posts: body.data,
                    next_page: body.meta.next_page,
                    previous_page: body.meta.previous_page
                };

            for (var i = 0; i < blogPostData.posts.length; i++) {
                $blogListingContainer.append(blogPostListItem(blogPostData.posts[i]));
            }

            //----------BEGIN: Pagination--------------//

            // Build the pagination links inside a single wrapper element. Appending
            // "<div>" and "</div>" separately would not wrap the links, as jQuery
            // parses each append into a complete element.
            var $pagination = $("<div>");

            if (blogPostData.previous_page) {
                $pagination.append("<a class=\"page-nav\" href=\"#\" data-pageno=" + blogPostData.previous_page + ">Previous Page</a>");
            }

            if (blogPostData.next_page) {
                $pagination.append("<a class=\"page-nav\" href=\"#\" data-pageno=" + blogPostData.next_page + ">Next Page</a>");
            }

            $blogListingContainer.append($pagination);

            paginationOnClick();

            //----------END: Pagination--------------//
        });
    };

    // Retrieves a single blog post based on the current URL of the page if a slug has not been provided.
    this.GetSinglePost = function (slug) {
        var currentPath = location.pathname,
            blogSlug = slug == null ? currentPath.match(/([^\/]*)\/*$/)[1] : slug;

        butterInstance.post.retrieve(blogSlug).then(function (resp) {
            var post = resp.data.data;

            $blogPostContainer.append(blogPost(post));
        });
    };

    // Renders the HTML markup and fields for a single post.
    function blogPost(post) {
        var html = "";

        html = "<article>";

        html += "<h1>" + post.title + "</h1>";
        html += "<div>" + blogPostDateFormat(post.created) + "</div>";
        html += "<div>" + post.body + "</div>";
        
        html += "</article>";

        return html;
    }

    // Renders the HTML markup and fields when listing out blog posts.
    function blogPostListItem(post) {
        var html = "";

        html = "<h2><a href=" + baseUrl + post.url + ">" + post.title + "</a></h2>";
        html += "<div>" + blogPostDateFormat(post.created) + "</div>";
        html += "<p>" + post.summary + "</p>";

        if (post.featured_image) {
            html += "<img src=" + post.featured_image + " />";
        }

        return html;
    }

    // Set click event for previous/next pagination buttons and reload the current data.
    function paginationOnClick() {
        $(".page-nav").on("click", function (e) {
            e.preventDefault();
            var pageNo = $(this).data("pageno"),
                butterCmsObj = new ButterCmsBlogData();

            butterCmsObj.Init();
            butterCmsObj.GetBlogPosts(pageNo);
        });
    }

    // Format the blog post date to dd/MM/yyyy HH:mm
    function blogPostDateFormat(date) {
        var dateObj = new Date(date);

        return [dateObj.getDate().padLeft(), (dateObj.getMonth() + 1).padLeft(), dateObj.getFullYear()].join('/') + ' ' + [dateObj.getHours().padLeft(), dateObj.getMinutes().padLeft()].join(':');
    }

    // Get instance of Butter CMS on initialise to make one call.
    function getCMSInstance() {
        butterInstance = new Butter(apiKey);
    }
}

// Set a prototype for padding numerical values.
Number.prototype.padLeft = function (base, chr) {
    var len = (String(base || 10).length - String(this).length) + 1;

    return len > 0 ? new Array(len).join(chr || '0') + this : this;
};

To get a list of blog posts:

// Initiate Butter CMS.
BEButterCMS.Init();

// Get all blog posts.
BEButterCMS.GetBlogPosts();

To get a single blog post, you will need to pass in the slug of the blog post via your own approach:

// Initiate Butter CMS.
BEButterCMS.Init();

// Get single blog post.
BEButterCMS.GetSinglePost(postSlug);
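For completeness, the script assumes a page with jQuery, the Butter CMS JavaScript SDK (which exposes the global Butter object) and the two container elements already in place. A bare-bones sketch (the script paths are placeholders):

<!-- Containers the script renders into. -->
<div id="posts"></div>
<div id="post-individual"></div>

<!-- jQuery, the Butter CMS SDK and the code above. -->
<script src="/scripts/jquery.min.js"></script>
<script src="/scripts/buttercms.min.js"></script>
<script src="/scripts/butter-blog.js"></script>

<script>
  // Initiate Butter CMS and output the blog listing.
  BEButterCMS.Init();
  BEButterCMS.GetBlogPosts();
</script>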

Check Your Website Headers People!

A website can tell the public a lot about you, from the things you want people to see to the things you probably would not. HTTP headers can divulge things about your website that you wouldn't necessarily want to make public, and it's up to the individual to decide which headers they're willing to expose. What I would recommend is to at least analyse any site prior to moving it to a production environment.

Why, all of a sudden, am I talking about questioning your website's HTTP headers?

It was only by chance, when perusing StackOverflow, that I came across a question about securing HTTP headers and was directed to a site called securityheaders.io. I immediately entered this very site for scanning, thinking it would fare quite well. But boy oh boy, was I wrong!

Security Headers (Before)

Based on this result, does this make my website vulnerable? To a certain extent, yes. By default you're exposing some key information to potential hackers about how your website is built. For example, here is a simple list of what the HTTP headers returned from the server can disclose about your site:

  • Web server
  • Framework version
  • Cache handling
  • Cross-site scripting access
  • Referrer policies

Now, based on that list alone, what HTTP headers would you hide? Having had my eyes opened by the report generated by securityheaders.io, as a minimum I would hide anything that shows what technology, framework and server platform I am using. If there happens to be an exploit for the very server or technology you are using, we don't want the whole world to know, especially if you happen to be hosting a high-traffic website.
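On an ASP.NET site, for example, a couple of the more obvious giveaways can be suppressed from web.config. A minimal sketch (your setup may need more, such as removing the "Server" header at IIS level):

<configuration>
  <system.web>
    <!-- Stops the X-AspNet-Version header from being emitted. -->
    <httpRuntime enableVersionHeader="false" />
  </system.web>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <!-- Removes the X-Powered-By header added by IIS. -->
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>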

I decided to correct all the issues highlighted by securityheaders.io and spent additional time obfuscating some other headers. Now I can proudly say I've passed. There is just one blemish against the report, to do with the "Content-Security-Policy" header, which defines approved sources of content that the browser may load.

Security Headers (After)

I have been tweaking the rules for this header and I'll be honest when I say it shafted the administration dashboard of the content management system I use for my site - Kentico CMS. So before I reinstate the header, I need a little more time tweaking.

Another great site to use to analyse the security of your site (.NET sites only) is ASafaWeb, which scans for common configuration vulnerabilities.
