Posts written in 2022.

  • Published on
    10 min read

    Year In Review - 2022


    I have been struggling to write my traditional end-of-year post as I am filled with a sense of melancholy after catching the dreaded COVID and now using the Christmas holidays to get some respite.

The Christmas spirit of 2022 has passed me by and, as a result, I have been experiencing "Scrooge Syndrome". This is quite possibly due to COVID ruining my plans, or maybe because I am just getting too old for the festive holidays and my child-like sense of wonder is being sucked out of me with each year that passes.

I am hoping that as I recover into the new year, my festive spirit will be reignited and normal service will resume.

    Anyway, without further ado, let the end-of-year post begin.

    2022 In Words/Phrases

    New site, Tailwind CSS, Wedding (part 2), India, Cotswolds, Tanking UK Economy, Netlify, Hubspot, Lego Ford Mustang, Stocks and Shares, Investments, Pixel 7 Pro, Writing, Spousal Visa, COVID Positive, Lots of writing, Anime


A few months into the year, organic search stats dipped considerably, which led me to think that some of the content I was posting was not as relevant as I'd hoped. At this point, I thought I'd hit the glass ceiling... something I've since come to terms with after noticing the year-on-year increase in visitors and organic searches.

The dip continued for 3-4 months, but I carried on as normal and was all the more determined to stay the course. Out of all my years of blogging, this was the year I felt most inspired and truly at ease with writing, and (thankfully!) I managed to recoup the loss of traction. From quarter three of the year onwards, there was a substantial uptick in daily site visitors.

Blog posts about Google Apps Script, ActiveCampaign and Azure were the top-performing pages and helped drive traffic.

I would say the most rewarding takeaway from this year is noticing increased readership of recently written posts compared to older ones from previous years.

    2021/2022 Comparison:

    • Users: +14.86%
    • Page Views: +14.49%
    • New Users: +15.33%
    • Bounce Rate: -0.51%
    • Search Console Total Clicks: +97.18%
    • Search Console Impressions: +109%
    • Search Console Page Position: -2.4%

    Record-breaking Number of Posts

    For the first time in my years of blogging, I have managed to publish the highest number of posts within a year - 29 (including this one)! I cannot say what the reason for this could be as this year has been very manic.

Next year, I think I will slow down a bit and write fewer posts with longer, more in-depth content.

It was satisfying to see increased viewership on the most recently written posts over 2022 and 2021.

    Caught The COVID

I caught COVID in mid-December for what I believed was the second time, after encountering it at the start of 2020 (back when tests weren't even a thing). How I felt the first time was completely different from the second time round, which made me question whether my most recent encounter was in fact the first. I felt dreadful for a longer period compared to my experience back in 2020.

    I am coasting through the Christmas holidays in recovery mode and trying to enjoy the simpler things life has to offer... when you feel like doing absolutely nothing.

    New Website!

At the end of last year, I was learning the Tailwind CSS framework to see if it could be used to build a template for this website. It didn't take long for me to get the hang of building pages, thanks to the wide variety of pre-built components available online for me to tinker with and the easy-to-follow documentation.

If it wasn't for the Tailwind CSS framework, I don't think I would have had the patience or the skill to build my website template using native HTML and CSS alone. I was surprised at the quick turnaround time. By July, the site refresh, along with some updates under the hood, had been released.

    I am just thankful this finally happened! After many "year in review" posts where I've repeatedly stated the plan to redevelop my site (and failed to do so!), this was the year where I managed to make this happen. The words I've written have never looked so good!

There are still some minor tweaks I would like to carry out next year, such as adding a dark mode feature and re-integrating Algolia search.

    Further details on my site rebuild can be read here: New Website Built with Tailwind CSS and GatsbyJS.

    Stocks and Investments

Ever since getting married, making my money work harder has been at the forefront of my mind. I read that if you're not making money whilst you sleep, or do not have an additional revenue stream, then you'll never truly have financial freedom. Stocks and investments were something I initially got exposure to through the Plum savings app I started using in 2020, where it would automatically make investments on my behalf based on a list of predefined sector portfolios.

    This year I decided to break away from the Plum app-managed investments and start building my own stock portfolio. It took me a little while to get the confidence to do this and my first few trades were small in value just to test the waters.

So far it has proven to be an enjoyable learning curve that has borne fruit. I plan to write more in the future on how my investments are progressing, which will be housed under the new Finance category.

    As it stands, my investment strategy has changed slightly since writing my very first post about the subject. My investments consist of:

    • ETF Index Funds
    • Renewable Energy
    • Tech companies

I feel like there is no time like the present to make as much money as I can whilst I have few dependencies, especially as I have come to the investment game late in life. Sometimes I kick myself for not having the knowledge I have now when I was 25. I truly believe I could have been better off financially. I have to be realistic about the expected return on investment and have a 5-10 year plan to see where this takes me.

    14th January 2023 will mark exactly a year since I took control of my investment portfolio. I will be checking my MWRR (money-weighted rate of return) to tell me how much my portfolio has increased or decreased in value since making my first deposit.
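As an aside, the MWRR calculation itself can be sketched in a few lines. The snippet below is a hypothetical illustration (the figures and function names are mine, not tied to any broker's reporting): it treats each deposit as growing at a single rate for the time it was invested, then solves for the rate that reproduces the portfolio's final value.

```javascript
// Future value of a set of deposits, each compounding at `rate` for
// the fraction of the year it was invested.
function futureValue(deposits, rate) {
  return deposits.reduce(
    (sum, d) => sum + d.amount * Math.pow(1 + rate, d.yearsInvested),
    0
  );
}

// Money-weighted rate of return: the rate at which the deposits'
// future value equals the portfolio's final value. Solved by
// bisection, since futureValue is increasing in rate for positive deposits.
function mwrr(deposits, finalValue) {
  let lo = -0.99;
  let hi = 10;
  for (let i = 0; i < 100; i++) {
    const mid = (lo + hi) / 2;
    if (futureValue(deposits, mid) < finalValue) {
      lo = mid;
    } else {
      hi = mid;
    }
  }
  return (lo + hi) / 2;
}

// Hypothetical example: £1,000 invested for the full year and £500
// added half-way through, with the portfolio worth £1,650 at year end.
const rate = mwrr(
  [{ amount: 1000, yearsInvested: 1 }, { amount: 500, yearsInvested: 0.5 }],
  1650
);
console.log(`MWRR: ${(rate * 100).toFixed(1)}%`);
```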

    Lego Ford Mustang Build

    I’ve finally found time during the year to complete this Lego model. All it took was a couple of hours over a handful of weekends. Being someone who hasn’t built any Lego models since I was a child, I felt a little rusty and I have to admit, due to the sheer number of intricate pieces there were countless times when I lost my patience during the build process.

Nevertheless, you can’t help but appreciate all the features the Lego creators have managed to cram into the model. The end product is a work of art and I wouldn’t mind building another if the right model were to come along.

Overall, a satisfying build and a worthy addition to my office.

    Anime TV Shows

I think getting COVID has changed my viewing habits. I was surprised to find that I quite enjoyed watching a handful of anime shows on Netflix to keep me entertained, as I didn't have the focus or patience during the recovery period to watch anything that required too much thinking.

    So far I have watched:

• Thermae Romae Novae: The adventures of a passionate Roman bath architect who starts randomly moving back and forth through time to present-day Japan, where he finds inspiration in modern bathing innovations. Watching this makes me want to travel to Japan and relax in one of their old-fashioned onsen.
• Howl's Moving Castle: The film, in the simplest terms, is about a teenager who works at a hat shop and is transformed into an old woman by a witch who curses her. Even if you're not entirely invested in the story, you'll be entertained by the beautiful animation that is truly a feast for the eyes.
    • Kotaro Lives Alone: A lonely 4-year-old boy moves into an apartment building on his own and is befriended by his neighbours. Each episode touches on serious issues of abandonment, friendship, and life with a heartfelt and somewhat comedic effect.

I am not sure if anime shows will continue to garner my attention in the future, but I very much appreciate what I have watched so far.

    Goals for 2023

I think for 2023, a more realistic set of goals is in order. I plan on focusing more on life-orientated goals over planned achievements within my career.

    Workout and General Exercising

    I really need to get back into regular exercise as I have been quite lax in this department over the year. I used to schedule a workout a minimum of three times a week, which has now reduced to zero. I will start slowly by conditioning my body with light cardio initially and work my way to proper workouts.

    Get Back Into Reading

    I would like to allocate time to read more as I have a stagnant bookshelf that has not changed for quite some time. Being someone technical who lives and breathes the industry every hour of the day, I feel it would be a healthy change to diversify my focus.

    I miss getting lost in a novel and transporting myself into another world. I know the very book I want to sink my teeth into to get me back into reading: The Thursday Murder Club.

    There are also some finance-related books on my reading list I can delve into to assist further in my investment knowledge in between fictional storytelling.

    Less WhatsApp'ing, More Phone Calling

    I am someone who prefers to just send a quick message via WhatsApp (to the dismay of my sister) rather than grabbing the phone, placing it to my ear and starting an actual conversation.

Sending a message is fine in day-to-day life, but should it be used so regularly that, subconsciously, it could cause a degree of separation from the very person you are communicating with?

    The answer is: No.

On a daily basis, I am overwhelmed with what life throws at me. The multitude of things that occur in both work and personal life is a juggling act, and I am not the best juggler. As a result, having a proper conversation with a friend or family member has always taken a back seat.

    I think out of all my goals for next year, this one is going to be the toughest. As they say: The first step to change is knowing you have a problem... And I have a problem.

    DIY Project: Hardwire A Front and Rear Dashcam

I am quite protective of my car. It’s my pride and joy that I like to keep in showroom condition. Unfortunately, it has not been in showroom condition ever since it was damaged whilst I was in a part of London I generally hate driving in. Luckily, my car came out relatively unscathed, with some battle scars that require repair.

    From this point, it seemed natural I should purchase a dashcam for an added layer of protection and peace of mind. Come 2023, I plan on carrying out a hardwire installation where the dashcam will be powered directly by the car's battery. I’ve never done this before and am quite excited about the installation process.

    Be The Person I Want To Be

I've come to the realisation that in some cases I am not the person I want to be and have started inheriting what others want me to be. This stems from trying to please others, and now is the time to have a factory reset, without fear of reprisal, whatever the outcome may be.


Use My FujiFilm Camera More

My FujiFilm X100F has been gathering dust since my last proper holiday to the Maldives back in 2019, and since then I've been using my phone to take pictures for every occasion. Now that I own a Pixel 7 Pro, which has a fantastic camera array that produces truly beautiful pictures, getting the FujiFilm camera out is even harder.

    I miss holding a traditional camera and next year I plan on taking it with me on even the smallest outing.

    Desk Upgrade

    This isn't a priority. But with each day I work from home, the more I feel that a standing desk would be a worthy investment for my posture and general health. In addition, I noticed the majority of standing desks result in less clutter and more organisation due to cable management and add-on accessories to make better use of space.

    I have my eye on a Secretlab MAGNUS PRO. It's a desk that combines both form and function!

    Final Thoughts

This tiny piece of the internet gives me so much joy, allowing me to share thoughts and impart knowledge some may deem useful. Thank you all for reading, and see you in the new year.

  • Published on
    4 min read

    Lazy-Loading Disqus In Gatsby JS

    Disqus is a popular commenting system used on many websites and seems to be the go-to for bloggers who want to add some form of engagement from their readers. I’ve used Disqus ever since I had a blog and never experienced any problems even throughout the different iterations this site has gone through over the years.

    No complaints here, for a free service that encompasses both function and form.

Ever since I redeveloped my site in July, I’ve attempted to make page performance of paramount importance and put pages on a strict diet by removing unnecessary third-party plugins. Even though I was fully aware that Disqus adds bloat, I just assumed it was a necessary evil. However, I felt I really had to do something after reading a blog post by Victor Zhou on the reasons why he decided to move away from Disqus. The reasons are pretty damning.

    Disqus increases both the page load requests and weight. I can confirm these findings myself. On average, Disqus was adding around 1.6MB-2MB of additional bloat to my blog pages. This is the case even when a blog post had no comments. As a result, the following Google Lighthouse scores took a little beating:

    • Performance: 82/100
    • Best Practices: 75/100

    Pretty bad when you take into consideration that most of the pages on my site consist of text and nothing overly complex.

Migrating to another commenting provider as Victor Zhou had done could be an option, and there are a few alternatives I've noticed my fellow bloggers use.

Each one of these options has its pros and cons, whether that be from a pricing or feature standpoint. I decided to remain with Disqus for the moment, as migrating comments is another task I don't currently have time to perform. I would be content keeping Disqus if I could find a way to negate the page bloat by lazy-loading it on demand.

    I've seen other Disqus users going down the lazy-loading approach within their builds, but couldn't find anything specifically for a Gatsby JS site. Luckily, the solution is quite straightforward and requires minimal code changes.


    The GatsbyJS gatsby-plugin-disqus plugin makes it easy to integrate Disqus functionality. All that needs to be done is to add the following component to the page:

let disqusConfig = {
    url: `${site.siteMetadata.siteUrl + postUrl}`,
    identifier: postId,
    title: postTitle,
};

<Disqus config={disqusConfig} />

    The only way to add lazyload functionality to this plugin is by controlling when it should be rendered. I decided to render Disqus through a simple button click.

import React, { useState } from 'react';
import { Disqus } from 'gatsby-plugin-disqus';

const DisqusComponent = ({ postId, postTitle, postUrl }) => {
    const [disqusIsVisible, setDisqusVisibility] = useState(false);

    // Set Disqus visibility state on click.
    const showCommentsClick = event => {
        setDisqusVisibility(true);
    };

    let disqusConfig = {
        url: postUrl,
        identifier: postId,
        title: postTitle,
    };

    return (
        <>
            {!disqusIsVisible && (
                <button onClick={showCommentsClick}>Load Comments</button>
            )}
            {disqusIsVisible && (
                <Disqus config={disqusConfig} />
            )}
        </>
    );
};

export default DisqusComponent;

    The code above is an excerpt from a React component I place within a blog post page. React state is used to set the visibility via the showCommentsClick() onClick function. When this event is fired, two things happen:

    1. The "Load Comments" button disappears.
    2. Disqus comments are rendered.

We can confirm the lazy-loading capability works by looking at the "Network" tab in Chrome Developer Tools. You probably won't notice the page speed improvement from delaying the load of Disqus, but within the "Network" tab you'll see a lower number of requests.

    Disqus Lazy-Loading Demo


Conclusion

Changing the way Disqus loads on a webpage may come across as a little pedantic and an insignificant performance improvement, but I believe that where performance savings can be made, they should be made. Since rolling out the Disqus update across all pages based on the approach discussed here, the Google Lighthouse scores have increased to:

    • Performance: 95/100
    • Best Practices: 92/100

    For the first time, my website has a Google Lighthouse score ranging between 95-100 across all testing criteria.

    Conclusion - Part 2

As I neared the end of writing this post, I happened to come across another Gatsby Disqus plugin, disqus-react, which another blogger, Janosh, wrote about. This is the officially supported React plugin written by the Disqus team and claims to contain lazy-load functionality:

    All Disqus components are lazy-loaded, meaning they won’t negatively impact the load times of your posts (unless someone is specifically following a link to one of the comments in which case it takes a little longer than on a purely static site).

    Could this really be true? Was this post written in vain?

    Janosh had stated he is using this plugin on his website and out of curiosity, I decided to download the disqus-react git repo and run the code examples locally to see how Disqus gets loaded onto the page.

I ran Google Lighthouse and checked Chrome's "Network" tab, and after numerous tests, no lazy-loading functionality was present. I could see Disqus JS files and avatar images being served on page load. I even bulked up the blog post body content to ensure Disqus was not anywhere in view - maybe the component would only load when in view? This made no difference.

    Unless anyone else can provide any further insight, I will be sticking to my current implementation.

• I have multiple email addresses spanning a handful of domains, and for the majority of these domains, a separate email account needs to be set up. After a while, the costs start to add up, especially when some of these email accounts receive only a few emails. In addition, daily checking of emails across separate accounts can be a little painful.

Normally, I would use a feature in my personal Gmail account that allows me to not only check emails from other email accounts but also respond to received emails in one place. But there are a couple of limitations, such as the cap on the number of external email addresses that can be added and the frequency at which these accounts are checked for new messages.

    Enter Email Aliases

What would most suit my needs is an email alias service that provides a single admin area to create all the email addresses for any of my registered domains. Aliases allow you to send and receive emails through an inbox of your choosing, so I could store all my emails within my Gmail account and make better use of the storage allowance.

SimpleLogin is the service that does just that. I’ve been trialling the Premium tier for a week, allowing me to add aliases to multiple custom domains and (the handiest feature) reply to emails sent to an alias. Setup is relatively swift, consisting of some domain-level DNS updates and creating a mailbox for emails to be forwarded to.

    By connecting my Gmail account as a mailbox for SimpleLogin to forward emails, sending and receiving emails feels really native. I now have a central area to check emails within my Google account whilst also adding an additional layer of security.

    I am always wary of sharing my Gmail email address as Google houses a lot of my private information - Photos, Email and Drive documents/files. I prefer to err on the side of caution when it comes to anything relating to my Google account.

    For my everyday use, I decided to set up the following aliases against a newly registered personal domain based on the different types of websites I use:

    • shopping@
    • technical@
    • social@
    • random@

If for any reason one of my email aliases gets compromised due to a data breach or excessive spam, I can quite easily remove the offending alias. SimpleLogin also provides the option to generate temporary email aliases if needed - useful for times when you need to sign up just to get some free promotion without disclosing one of your core email addresses.

Going through this process will hopefully give me a chance to finally phase out my very old ISP email accounts - something I've been meaning to do for a very long time. I find it quite amazing that my old ISP email accounts are still in service; they were originally set up when my Dad first connected our family household to the sweet, sweet taste of mega-fast broadband back in the early 2000s.

    Other Options

At one point, I considered hosting my own mail server using my ever-useful Synology NAS to save the cost of purchasing further email hosting. This idea was quashed relatively quickly, as I just don't trust the uptime of my ISP or my home networking setup - though it might be a suitable option for those who do.


    Adopting email aliases has allowed me to rethink and re-organise how I want my emails to be used on a day-to-day basis. When you take into consideration the overall cost, security and privacy benefits, it's the email service I never knew I needed until now.

  • Published on
    3 min read

    Pixel Watch and My Smartwatch Epiphany

    I received a Pixel Watch as part of a free reward for pre-ordering the new Pixel 7 phone. I thought this watch would be a fitting replacement for my Garmin Vivoactive 3 that I've owned for over three years. Even though my trusty Garmin is still going strong, you can't help but want to try out a shiny new upgrade that possesses better native integration that works alongside the features of my new phone. Or so I thought...

I think I've always been hesitant about getting a dedicated smartwatch and preferred fitness watches, as health/workout tracking is of greater importance to me. The smartwatches I've seen don't even compare to a Garmin from a fitness-tracking standpoint. Nevertheless, I had an opportune moment to try out a smartwatch that just happened to be the Pixel Watch.

From the moment I received the Pixel Watch, I decided to carry out a two-week evaluation period as to whether I could be coerced into moving away from my Garmin.

Pixel Watch on the left wrist and Garmin on the right.

    As much as I loved wearing and using the Pixel Watch over the two weeks, I found myself getting frustrated with it. This isn’t specifically due to shortcomings of the Pixel Watch as it’s not too bad for a first-generation device of its kind, but smartwatches in general.

    I always thought I needed a smartwatch that was tightly integrated with my phone ever since seeing both my wife and sister using features such as making/receiving calls, remotely controlling the camera and receiving app notifications on their Apple Watches. Both of them made it look so fun and useful.

    The Epiphany

As great as smartwatch features are, I came to a sudden epiphany that possessing one of Pixel/Apple Watch calibre wasn't all it was cracked up to be, and I soon found myself overloaded with constant information and notifications.

    Smartphones have already given us a convenient way to practically do everything at our fingertips. I don’t feel the need for a smartwatch to offer further extension of these features, especially when it requires a phone nearby to enable some of the key features. I could simply switch all these features off to limit connectivity to my phone (completely defeating the point of owning a smartwatch).

My phone is never too far away, and the only time it isn't at arm's reach is when I require some quiet or focus time. The main benefit of a phone is that there can be an easy physical disconnect, unlike a smartwatch that is always attached to one's wrist.

    The point I am trying to make is that due to the world we live in and the economic climate, our brains are already working overtime and processing more than we care to notice. Giving one's mind a rest and being present is no longer a consideration. It's become the societal norm that we should always be online and accessible. Is it so unnatural to step away from phone or watch-based interruptions?

Over the years, through self-discipline, I have managed to reduce the urge to check my phone whenever a notification pops up, and using a smartwatch has slightly hindered what I've tried so hard to achieve.


I will be sticking with fitness watches, Garmin in particular, as they provide a good balance of the smart features I need without the distractions. The battery life isn't too bad either, retaining its charge for around three to seven days, compared to an Apple or Pixel Watch that lasts for around a day at a push (as long as features aren't used in excess).

Understandably, there will be some who do not share the viewpoint expressed in this post and prefer the convenience a smartwatch provides. The honest truth is that not everyone needs a smartwatch, and the requirement is entirely dependent on the type of functions you feel will be of most use to you.

  • ActiveCampaign is a comprehensive marketing tool that helps businesses automate their email marketing strategies and create targeted campaigns. If the tracking code is used, visitors can be tracked to understand how they interact with your content and curate targeted email campaigns for them.

    I recently registered for a free account to test the waters in converting readers of my blog posts into subscribers to create a list of contacts that I could use to send emails to when I have published new content. For this website, I thought I'd create a Contact Form that will serve the purpose of allowing a user to submit a query as well as being added to a mailing list in the process.

    ActiveCampaign provides all the tools to easily create a form providing multiple integration options, such as:

    • Simple JavaScript embed
    • Full embed with generated HTML and CSS
    • Link to form
    • WordPress
    • Facebook

    As great as these out-of-the-box options are, we have no control over how our form should look or function within our website. For my use, the Contact Form should utilise custom markup, styling, validation and submission process.

    Step 1: Creating A Form

    The first step is to create our form within ActiveCampaign using the form builder. This can be found by navigating to Website > Forms section. When the "Create a form" button is clicked, a popup will appear that will give us options on the type of form we would like to create. Select "Inline Form" and the contact list you would like the form to send the registrations to.

    My form is built up based on the following fields:

    • Full Name (Standard Field)
    • Email
    • Description (Account Field)

    ActiveCampaign Form Builder

    As we will be creating a custom-built form later, we don't need to worry about anything from a copy perspective, such as the heading, field labels or placeholder text.

    Next, we need to click on the "Integrate" button on the top right and then the "Save and exit" button. We are skipping the form integration step as this is of no use to us.

    Step 2: Key Areas of An ActiveCampaign Form

    There are two key areas of an ActiveCampaign form we will need to acquire for our custom form to function:

    1. Post URL
    2. Form Fields

    To get this information, we need to view the HTML code of our ActiveCampaign Contact form. This can be done by going back to the forms section (Website > Forms section) and selecting "Preview", which will open up our form in a new window to view.

    ActiveCampaign Form Preview

    In the preview window, open up your browser Web Inspector and inspect the form markup. Web Inspector has to be used rather than the conventional "View Page Source" as the form is rendered client-side.

    ActiveCampaign Form Code

    Post URL

The <form /> tag contains a POST action (highlighted in red). This URL will be needed by our custom-built form to allow us to send values to ActiveCampaign.

    Form Fields

    An ActiveCampaign form consists of hidden fields (highlighted in green) and traditional input fields (highlighted in purple) based on the structure of the form we created. We need to take note of the attribute names and values when we make requests from our custom form.

    Step 3: Build Custom Form

    Now that we have the key building blocks for what an ActiveCampaign form uses, we can get to the good part and delve straight into the code.

import React, { useState } from 'react';
import { useForm } from "react-hook-form";

export function App(props) {
    const { register, handleSubmit, formState: { errors } } = useForm();
    const [state, setState] = useState({
        isSubmitted: false,
        isError: false
    });

    const onSubmit = (data) => {
        const formData = new FormData();

        // Hidden field key/values.
        formData.append("u", "4");
        formData.append("f", "4");
        formData.append("s", "s");
        formData.append("c", "0");
        formData.append("m", "0");
        formData.append("act", "sub");
        formData.append("v", "2");
        formData.append("or", "c0c3bf12af7fa3ad55cceb047972db9");

        // Form field key/values.
        formData.append("fullname", data.fullname);
        formData.append("email",;
        formData.append("ca[1][v]", data.contactmessage);

        // Pass FormData values into a POST request to ActiveCampaign.
        // Mark form submission successful, otherwise return error state.
        fetch('', { // <-- The POST URL acquired in Step 2 goes here.
            method: 'POST',
            body: formData,
            mode: 'no-cors',
        })
        .then(response => {
            setState({
                ...state,
                isSubmitted: true,
            });
        })
        .catch(err => {
            setState({
                ...state,
                isError: true,
            });
        });
    };

    return (
        <div>
            {!state.isSubmitted ?
                <form onSubmit={handleSubmit(onSubmit)}>
                    <label htmlFor="fullname">Name</label>
                    <input id="fullname" name="fullname" placeholder="Type your name" className={errors.fullname ? "c-form__textbox error" : "c-form__textbox"} {...register("fullname", { required: true })} />
                    {errors.fullname && <div className="validation--error"><p>Please enter your name</p></div>}
                    <label htmlFor="email">Email</label>
                    <input id="email" name="email" placeholder="Email" className={ ? "c-form__textbox error" : "c-form__textbox"} {...register("email", { required: true, pattern: /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$/ })} />
                    { && <div className="validation--error"><p>Please enter a valid email</p></div>}
                    <label htmlFor="contactmessage">Message</label>
                    <textarea id="contactmessage" name="contactmessage" placeholder="Message" className={errors.contactmessage ? "c-form__textarea error" : "c-form__textarea"} {...register("contactmessage", { required: true })}></textarea>
                    {errors.contactmessage && <div className="validation--error"><p>Please enter your message</p></div>}
                    <input type="submit" value="Submit" />
                    {state.isError ? <p>Unfortunately, your submission could not be sent. Please try again later.</p> : null}
                </form>
                : <p>Thank you for your message. I will be in touch shortly.</p>}
        </div>
    );
}

    The form uses FormData to store all hidden field and text input values. You'll notice the field names follow the exact same naming conventions as we saw when viewing the source code of the ActiveCampaign form.

    All fields need to be filled in and a package called react-hook-form is used to perform validation and output error messages for any field that is left empty. If an error is encountered on form submission, an error message will be displayed, otherwise, the form is replaced with a success message.
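    The email pattern passed to react-hook-form's pattern rule can be checked in isolation. A quick sketch in plain JavaScript (outside the component, for illustration only):

    ```javascript
    // The same email pattern passed to react-hook-form's "pattern" rule.
    const emailPattern = /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$/;

    // Returns true when the value would pass the form's email validation.
    function isValidEmail(value) {
        return emailPattern.test(value);
    }

    console.log(isValidEmail("obi-wan@jedi.com")); // true
    console.log(isValidEmail("not-an-email"));     // false
    ```

    One caveat worth noting: the pattern only matches lowercase characters, so an address typed in capitals would fail validation unless the `i` flag is added to the regular expression.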


    ActiveCampaign Custom Form Demo

    We will see Obi-Wan Kenobi's entry added to ActiveCampaign's Contact list for our test submission.

    ActiveCampaign Contact List


    In this post, we have demonstrated how a form is created within ActiveCampaign and examined the key parts of the generated form in order to develop a custom implementation using GatsbyJS or React.

    Now all I need to do is work on the front-end HTML markup and add this functionality to my own Contact page.

  • Published on
    2 min read

    Running A Gatsby Site Locally Using Netlify CLI

    As I have been delving deeper into adding more functionality to my Gatsby site within the Netlify ecosystem, it only seemed natural to install the CLI to make development faster and easier by testing builds locally before releasing them to my Netlify site. There have been times when I have added a new feature to my site, only to find it breaks during the build process, eating up those precious build minutes.

    One thing I found missing from the Netlify CLI documentation was the steps to run a site locally, in my case a Gatsby JS site. The first time I ran the netlify dev command, I was greeted by an empty browser window served under http://localhost:8888.

    There were a couple of steps I was missing to test my site within a locally run Netlify setup.

    1) Build Site

    The Gatsby site needs to be compiled so all HTML, CSS and JavaScript files are generated as physical files on your machine. When the following command is run, all files will be generated within the /public folder of your project:

    gatsby build

    The build command creates a version of your site with production-ready optimisations by packaging up your site's configuration and data and creating all the static HTML pages. Unlike the serve command, you cannot view the site once the build has completed - only files are generated, which is exactly what we need.

    2) Run Netlify Dev Command From Build Directory

    Now that we have a built version of the site generated locally within the /public folder, we need to run the Netlify Dev command against this directory by running the following:

    netlify dev --dir public

    As you can see, the --dir flag is used to serve the site from the directory where the compiled site files reside. I originally had a misconception that the Netlify Dev command would build my Gatsby site as well, when in fact it does not.
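    If you'd rather not pass the flag on every run, the directory can (as far as I'm aware) be set once in the site's netlify.toml. A sketch, assuming your compiled site lives in /public - adjust the values to your own project:

    ```toml
    # netlify.toml (site root) - assumed settings, adjust to your project.
    [build]
      command = "gatsby build"   # How Netlify builds the site.
      publish = "public"         # Where the compiled files live.

    [dev]
      # Serve the pre-built static files when running "netlify dev" locally.
      publish = "public"
    ```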


    If you have a site hosted by Netlify, using the CLI is highly recommended as it provides that extra step in ensuring any updates can be tested prior to deployment. My site uses Netlify features such as redirects and plugins, which I can now test locally instead of going down the previously inefficient route of:

    1. Deploying changes to Netlify.
    2. Waiting for the build process to complete.
    3. Testing changes within the preview site.
    4. If all is good, publishing the site. If not, resolving the error and deploying again.

    This endless cycle of development hell is now avoided thanks to the safety net the Netlify CLI provides.

    Further Reading

  • Amazon's return process is second to none. It is one of the very few large e-commerce companies that gets returns right and makes the process painless for its customers. Combined with their customer support, I've never had an issue returning an item that was not up to standard or arrived damaged. But just because you can return an item, should you?

    Ever since I read articles on Amazon destroying millions of items of unsold stock just to be sent to landfill, I've been more mindful as to when I should return an item.

    If I receive an item that is slightly damaged, I normally opt for a discount rather than sending it back. I've done this in the past by either contacting the third-party seller or Amazon customer service via their online chat tool. They are very forthcoming in offering a discount as long as you can provide proof that the product you received is damaged or not in an acceptable condition.

    This approach has worked for me whenever I felt it was required. Not surprising when you take into consideration the cost to the seller for pickup and disposal (or renewing the product).

    I normally opt for this approach for items that are still fit for purpose, where the damage can easily be hidden and the longevity is not compromised. Of course, the level of damage that is deemed acceptable depends on the product and your own personal view.

    Why not try this alternative the next time you receive a damaged product that is still able to serve its purpose? If enough of us are able to take this route, it can not only benefit the environment but also reduce waste.

  • I've been delving further into the world of Google App Scripts and finding it my go-to when having to carry out any form of data manipulation. I don't think I've ever needed to develop a custom C# based import tool to handle the sanitisation and restructuring of data ever since learning the Google App Script approach.

    In this post, I will be discussing how to search for a value within a Google Sheet and return all columns within the row the searched value resides. As an example, let's take a few columns from a dataset of ISO-3166 Country and Region codes as provided by this CSV file and place them in a Google Sheet named "Country Data".

    The "Country Data" sheet should have the following structure:

    name alpha-2 alpha-3 country-code
    Australia AU AUS 036
    Austria AT AUT 040
    Azerbaijan AZ AZE 031
    United Kingdom of Great Britain and Northern Ireland GB GBR 826
    United States of America US USA 840

    App Script 1: Returning A Single Row Value

    Our script will be retrieving the two-letter country code by the country name - in this case "Australia". To do this, the following will be carried out:

    1. Perform a search on the "Country Data" sheet using the createTextFinder() and findAll() functions.
    2. The getRow() function will return the row number of the first match.
    3. A combination of the getLastColumn() and getRange() functions will output all values from that row.
    function run() {
      var twoLetterIsoCode = getCountryTwoLetterIsoCode("Australia");
      Logger.log(twoLetterIsoCode);
    }

    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');

      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();

      if (textSearch.length > 0) {
        // Get single row from search result.
        var row = textSearch[0].getRow();

        // Get the last column so we can use for the row range.
        var rowLastColumn = countryDataSheet.getLastColumn();

        // Get all values for the row.
        var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();

        return rowValues[0][1]; // Two-letter ISO code from the second column.
      }
      else {
        return "";
      }
    }
    When the script is run, the twoLetterIsoCode variable will contain the two-letter ISO code: "AU".

    App Script 2: Returning Multiple Row Matches

    If we had a dataset that contained multiple matches for a search term, the script from the first example can be modified using the same fundamental functions. In this case, all we need to do is use a for loop and push each matched row's values to an array.

    The getCountryTwoLetterIsoCode() function will now look something like this:

    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');

      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();

      // Array to store all matched rows.
      var searchRows = [];

      if (textSearch.length > 0) {
        // Loop through matches.
        for (var i=0; i < textSearch.length; i++) {
          var row = textSearch[i].getRow();

          // Get the last column so we can use for the row range.
          var rowLastColumn = countryDataSheet.getLastColumn();

          // Get all values for the row.
          var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();

          // Add the matched row's values to the collection.
          searchRows.push(rowValues);
        }
      }

      return searchRows;
    }

    The searchRows array will contain a collection of matched rows along with their column data. To get the same output as the first App Script example - the two-letter country code - the function can be called in the following way:

    // Get first match.
    var matchedCountryData = getCountryTwoLetterIsoCode("Australia")[0];
    // Get the second column value (alpha-2).
    var twoLetterIsoCode = matchedCountryData[0][1];


    Both examples have demonstrated different ways of returning the row values of a search term. The two key lines of code that allow us to do this are:

    // Get the last column so we can use for the row range.
    var rowLastColumn = countryDataSheet.getLastColumn();
    // Get all values for the row.
    var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();
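    Outside the Apps Script environment, the same idea can be sketched in plain JavaScript against a 2D array standing in for the sheet (hypothetical data, for illustration only - there is no SpreadsheetApp here):

    ```javascript
    // A 2D array standing in for the "Country Data" sheet.
    const countryData = [
        ["name", "alpha-2", "alpha-3", "country-code"],
        ["Australia", "AU", "AUS", "036"],
        ["Austria", "AT", "AUT", "040"],
        ["Azerbaijan", "AZ", "AZE", "031"],
    ];

    // Return every row containing the search term with its full set of columns,
    // mirroring what createTextFinder().findAll() plus getRange().getValues() give us.
    function findRows(data, searchTerm) {
        return data.filter(row => row.some(cell => cell === searchTerm));
    }

    const match = findRows(countryData, "Australia")[0];
    console.log(match[1]); // "AU" - the two-letter ISO code from the second column.
    ```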
  • Published on
    3 min read

    Websites and The Environment

    When building any application, the last thing on any developer's mind is how the build will impact the environment. After all, an application relies on some form of hosting infrastructure - servers, databases, firewalls, switches, routers, cooling systems, etc. How efficiently all of this hardware is powered to host your application rarely comes into question.

    We are fast becoming aware, more than ever before, that what we do day-to-day has an impact on the environment and are more inclined to take appropriate steps in changing our behaviour to reduce our carbon footprint. However, our behaviour remains unchanged when it comes to our online habits.

    Every time a website is visited, a request is made to the server to serve content to the user. This in itself utilises a nominal amount of power for a single user. But when you take hundreds or even thousands of visitors into consideration, the amount of power required quickly mounts up, causing more carbon dioxide to be emitted. Of course, this all depends on how efficiently you build your website - for example, reducing unnecessary calls to the database and making effective use of caching.
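    As a rough illustration of the caching point, a minimal in-memory memoisation sketch in plain JavaScript (the fetchFromDatabase function is hypothetical and stands in for any expensive lookup):

    ```javascript
    // Hypothetical expensive lookup - stands in for a database or API call.
    let databaseHits = 0;
    function fetchFromDatabase(key) {
        databaseHits++;
        return "value-for-" + key;
    }

    // A simple cache: repeat requests are served from memory,
    // so the server does less work (and uses less power) per visitor.
    const cache = new Map();
    function cachedFetch(key) {
        if (!cache.has(key)) {
            cache.set(key, fetchFromDatabase(key));
        }
        return cache.get(key);
    }

    cachedFetch("homepage");
    cachedFetch("homepage");
    cachedFetch("homepage");
    console.log(databaseHits); // 1 - only the first request did real work.
    ```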

    From a digital standpoint, energy is perceived as an infinite commodity with little regard for its carbon footprint.

    Interestingly, Microsoft experimented with developing a self-sufficient underwater shipping container-size data centre on the seafloor near Scotland’s Orkney Islands in a two-year trial that ended in 2020. It proved that underwater data centres are feasible, environmentally and economically practical. The consistently cool temperature of the sea allows data centres to be energy-efficient without tapping into freshwater resources. An impressive feat of engineering.

    Microsoft Underwater Data Center near Scotland’s Orkney Islands

    Analysing Site Emissions

    I thought it would be a fun exercise to see how my website fares from an environmental perspective. It's probably not the most ideal time to carry this out as I've only just recently rebuilt my site. But here we go...

    There are two tools I am using to analyse how my website fares from an environmental perspective:

    These tools are separate entities and use their own algorithms to determine how environmentally friendly a website is. Even though they both use datasets provided by The Green Web Foundation, it is expected that the numbers they report will differ.

    Website Carbon Calculator

    Website Carbon Calculator states my website is 95% cleaner than other web pages tested, produces 0.05kg of CO2 whenever someone visits a page and (most importantly) runs on sustainable energy. All good!

    Website Carbon Calculator Results

    The full report can be seen here.

    Digital Beacon

    Digital Beacon allows me to delve further into more granular stats on how the size of specific page elements has an effect on CO2 emissions on my website, such as JavaScript, images and third-party assets.

    Digital Beacon Results

    This tool has rated my website as "amazing" when it comes to its carbon footprint. The page breakdown report highlights there is still room for improvement in the Script and Image area.

    The full report can be seen here.

    Examples of Low Carbon Websites showcases low-carbon web design and development. I am hoping, in time, more websites will be submitted and added to their list as great examples that sustainable development doesn't limit how you build websites.

    I am proud to have this very website added to the list. It's all the more reason to focus on ensuring my website is climate friendly on an ongoing basis.

    Final Thoughts

    There are well over 1 billion websites in the world. Just imagine for a moment if even 0.01% of these websites took pre-emptive steps on an ongoing basis to ensure their pages are loading efficiently, this would make quite the difference in combatting CO2 emissions. I'm not stating that this alone will single-handedly combat climate change, but it'll be a start.

    Not all hosting companies will have the resources to make their infrastructure environmentally friendly or to trial alternatives on the scale Microsoft has. We as developers need to change our mindset on how we build our applications and keep the environmental implications at the forefront of our minds. It's all too easy to conjure things out of thin air and see results. The change will have to start at code level.

    Further Reading

  • Published on
    2 min read

    Lightweight jQuery Quiz Style Countdown Timer

    It's not often you stumble across a piece of code written around nine or ten years ago that brings back fond memories. For me, it's a jQuery countdown timer I wrote to be used in a quiz for a Sky project called The British at my current workplace - Syndicut.

    It is only now, all these years later, that I've decided to share the code for old times' sake (after a little sprucing up).

    This countdown timer was originally used in quiz questions where the user had a set time limit to correctly answer a set of multiple-choice questions as quickly as possible. The longer they took to respond, the fewer points they received for that question.

    If the selected answer was correct, the countdown stopped and the number of points earned and time taken to select the answer was displayed.
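    The original scoring logic is long gone, but the idea - fewer points the longer the answer takes - can be sketched as follows (the point values here are hypothetical):

    ```javascript
    // Hypothetical scoring: full points for an instant answer,
    // scaling down linearly to zero as the timer runs out.
    function calculatePoints(timeRemaining, timerStart, maxPoints) {
        if (timeRemaining <= 0) {
            return 0;
        }
        return Math.round(maxPoints * (timeRemaining / timerStart));
    }

    console.log(calculatePoints(10, 10, 100)); // 100 - answered instantly.
    console.log(calculatePoints(5, 10, 100));  // 50  - answered halfway through.
    console.log(calculatePoints(0, 10, 100));  // 0   - too slow.
    ```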

    Demonstration of the countdown timer in action:

    Quiz Countdown Demo

    Of course, the version used in the project was a lot more polished.



    const Timer = {
        ClockPaused: false,
        TimerStart: 10,
        StartTime: null,
        TimeRemaining: 0,
        EndTime: null,
        HtmlContainer: null,
        "Start": function(htmlCountdown) {
            Timer.ClockPaused = false;
            Timer.StartTime = (new Date()).getTime();
            Timer.EndTime = (new Date()).getTime() + Timer.TimerStart * 1000;
            Timer.HtmlContainer = $(htmlCountdown);
            // Ensure any added styles have been reset.
            Timer.HtmlContainer.css("color", "");
            // Ensure message is cleared for when the countdown may have been reset.
            $("#message").html("");
            // Show/hide the appropriate buttons.
            $("#btn-start-timer").hide();
            $("#btn-reset-timer").hide();
            $("#btn-stop-timer").show();
            Timer.DisplayCountdown();
        },
        "DisplayCountdown": function() {
            if (Timer.ClockPaused) {
                return true;
            }
            Timer.TimeRemaining = (Timer.EndTime - (new Date()).getTime()) / 1000;
            if (Timer.TimeRemaining < 0) {
                Timer.TimeRemaining = 0;
            }
            // Display countdown value in page.
            Timer.HtmlContainer.html(Timer.TimeRemaining.toFixed(2));
            // Calculate percentage to append different text colours.
            const remainingPercent = Timer.TimeRemaining / Timer.TimerStart * 100;
            if (remainingPercent < 15) {
                Timer.HtmlContainer.css("color", "Red");
            } else if (remainingPercent < 51) {
                Timer.HtmlContainer.css("color", "Orange");
            }
            if (Timer.TimeRemaining > 0 && !Timer.ClockPaused) {
                setTimeout(function() {
                    Timer.DisplayCountdown();
                }, 100);
            }
            else if (!Timer.ClockPaused) {
                Timer.TimesUp();
            }
        },
        "Stop" : function() {
            Timer.ClockPaused = true;
            const timeTaken = Timer.TimerStart - Timer.TimeRemaining;
            $("#message").html("Your time: " + timeTaken.toFixed(2));
            // Show/hide the appropriate buttons.
            $("#btn-stop-timer").hide();
            $("#btn-reset-timer").show();
        },
        "TimesUp" : function() {
            Timer.ClockPaused = true;
            $("#message").html("Times up!");
            // Show/hide the appropriate buttons.
            $("#btn-stop-timer").hide();
            $("#btn-reset-timer").show();
        }
    };

    $(document).ready(function () {
        $("#btn-start-timer").click(function () {
            Timer.Start("#timer");
        });
        $("#btn-reset-timer").click(function () {
            Timer.ClockPaused = false;
            Timer.Start("#timer");
        });
        $("#btn-stop-timer").click(function () {
            Timer.Stop();
        });
    });


    <div id="container">
      <div id="timer"></div>
      <br />
      <div id="message"></div>
      <br />
      <button id="btn-start-timer">Start Countdown</button>
      <button id="btn-stop-timer" style="display:none">Stop Countdown</button>
      <button id="btn-reset-timer" style="display:none">Reset Countdown</button>
    </div>

    Final Thoughts

    When looking over this code after all these years with fresh eyes, the jQuery library is no longer a fixed requirement. It could just as easily be rewritten in vanilla JavaScript. But if I did that, it would be to the detriment of nostalgia.
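    For what it's worth, the colour-threshold logic needs no jQuery at all. A vanilla sketch of the same percentage checks, extracted into a pure function:

    ```javascript
    // Same thresholds as the jQuery version: red under 15%, orange under 51%,
    // otherwise the default text colour.
    function countdownColour(timeRemaining, timerStart) {
        const remainingPercent = timeRemaining / timerStart * 100;
        if (remainingPercent < 15) {
            return "Red";
        } else if (remainingPercent < 51) {
            return "Orange";
        }
        return "";
    }

    console.log(countdownColour(1, 10)); // "Red"
    console.log(countdownColour(4, 10)); // "Orange"
    console.log(countdownColour(9, 10)); // ""
    ```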

    A demonstration can be seen on my jsFiddle account.