Blog

Blogging on programming and life in general.

  • ActiveCampaign is a comprehensive marketing tool that helps businesses automate their email marketing strategies and create targeted campaigns. If its tracking code is installed, visitors can be tracked to understand how they interact with your content, allowing you to curate targeted email campaigns for them.

    I recently registered for a free account to test the waters in converting readers of my blog posts into subscribers, building a list of contacts I can email whenever I publish new content. For this website, I thought I'd create a Contact Form that allows a user to submit a query while being added to a mailing list in the process.

    ActiveCampaign provides all the tools to easily create a form, with multiple integration options such as:

    • Simple JavaScript embed
    • Full embed with generated HTML and CSS
    • Link to form
    • WordPress
    • Facebook

    As great as these out-of-the-box options are, we have no control over how our form looks or functions within our website. For my use, the Contact Form should utilise custom markup, styling, validation and a custom submission process.

    Step 1: Creating A Form

    The first step is to create our form within ActiveCampaign using the form builder. This can be found by navigating to Website > Forms section. When the "Create a form" button is clicked, a popup will appear that will give us options on the type of form we would like to create. Select "Inline Form" and the contact list you would like the form to send the registrations to.

    My form is built up based on the following fields:

    • Full Name (Standard Field)
    • Email
    • Description (Account Field)

    ActiveCampaign Form Builder

    As we will be creating a custom-built form later, we don't need to worry about anything from a copy perspective, such as the heading, field labels or placeholder text.

    Next, we need to click on the "Integrate" button on the top right and then the "Save and exit" button. We are skipping the form integration step as this is of no use to us.

    Step 2: Key Areas of An ActiveCampaign Form

    There are two key areas of an ActiveCampaign form we will need to acquire for our custom form to function:

    1. Post URL
    2. Form Fields

    To get this information, we need to view the HTML code of our ActiveCampaign Contact form. This can be done by going back to the forms section (Website > Forms section) and selecting "Preview", which will open up our form in a new window to view.

    ActiveCampaign Form Preview

    In the preview window, open up your browser Web Inspector and inspect the form markup. Web Inspector has to be used rather than the conventional "View Page Source" as the form is rendered client-side.

    ActiveCampaign Form Code

    Post URL

    The <form /> tag contains a POST action (highlighted in red) that is in the following format: https://myaccount.activehosted.com/proc.php. This URL will be needed for our custom-built form to allow us to send values to ActiveCampaign.

    Form Fields

    An ActiveCampaign form consists of hidden fields (highlighted in green) and traditional input fields (highlighted in purple) based on the structure of the form we created. We need to take note of the attribute names and values when we make requests from our custom form.
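
    To give a rough idea of what to look for, the inspected markup will resemble the simplified sketch below (the attribute values here are illustrative placeholders, not from a real account - your own hidden field values will differ):

    <form method="POST" action="https://myaccount.activehosted.com/proc.php">
      <!-- Hidden fields identifying the account, form and subscribe action. -->
      <input type="hidden" name="u" value="4" />
      <input type="hidden" name="f" value="4" />
      <input type="hidden" name="act" value="sub" />
      <input type="hidden" name="v" value="2" />
      <!-- Visible input fields based on the structure of the form we created. -->
      <input type="text" name="fullname" placeholder="Type your name" />
      <input type="email" name="email" placeholder="Email" />
      <textarea name="ca[1][v]" placeholder="Description"></textarea>
    </form>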

    Step 3: Build Custom Form

    Now that we have the key building blocks for what an ActiveCampaign form uses, we can get to the good part and delve straight into the code.

    import React, { useState } from 'react';
    import { useForm } from "react-hook-form";
    
    export function App(props) {
      const { register, handleSubmit, formState: { errors } } = useForm();
        const [state, setState] = useState({
            isSubmitted: false,
            isError: false
          });    
    
        const onSubmit = (data) => {
            const formData = new FormData();
    
            // Hidden field key/values.
            formData.append("u", "4");
            formData.append("f", "4");
            formData.append("s", "s");
            formData.append("c", "0");
            formData.append("m", "0");
            formData.append("act", "sub");
            formData.append("v", "2");
            formData.append("or", "c0c3bf12af7fa3ad55cceb047972db9");
    
            // Form field key/values.
            formData.append("fullname", data.fullname);
            formData.append("email", data.email);
            formData.append("ca[1][v]", data.contactmessage);
            
            // Pass FormData values into a POST request to ActiveCampaign.
            // Mark form submission successful, otherwise return error state. 
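            // Note: with 'no-cors' set, the response is opaque, so the status
            // code can't be inspected - .catch() only fires on network errors.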
            fetch('https://myaccount.activehosted.com/proc.php', {
                method: 'POST',
                body: formData,
                mode: 'no-cors',
            })
            .then(response => {
                setState({
                    isSubmitted: true,
                });
            })
            .catch(err => {
                setState({
                    isError: true,
                });
            });
        }
    
      return (
        <div>
            {!state.isSubmitted ? 
                <form onSubmit={handleSubmit(onSubmit)}>
                    <fieldset>
                        <legend>Contact</legend>
                        <div>
                            <div>
                                <div>
                                    <label htmlFor="fullname">Name</label>
                                    <input id="fullname" name="fullname" placeholder="Type your name" className={errors.fullname ? "c-form__textbox error" : "c-form__textbox"} {...register("fullname", { required: true })} />
                                    {errors.fullname && <div className="validation--error"><p>Please enter your name</p></div>}
                                </div>
                            </div>
                            <div>
                                <div>
                                    <label htmlFor="email">Email</label>
                                    <input id="email" name="email" placeholder="Email" className={errors.contactemail ? "c-form__textbox error" : "c-form__textbox"} {...register("email", { required: true, pattern: /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$/ })} />
                                    {errors.email && <div className="validation--error"><p>Please enter a valid email</p></div>}
                                </div>
                            </div>
                            <div>
                                <div>
                                    <label htmlFor="contactmessage">Message</label>
                                    <textarea id="contactmessage" name="contactmessage" placeholder="Message" className={errors.contactmessage ? "c-form__textarea error" : "c-form__textarea"} {...register("contactmessage", { required: true })}></textarea>
                                    {errors.contactmessage && <div className="validation--error"><p>Please enter your message</p></div>}
                                </div>
                            </div>
                            <div>
                                <input type="submit" value="Submit" />
                            </div>
                        </div>
                    </fieldset>
                    {state.isError ? <p>Unfortunately, your submission could not be sent. Please try again later.</p> : null}    
                </form>
                : <p>Thank you for your message. I will be in touch shortly.</p>}
        </div>
      );
    }
    

    The form uses FormData to store all hidden field and text input values. You'll notice the same naming conventions are used as we saw when viewing the source code of the ActiveCampaign form.

    All fields are required, and a package called react-hook-form performs validation and outputs an error message for any field left empty. If an error is encountered on form submission, an error message is displayed; otherwise, the form is replaced with a success message.

    Demo

    ActiveCampaign Custom Form Demo

    We will see Obi-Wan Kenobi's entry added to ActiveCampaign's Contact list for our test submission.

    ActiveCampaign Contact List

    Conclusion

    In this post, we have demonstrated how a form is created within ActiveCampaign and identified the key areas the created form consists of in order to develop a custom implementation using Gatsby JS or React.

    Now all I need to do is work on the front-end HTML markup and add this functionality to my own Contact page.

  • As I have been delving deeper into adding more functionality to my Gatsby site within the Netlify ecosystem, it only seemed natural to install the CLI to make development faster and easier to test builds locally before releasing them to my Netlify site. There have been times when I have added a new feature to my site, only to find it breaks during the build process, eating up those precious build minutes.

    One thing I found amiss from the Netlify CLI documentation was the steps to run a site locally - in my case, a Gatsby JS site. The first time I ran the netlify dev command, I was greeted by an empty browser window served under http://localhost:8888.

    There were a couple of steps I was missing to test my site within a locally run Netlify setup.

    1) Build Site

    The Gatsby site needs to be compiled so all HTML, CSS and JavaScript files are generated as physical files on your machine. When the following command is run, all files will be generated within the /public folder of your project:

    gatsby build
    

    The build command creates a version of your site with production-ready optimisations by packaging up your site’s configurations, data and creating all the static HTML pages. Unlike the serve command, you cannot view the site once the build has been completed. Only files are generated, which is exactly what we need.
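
    If you do want to preview the production build in the browser, Gatsby provides a separate command that serves the contents of the /public folder locally (by default at http://localhost:9000):

    gatsby serve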

    2) Run Netlify Dev Command From Build Directory

    Now that we have a built version of the site generated locally within the /public folder, we need to run the Netlify Dev command against this directory by running the following:

    netlify dev --dir public
    

    As you can see, the --dir flag is used to run our site from where the compiled site files reside. I originally had the misconception that the Netlify Dev command would build my Gatsby site as well, when in fact it does not.
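
    Since the build and dev steps always run in tandem, the two commands can be chained into one (a small convenience, not a requirement):

    gatsby build && netlify dev --dir public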

    Conclusion

    If you have a site hosted by Netlify, using the CLI is highly recommended as it provides that extra step in ensuring any updates can be tested prior to deployment. My site uses Netlify features such as redirects and plugins, which I can now test locally instead of going down the previously inefficient route of:

    1. Deploying changes to Netlify.
    2. Waiting for the build process to complete.
    3. Testing changes within the preview site.
    4. If all is good, publishing the site; if not, resolving the error and deploying again.

    This endless cycle of development hell is now avoided thanks to the safety net the Netlify CLI provides.


  • Amazon's return process is second to none. It is one of the very few large e-commerce sites that gets returns right and makes the process painless for its customers. Along with their customer support, I've never had an issue returning an item if its quality was not up to standard or it arrived damaged. But just because you can return an item, should you?

    Ever since I read articles on Amazon destroying millions of items of unsold stock just to be sent to landfill, I've been more mindful as to when I should return an item.

    If I receive an item that is slightly damaged, I normally opt for a discount rather than sending it back. I've done this in the past by contacting either the third-party seller or Amazon customer service using their online chat tool. They are very forthcoming in offering a discount as long as you can provide proof that the product you received is damaged or not in an acceptable condition.

    This approach has worked for me whenever I felt it was required. Not surprising when you take into consideration the cost to the seller for pickup and disposal (or renewing the product).

    I normally opt for this approach for items that are still fit for purpose, where the damage can easily be hidden and the longevity is not compromised. Of course, the level of damage that is deemed acceptable depends on the product and your own personal view.

    Why not try this alternative the next time you receive a damaged product that is still able to serve its purpose? If enough of us are able to take this route, it can not only benefit the environment but also reduce waste.

  • I've been delving further into the world of Google App Scripts and finding it my go-to when having to carry out any form of data manipulation. I don't think I've ever needed to develop a custom C# based import tool to handle the sanitisation and restructuring of data ever since learning the Google App Script approach.

    In this post, I will be discussing how to search for a value within a Google Sheet and return all columns within the row the searched value resides. As an example, let's take a few columns from a dataset of ISO-3166 Country and Region codes as provided by this CSV file and place them in a Google Sheet named "Country Data".

    The "Country Data" sheet should have the following structure:

    | name | alpha-2 | alpha-3 | country-code |
    | ---- | ------- | ------- | ------------ |
    | Australia | AU | AUS | 036 |
    | Austria | AT | AUT | 040 |
    | Azerbaijan | AZ | AZE | 031 |
    | United Kingdom of Great Britain and Northern Ireland | GB | GBR | 826 |
    | United States of America | US | USA | 840 |

    App Script 1: Returning A Single Row Value

    Our script will be retrieving the two-letter country code by the country name - in this case "Australia". To do this, the following will be carried out:

    1. Perform a search on the "Country Data" sheet using the findAll() function.
    2. The getRow() function will return the row number of the matched cell.
    3. A combination of the getLastColumn() and getRange() functions will output all values from the row.

    function run() {
      var twoLetterIsoCode = getCountryTwoLetterIsoCode("Australia"); 
    }
    
    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');
    
      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();
    
      if (textSearch.length > 0) {
        // Get single row from search result.
        var row = textSearch[0].getRow();    
        // Get the last column so we can use for the row range.
        var rowLastColumn = countryDataSheet.getLastColumn();
        // Get all values for the row.
        var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();
    
        return rowValues[0][1]; // Two-letter ISO code from the second column.
      }
      else {
        return "";
      }
    }
    

    When the script is run, the twoLetterIsoCode variable will contain the two-letter ISO code: "AU".

    App Script 2: Returning Multiple Row Matches

    If we had a dataset that contained multiple matches based on a search term, the script from the first example can be modified using the same fundamental functions. In this case, all we need to do is use a for loop and pass all row values to an array.

    The getCountryTwoLetterIsoCode() function will now look something like this:

    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');
    
      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();
    
      // Array to store all matched rows.
      var searchRows = [];
    
      if (textSearch.length > 0) {
        // Loop through matches.
        for (var i=0; i < textSearch.length; i++) {
          var row = textSearch[i].getRow();  
          // Get the last column so we can use for the row range.
          var rowLastColumn = countryDataSheet.getLastColumn();
          // Get all values for the row.
          var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues(); 
    
          searchRows.push(rowValues);
        }
      }
    
      return searchRows;
    }
    

    The searchRows array will contain a collection of matched rows along with their column data. To produce a similar output to the first App Script example - the two-letter country code - the function can be called in the following way:

    // Get first match.
    var matchedCountryData = getCountryTwoLetterIsoCode("Australia")[0];
    
    // Get the second column value (alpha-2).
    var twoLetterIsoCode = matchedCountryData[0][1];
    

    Conclusion

    Both examples have demonstrated different ways of returning the row values of a search term. The two key lines of code that allow us to do this are:

    // Get the last column so we can use for the row range.
    var rowLastColumn = countryDataSheet.getLastColumn();
    
    // Get all values for the row.
    var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();
    
  • When building any application, the last thing on any developer's mind is how a build will impact the environment. After all, an application relies on some form of hosting infrastructure - servers, databases, firewalls, switches, routers, cooling systems, etc. The efficiency of how all these pieces of hardware combined are powered to host your application never comes into question.

    We are fast becoming aware, more than ever before, that what we do day-to-day has an impact on the environment and are more inclined to take appropriate steps in changing our behaviour to reduce our carbon footprint. However, our behaviour remains unchanged when it comes to our online habits.

    Every time a website is visited, a request is made to a server to serve content to the user. For a single user, this in itself uses a nominal amount of power. But when you take hundreds or even thousands of visitors into consideration, the power required quickly mounts up, causing more carbon dioxide to be emitted. Of course, this all depends on how efficiently you build your website - for example, reducing unnecessary calls to the database and making effective use of caching.

    From a digital standpoint, energy is perceived as an infinite commodity with little regard for its carbon footprint.

    Interestingly, Microsoft experimented with developing a self-sufficient, shipping-container-sized underwater data centre on the seafloor near Scotland’s Orkney Islands in a two-year trial that ended in 2020. It proved that underwater data centres are feasible, and both environmentally and economically practical. The consistently cool temperature of the sea allows data centres to be energy-efficient without tapping into freshwater resources. An impressive feat of engineering.

    Microsoft Underwater Data Center near Scotland’s Orkney Islands

    Analysing Site Emissions

    I thought it would be a fun exercise to see how my website fares from an environmental perspective. It's probably not the most ideal time to carry this out as I've only just recently rebuilt my site. But here we go...

    There are two websites I am using to analyse how my website fares from an environmental perspective:

    • Website Carbon Calculator
    • Digital Beacon

    These tools are separate entities and use their own algorithms to determine how environmentally friendly a website is. Even though they both use datasets provided by The Green Web Foundation, it is expected that the numbers these two tools report will differ.

    Website Carbon Calculator

    Website Carbon Calculator states my website is 95% cleaner than other web pages tested, produces 0.05kg of CO2 whenever someone visits a page and (most importantly) is running on sustainable energy. All good!

    Website Carbon Calculator Results

    The full report can be seen here.

    Digital Beacon

    Digital Beacon allows me to delve further into more granular stats on how the size of specific page elements has an effect on CO2 emissions on my website, such as JavaScript, images and third-party assets.

    Digital Beacon Results

    This tool has rated my website as "amazing" when it comes to its carbon footprint. The page breakdown report highlights that there is still room for improvement in the Script and Image areas.

    The full report can be seen here.

    Examples of Low Carbon Websites

    Lowwwcarbon.com showcases low-carbon web design and development. I am hoping, in time, more websites will be submitted and added to their list as great examples that sustainable development doesn't limit how you build websites.

    I am proud to have this very website added to the list. It's all the more reason to focus on ensuring my website is climate friendly on an ongoing basis.

    Lowwwcarbon.com - www.surinderbhomra.com submission

    Final Thoughts

    There are well over 1 billion websites in the world. Just imagine for a moment if even 0.01% of these websites took pre-emptive steps on an ongoing basis to ensure their pages load efficiently - this would make quite the difference in combatting CO2 emissions. I'm not stating that this alone will single-handedly combat climate change, but it would be a start.

    Not all hosting companies will have the investment to make their infrastructure environmentally friendly and trial alternatives on a similar scale as Microsoft has done. We as developers need to change our mindset on how we build our applications and have the environmental implications at the forefront of our minds. It's all too easy to develop things out of thin air and see results. The change will have to start at code level.


  • It's not often you stumble across a piece of code written around nine or ten years ago that brings back fond memories. For me, it's a jQuery countdown timer I wrote for a quiz in a Sky project called The British at my current workplace, Syndicut.

    It is only now, all these years later, that I've decided to share the code for old times' sake (after a little sprucing up).

    This countdown timer was originally used in quiz questions where the user had a set time limit to correctly answer a set of multiple-choice questions as quickly as possible. The longer they took to respond, the fewer points they received for that question.

    If the selected answer was correct, the countdown stopped and the number of points earned and time taken to select the answer was displayed.

    Demonstration of the countdown timer in action:

    Quiz Countdown Demo

    Of course, the version used in the project was a lot more polished.

    Code

    JavaScript

    const Timer = {
        ClockPaused: false,
        TimerStart: 10,
        StartTime: null,
        TimeRemaining: 0,
        EndTime: null,
        HtmlContainer: null,
    
        "Start": function(htmlCountdown) {
            Timer.StartTime = (new Date()).getTime() - 0;
            Timer.EndTime = (new Date()).getTime() + Timer.TimerStart * 1000;
    
            Timer.HtmlContainer = $(htmlCountdown);
    				
            // Ensure any added styles have been reset.
            Timer.HtmlContainer.removeAttr("style");
    
            Timer.DisplayCountdown();
            
            // Ensure message is cleared for when the countdown may have been reset.
            $("#message").html("");     
            
            // Show/hide the appropriate buttons.
            $("#btn-stop-timer").show();
            $("#btn-start-timer").hide();
            $("#btn-reset-timer").hide();
        },
        "DisplayCountdown": function() {
            if (Timer.ClockPaused) {
                return true;
            }
    
            Timer.TimeRemaining = (Timer.EndTime - (new Date()).getTime()) / 1000;
    
            if (Timer.TimeRemaining < 0) {
                Timer.TimeRemaining = 0;
            }
    
            //Display countdown value in page.
            Timer.HtmlContainer.html(Timer.TimeRemaining.toFixed(2));
    
            //Calculate percentage to append different text colours.
            const remainingPercent = Timer.TimeRemaining / Timer.TimerStart * 100;
            if (remainingPercent < 15) {
                Timer.HtmlContainer.css("color", "Red");
            } else if (remainingPercent < 51) {
                Timer.HtmlContainer.css("color", "Orange");
            }
    
            if (Timer.TimeRemaining > 0 && !Timer.ClockPaused) {
                setTimeout(function() {
                    Timer.DisplayCountdown();
                }, 100);
            } 
            else if (!Timer.ClockPaused) {
                Timer.TimesUp();
            }
        },
        "Stop" : function() {
            Timer.ClockPaused = true;
            
            const timeTaken = Timer.TimerStart - Timer.TimeRemaining;
            
            $("#message").html("Your time: " + timeTaken.toFixed(2));
            
            // Show/hide the appropriate buttons.        
            $("#btn-stop-timer").hide();
            $("#btn-reset-timer").show();
        },
        "TimesUp" : function() {
            $("#btn-stop-timer").hide();
            $("#btn-reset-timer").show();
            
            $("#message").html("Times up!");        
        }
    };
    
    $(document).ready(function () {
        $("#btn-start-timer").click(function () {
        	Timer.Start("#timer");
        });
        
        $("#btn-reset-timer").click(function () {
        	Timer.ClockPaused = false;
        	Timer.Start("#timer");
        });
        
        $("#btn-stop-timer").click(function () {
            Timer.Stop();
        });
    });
    

    HTML

    <div id="container">
      <div id="timer">
        -.--
      </div>
      <br />
      <div id="message"></div>
      <br />  
      <button id="btn-start-timer">Start Countdown</button>
      <button id="btn-stop-timer" style="display:none">Stop Countdown</button>
      <button id="btn-reset-timer" style="display:none">Reset Countdown</button>
    </div>
    

    Final Thoughts

    When looking over this code after all these years with fresh eyes, the jQuery library is no longer a fixed requirement. It could just as easily be re-written in vanilla JavaScript. But if I did that, it would be to the detriment of nostalgia.
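
    For the curious, a minimal vanilla JavaScript sketch of the core countdown (assuming the same markup as above, and leaving out the stop/reset handling) might look something like this:

    const timerElement = document.querySelector("#timer");
    const messageElement = document.querySelector("#message");
    const timerStart = 10; // Countdown length in seconds.
    let endTime = 0;

    function displayCountdown() {
        let timeRemaining = (endTime - Date.now()) / 1000;

        if (timeRemaining < 0) {
            timeRemaining = 0;
        }

        // Display countdown value in page.
        timerElement.textContent = timeRemaining.toFixed(2);

        if (timeRemaining > 0) {
            setTimeout(displayCountdown, 100);
        }
        else {
            messageElement.textContent = "Time's up!";
        }
    }

    document.querySelector("#btn-start-timer").addEventListener("click", function () {
        endTime = Date.now() + timerStart * 1000;
        displayCountdown();
    });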

    A demonstration can be seen on my jsFiddle account.

  • If you haven't noticed (and I hope you have), back in June I finally released an update to my website to look more pleasing to the eye. This has been a long time coming after being on the back-burner for a few years.

    Embarrassingly, I’ve always stated in my many year-in-review posts that I planned on redeveloping this site over the coming year, but it never came to fruition. This is partly down to time and deciding to make content a priority. If I’m honest, it’s mostly down to lacking the skills and patience to carry out the front-end development work.

    Thankfully, I managed to knuckle down and learnt enough about HTML and CSS to get the site where it currently stands, with the help of Tailwind CSS and an open-source base template acting as a good starting point for a novice front-end developer.

    Tailwind CSS

    Very early on, I knew the only hope I had of giving this site a new look was to use a front-end framework like Tailwind CSS, requiring a minimal learning curve to produce quick results. It’s definitely not a front-end framework to be sniffed at, as more than 260,000 developers have used it for their design systems. So it’s a framework that is here to stay - a worthwhile investment to learn.

    Tailwind CSS is predominantly a CSS framework consisting of predefined classes to build websites directly within the markup without having to write a single line of custom CSS.
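
    As a simple illustration (not markup from this site), a card component styled purely with Tailwind's utility classes might look like this:

    <div class="mx-auto max-w-sm rounded-lg bg-white p-6 shadow-md">
      <h2 class="text-xl font-bold text-gray-900">Hello, Tailwind</h2>
      <p class="mt-2 text-sm text-gray-600">Styled without writing a single line of custom CSS.</p>
    </div>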

    As you’re styling directly within the markup, at first glance it can be overwhelming, especially where multiple classes need to be declared on a single HTML block. A vast difference when compared to the cleanliness of builds carried out by the very skilful team from where I work.

    It’s a small trade-off in an otherwise solid framework that gives substantial benefits in productivity, primarily because Tailwind CSS classes aren’t very specific and give a high level of customisability without you having to concoct CSS styles.

    Even though there are many utility classes to get acquainted with, once you have an understanding of the core concepts, front-end builds become less of an uphill battle. Through rebuilding my site, I managed to quite quickly get familiarity with creating different layouts based on viewport size and modifying margins and padding.

    I found it to be a very modular and component-driven framework, helping avoid repetition. There are UI kits on the market that give good examples of the power of Tailwind CSS and can be used to help speed up development.

    Using Tailwind CSS took away my fear of front-end development without having to think about Bootstrap, BEM, SASS mix-ins, custom utility classes, purge processing, etc.

    Base Template

    I gave myself a 3-week target (not full-time) to get the new site released and this couldn't have been done without getting a head start from a base theme. I found an open-source template built by Timothy Lin on Tailwind Awesome website that suited my key requirements:

    • Clean
    • Simple
    • Elegant
    • Maintainable
    • Easily customisable

    Another developer by the name of Leo developed a variation of this already great template which I felt met my requirements down to a tee.

    Even though the template code-base was developed in Next.js, this did not matter as I could easily migrate the Tailwind markup into my Gatsby JS project. Getting Tailwind set up for Gatsby took a little tinkering to get right and to ensure the generated CSS footprint was kept relatively small.
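
    For those attempting the same, the gist of the setup (a sketch - exact file contents will vary by Tailwind and Gatsby versions) is installing the gatsby-plugin-postcss plugin and pointing Tailwind at your source files so unused classes are stripped from the generated CSS:

    // tailwind.config.js
    module.exports = {
      content: ["./src/**/*.{js,jsx,ts,tsx}"],
      theme: {
        extend: {},
      },
      plugins: [],
    };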

    As you can see from the new site build, I was able to make further modifications to suit my requirements. This in itself is a testament to the original template build quality and the power of Tailwind CSS.

    Improvements

    As well as changing the look of my site, I thought it would be an opportune time to make a few other small enhancements.

    Google Ads

    Removing Google Ads had been at the forefront of my mind ever since I moved over to Netlify to host my website. Previously, it was a way to contribute to the yearly hosting cost. Now, this is no longer of any relevance (as I'm on Netlify's free hosting plan), especially when weighing the meagre monetary return against improving the overall look and load times of the site.

    In its place, I have a Buy Me A Coffee profile for those who would like to support the content I write.

    Updated Version of Gatsby JS

    It seemed natural to upgrade the version of Gatsby JS from version 2 to 4 during the reworking of my site to keep up-to-date with the latest changes and remove any deprecated code.

    Upgrading from version 2 to 4 took a little longer than I'd hoped as other elements required updating such as Node and NPM packages. This resulted in a lot of breaking changes within my code-base that I had to rectify.

    The process was arduous but worth doing as I found site builds in Netlify reduced significantly.

    Gatsby Build Caching

    I briefly spoke about improved Netlify build times (above) due to efficiencies in code changes relating to upgrading to Gatsby 4. There is one more string to my bow to aid further build efficiencies: installing the netlify-plugin-gatsby-cache plugin within Netlify - a one-click install.

    I highly recommend that everyone who has a Gatsby site installs this plugin, as it instantly reduces build times. For a website like my own that houses over 300 posts, the build minutes do start to add up.
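
    If you prefer file-based configuration over the one-click install, the plugin can also be declared in your site's netlify.toml (a sketch of the equivalent setup):

    [[plugins]]
      package = "netlify-plugin-gatsby-cache"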

    Features Yet To Be Implemented

    Even though the new version of my site is live, there are features I still plan on implementing.

    Algolia Site Search

    As part of getting a new version of my site released in such a short period, I had to focus on the core areas and everything else was secondary. One of the features that didn’t make the cut was the site search using Algolia.

    I do plan on reinstating the site search feature at some point as I found it helpful for me to search through my older posts and surprisingly (based on the stats) visitors to the site also made use of it.

    Short-Form Content

    I like the idea of posting smaller pieces of content that don't have to result in very lengthy written blog posts. I'm not sure what I will call this new section; only two names come to mind: "Short-form" or "Bytesize". It could consist of the following types of content:

    • Small, concise code snippets.
    • Links to content I found useful online that could be useful in certain technical use-cases.
    • Book recommendations.
    • Quotes.
    • Thoughts on news articles - John Gruber style!

    At one point, I wrote blog posts I categorised as Quick Tips, which to this date consists of a mere four posts that I never added to. I think the naming of this category wasn't quite right.

    I see this section functioning in a similar fashion to Marco Heine's Today I Learned.

    My Bookmarks

    I like the idea of having a single page with a bunch of links to useful sites I keep going back to. They could be sites that you have never come across before, which is all the more reason to share them.

    Closing Thoughts

    I normally find a full-site rebuild quite trying at times. This time was different, and there were two reasons for this.

    Firstly, I had already built the site in Gatsby JS, so the rebuild involved minimal code changes, even when taking into consideration the changes needed to update to version 4. Secondly, using Tailwind CSS as a front-end framework was a very rewarding experience, especially when page builds come to fruition with such a quick turnaround.

    I hope you find the new design is more aesthetically pleasing and makes reading through blog posts a more enjoyable experience.

  • A couple of weeks ago, I encountered an issue where, for no apparent reason, one of my regularly running Azure Functions stopped. One of the first things I do when such an event arises is to manually run the function through the "Code + Test" interface to give me an idea of the severity of the issue and whether I can replicate the error reported within Application Insights.

    It was only then I noticed the following message displayed above my list of functions - one that had never been present before:


    Your app is currently in read only mode because you are running from a package file. To make any changes update the content in your zip file and WEBSITE_RUN_FROM_PACKAGE app setting.

    I was unable to even start the Azure Function manually. I can't be too sure whether the above message was the reason for this; it just seemed too coincidental that everything came to a halt based on a message I'd never seen before. However, in one of Christos Matskas' blog posts, he writes:

    Once our Function code is deployed, we can navigate to the Azure portal and test that everything’s working as expected. In the Azure Function app, we select the function that we just deployed and choose the Code+Test tab. Our Function is read-only as the code was pre-compiled but we can test it by selecting the Test/Run menu option and hitting the Run button.

    So an Azure Function in read-only mode should have no trouble running as normal.

    One of the main suggestions I found was to remove the WEBSITE_RUN_FROM_PACKAGE configuration setting or update its value to "0". This had no effect in getting the function back to working form.
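
    For reference, this app setting can be changed through the portal or via the Azure CLI; a sketch of the latter, where the app and resource group names are placeholders:

    az functionapp config appsettings set \
      --name my-function-app \
      --resource-group my-resource-group \
      --settings WEBSITE_RUN_FROM_PACKAGE=0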

    Suspected Cause

    I believe there is a bigger underlying issue at play here as having an Azure Function in read-only mode should not affect the overall running of the function itself. I suspect it's from when I published some updates using the latest version of Visual Studio (2022), where there might have been a conflict in publish profile settings set by an older version. All previous releases were made by the 2019 edition.

    Solution

    It would seem my solution differs from the similar recommended solutions I've seen online. In my case, all I had to do was recreate my publish profile in Visual Studio and untick the "Run from package file (recommended)" checkbox.

    Visual Studio 2022 - Azure Function Publish Profile

    When a republish was carried out using the updated publish profile, the Azure Function functioned as normal.

    Conclusion

    This post primarily demonstrates how a new publish profile may need to be created when using a newer version of Visual Studio, as well as an approach to removing the "read-only" state from an Azure Function.

    However, I have to highlight that it's recommended to use "run from package" as it provides several benefits, such as:

    • Reduces the risk of file copy locking issues.
    • Can be deployed to a production app (with restart).
    • You can be certain of the files that are running in your app.
    • May reduce cold-start times.

    I do plan on looking into this further, as any time I attempted to "run from package", none of my Azure Functions ran. My next course of action is to create a fresh Azure Function instance within the Azure Portal to see if that makes any difference.

  • I've started watching a very interesting Netflix series called "The Future Of" - a documentary series exploring how new developments in technology and other innovations will change our lives in the future.

    The episode that caught my attention broached the subject of "Life After Death" and spoke about how through holograms and voice cloning, the approach to how we come to terms with death and say goodbye to our loved ones on passing will change. Some ideas presented were inspired, others not so much.

    Strangely enough, the points raised resonated with me. Probably because I've put great importance on preserving memories through photos and ensuring I will always have these throughout my lifetime to look back on and act as a time capsule about my family for generations after me.

    Life and Our Social Digital Footprint

    After death, we not only leave behind our loved ones but also a big trail of data! If you could put a number on the amount of data we collect (including collected about us unknowingly) over a lifetime, what would it amount to?

    Research conducted in 2016 by Northeastern University estimated that 1.7 MB of data is created every second per person. That equates to 146,880 MB per day! Over a lifetime... I dare not calculate. It is these digital exhaust fumes we all produce that will remain long after we're gone.

    The biggest chunk of this is taken up through the use of social media, sucking up artefacts about us like a vacuum. To put this into some perspective, social media collects so much data, they can’t remember all the ways they surveil us.

    It’s crazy to think the amount of data we’re openly willing to share with the likes of Facebook, Instagram and TikTok. I sometimes think about what the next generation will think when it is my turn to move on to the "next life" and they see the things I've posted (or the lack of them!) on these platforms. Will this be considered a correct representation of me as a person?

    Death and Ownership of Data

    What happens to all this data after death? - This very important point was brought up in the episode.

    Thankfully, not all is doom and gloom when it comes to having control of a deceased online profile. The majority of online platforms allow the profile to be removed or memorialised with the consent of an immediate family member. For this to be actioned, a death certificate of the deceased and proof you are the next of kin will need to be provided.

    Unfortunately, this will not always be the status quo. There will always be a question of just how many people are aware this is an option and are able to carry it out. Regardless of whether profiles are claimed or not, deceased profiles will accumulate on a daily basis.

    Social media platforms are getting wise to this impending scenario, especially when you have the likes of Facebook and their future battle with the dead. According to academics from the University of Oxford, within the next 50 years, the dead could outnumber the living. Other online platforms could find themselves facing a similar fate.

    They will need a strategy in place to make their online presence financially viable to investors for when the ad revenue dries up and will have to be creative in how this data can be of use.

    The episode highlighted that your data will most likely belong to the social media platform to do with as they please for their own monetary gain. For example, in the future, the avatar of a deceased member of your family could be used to advertise a product that they would never have supported in real life - something you could only imagine from the writers of Black Mirror.

    What about self-managed data like a personal website, photos and files stored on a computer or NAS? Unfortunately, this is where things get a little tricky as someone technical will have to be entrusted to keep things going. This has put into question all my efforts to create a time capsule storing all important photos of family past and present, painstakingly organised securely within my NAS. What will become of this?

    Conclusion

    I feel this post doesn’t lead to any conclusion and has probably raised more questions than answers - thoughts on a subject that leaves a great impression on me.

    It's an irrefutable fact that technology is changing the way we die, and for the living, changing the way they deal with death. Future generations will find it easier and more accessible than ever to know who their ancestors were - if they felt inclined to find out.

    Could it be considered narcissistic to invest so much effort in ensuring our digital footprint is handled in the manner we would like the next generation to see after we die? A form of digital immortality. Depends on the length you want to go to ensure how you're remembered.

    I end this post paraphrasing an apt quote from Professor Charles Isbell (featured in the episode):

    If you perceive immortality as nothing more than your great-great-grandchildren knowing your name, the type of person you were and the values you held, this is all anyone could ever ask for.

  • Whenever there is a need to restructure an Excel spreadsheet to an acceptable form to be used for a SaaS platform or custom application, my first inclination is to build something in C# to get the spreadsheet into a form I require.

    This week I felt adventurous and decided to break the mundane job of formatting a spreadsheet using an approach I've been reading up on for some time but just never got a chance to apply in a real-world scenario - Google App Scripts.

    What Is A Google App Script?

    Released in 2009, Google App Scripts is a cloud-based platform that allows you to automate tasks across Google Workspace products such as Drive, Docs, Sheets, Calendar, Gmail, etc. You could think of App Scripts as similar to writing a macro in Microsoft Office. They both can automate repeatable tasks and extend the standard features of the application.

    The great thing about Google App Script development is being able to use popular web languages (HTML/CSS/JavaScript) to build something custom. Refreshing when compared to the more archaic option of using VBA in Microsoft Office.

    Some really impressive things can be achieved using App Scripts within the Google ecosystem.

    Google Sheets App Script

    The Google App Script I wrote fulfils the job of taking the contents of cells in a row from one spreadsheet to be copied into another. The aim is to carry out automated field mapping, where the script would iterate through each row from the source spreadsheet and create a new row in the target spreadsheet where the cell value would be placed in a different column.

    This example will demonstrate a very simple approach where the source spreadsheet will contain five columns where each row contains numbers in ascending order to then be copied to the target spreadsheet in descending order.

    Before we add the script, we need to create two spreadsheets:

    • Source sheet: Source - Numbers Ascending
    • Target sheet: Destination - Numbers Descending

    The source sheet should mirror the same structure as the screenshot (below) illustrates.

    Google Sheet - Source

    The target sheet just needs to contain the column headers.

    The App Script can be created by:

    1. Navigating to Extensions > App Scripts from the toolbar. This will open a new tab presenting an interface to manage our scripts.
    2. In the "Files" area, press the "+" and select "Script".
    3. Name the script file: "export-cells-demo.gs".

    Add the following code:

    // Initialiser.
    function run() {
      sendDataToDestinationSpreadSheet();
    }
    
    // Copies values from a source spreadsheet to the target spreadsheet.
    function sendDataToDestinationSpreadSheet() {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
    
      // Get source spreadsheet by its name.
      var sourceSheet = activeSheet.getSheetByName('Source - Numbers Ascending');
    
      // Select the source spreadsheet cells.
      var sourceColumnRange = sourceSheet.getRange('A:E');
      var sourceColumnValues = sourceColumnRange.getValues();
    
      // Get the target spreadsheet by its name.
      var targetSheet = activeSheet.getSheetByName('Destination - Numbers Descending');
    
      // Iterate through all rows from the source sheet.
      // Start index at 1 to ignore the column header.
      for(var i = 1; i < sourceColumnValues.length; i++) {
        // Get the cell value for the row.
        var column1 = sourceColumnValues[i][0];
        var column2 = sourceColumnValues[i][1];
        var column3 = sourceColumnValues[i][2];
        var column4 = sourceColumnValues[i][3];
        var column5 = sourceColumnValues[i][4];
        
        // Use getRange() to get the value position by declaring the row and column number.
        // Use setValue() to copy the value into target spreadsheet column.
        targetSheet.getRange(i+1, 1).setValue(column5);
        targetSheet.getRange(i+1, 2).setValue(column4);
        targetSheet.getRange(i+1, 3).setValue(column3);
        targetSheet.getRange(i+1, 4).setValue(column2);
        targetSheet.getRange(i+1, 5).setValue(column1);
      }
    }
    

    The majority of this script should be self-explanatory with the aid of comments. The only part that requires further explanation is where the values in the target sheet are set, as this is where we insert the numbers for each row in descending order:

    ...
    ...
    targetSheet.getRange(i+1, 1).setValue(column5);
    targetSheet.getRange(i+1, 2).setValue(column4);
    targetSheet.getRange(i+1, 3).setValue(column3);
    targetSheet.getRange(i+1, 4).setValue(column2);
    targetSheet.getRange(i+1, 5).setValue(column1);
    ...
    ...
    

    The getRange function accepts two parameters: Row Number and Column Number. In this case, the row number is acquired from the for loop index as we're using the same row position in both source and target sheets. However, we want to change the position of the columns in order to display numbers in descending order. To do this, I set the first column in the target sheet to contain the value of the last column from the source sheet and carried on from there.
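
    As a side note, the five setValue() calls per row could be collapsed into a single setValues() write, which is generally faster in App Scripts as it reduces round trips to the spreadsheet. A sketch of the equivalent loop body:

    // Reverse the row's columns and write them to the target sheet in one call.
    var reversedRow = sourceColumnValues[i].slice().reverse();
    targetSheet.getRange(i + 1, 1, 1, reversedRow.length).setValues([reversedRow]);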

    All that needs to be done now is to run the script by selecting our "run()" function from the App Scripts toolbar and pressing the "Run" button.

    The target spreadsheet should now contain the numbered values for each row in descending order.

    Google Sheet - Target

    Voila! You've just created your first Google App Script in Google Sheets with simple field mapping.

    Conclusion

    Creating my first Google App Script in a real-world scenario to carry out some data manipulation has opened my eyes to the possibilities of what can be achieved without investing additional time developing something like a Console App to do the very same thing.

    There is a slight learning curve involved in understanding the key functions required to carry out certain tasks, but this is easily overcome with a bit of Googling and reading through the documentation.

    My journey into Google App Scripts has only just begun and I look forward to seeing what else it has to offer!