My New Process For Dynamically Generating Social Share Images

I’ll be the first to admit that I very rarely (if at all!) assign a nice pretty share image to any post that gets shared on social networks. Maybe it’s because I hardly post what I write to social media in the first place! :-) Nevertheless, this isn’t the right attitude. If I am really going to do this, then the whole process needs to be quick and render a share image that sets the tone and will hopefully entice a potential reader to click on my post.

I started delving into how my favourite developer site, dev.to, manages to create these really simple text-based share images dynamically. They have a pretty good setup, as they’ve somehow managed to generate a share image that neatly contains relevant post-related information, such as:

  • Post title
  • Date
  • Author
  • Related Tech Stack Icons

For those who are as nosey as I am and want to know how dev.to implements such functionality, they have kindly written the following post - How dev.to dynamically generates social images.

Since my website is built using the Gatsby framework, I prefer to use a local process to dynamically generate a social image without the need to rely on another third-party service. What's the point in using a third-party service to do everything for you when it’s more fun to build something yourself?

I envisaged a process that would allow me to pass the URL of one of my blog posts to a script, which in turn would render a social image containing basic information about that post.

Intro Into Puppeteer

Whilst doing some Googling, one tool kept cropping up in different forms and uses - Puppeteer. Puppeteer is a Node.js library maintained by the Google Chrome development team that enables us to control any DevTools Protocol-based browser through scripts. These scripts can programmatically execute a variety of actions that you would generally do in a browser.

To give you a bit of an insight into the actions Puppeteer can carry out, check out this GitHub repo. Here you can see Puppeteer is a very useful tool for testing, scraping and automating tasks on web pages. The part I spent most of my time understanding was its webpage screenshot feature.

To use Puppeteer, you will first need to install the library package, of which two options are available:

  • Puppeteer Core
  • Puppeteer

Puppeteer Core is the lighter-weight package that can interact with any DevTools-based browser you already have installed.

npm install puppeteer-core

Then there is the full package, which also downloads the most recent version of Chromium into the node_modules directory of your project.

npm install puppeteer

I opted for the full package just to ensure I have the most compatible version of Chromium for running Puppeteer.

Puppeteer Webpage Screenshot Script

With Puppeteer installed, I wrote a script and added it to the root of my Gatsby site. The script carries out the following:

  • Accepts a single argument containing the URL of a webpage. This will be the page presenting my blog post information in a share format - all will become clear in the next section.
  • Screenshots a cropped region of the webpage - in this case, 840px x 420px, the exact size of my share image.
  • Uses the page name in the URL as the image file name.
  • Stores the screenshot in my "Social Share" media directory.

const puppeteer = require('puppeteer');

// If an argument is not provided containing a website URL, end the task.
if (process.argv.length !== 3) {
  console.log("Please provide a single argument containing a website URL.");
  process.exit(1);
}

const pageUrl = process.argv[2];

// Crop the screenshot to the exact share image dimensions and name the
// file after the last segment of the URL.
const options = {
  path: `./static/media/Blog/Social Share/${pageUrl.substring(pageUrl.lastIndexOf('/') + 1)}.jpg`,
  fullPage: false,
  clip: {
    x: 0,
    y: 0,
    width: 840,
    height: 420
  }
};

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800, deviceScaleFactor: 1.5 });
  await page.goto(pageUrl);
  await page.screenshot(options);
  await browser.close();
})();

The script can be run like so:

node puppeteer-screenshot.js http://localhost:8000/socialcard/Blog/2020/07/25/Using-Instagram-API-To-Output-Profile-Photos-In-ASPNET-2020-Edition

I made an addition to my Gatsby project to generate a social share page for every blog post, where the URL path is prefixed with /socialcard. These share pages are only generated when running in development mode, roughly as sketched below.
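For anyone curious how that can be wired up, here is a minimal sketch of the gatsby-node.js logic - not my exact implementation, and the template path and GraphQL field names (allMarkdownRemark, fields.slug, social-card.js) are assumptions:

// gatsby-node.js: create a /socialcard page per blog post, in development only.
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;

  // Skip social card pages entirely for production builds.
  if (process.env.NODE_ENV !== 'development') {
    return;
  }

  // Pull the slug of every markdown-sourced blog post.
  const result = await graphql(`
    {
      allMarkdownRemark {
        nodes {
          fields {
            slug
          }
        }
      }
    }
  `);

  result.data.allMarkdownRemark.nodes.forEach(({ fields }) => {
    createPage({
      path: `/socialcard${fields.slug}`,
      component: require.resolve('./src/templates/social-card.js'),
      context: { slug: fields.slug },
    });
  });
};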

Social Share Page

Now that we have our Puppeteer script, all that needs to be done is to create a nice-looking visual for Puppeteer to convert into an image. I wanted some form of automation where the blog post information is populated automatically.

I’m starting off with a very simple layout taking some inspiration from dev.to and outputting the following information:

  • Title
  • Date
  • Tags
  • Read time

Working with HTML and CSS isn’t exactly my forte. Luckily for me, I just needed to do enough to make the share image look presentable.

Social Card Page

You can view the HTML and CSS on JSFiddle. Feel free to update and make it better! If you do make any improvements, update the JSFiddle and let me know!
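For those who would rather not open the JSFiddle, the card boils down to something along these lines (a simplified sketch, not the exact markup):

<div class="social-card">
  <h1>Post Title</h1>
  <div class="meta">
    <span>25 July 2020</span>
    <span>#aspnet #api</span>
    <span>5 min read</span>
  </div>
</div>

<style>
  /* Match the 840px x 420px crop taken by the Puppeteer script. */
  .social-card {
    width: 840px;
    height: 420px;
    padding: 40px;
    box-sizing: border-box;
    display: flex;
    flex-direction: column;
    justify-content: space-between;
    font-family: sans-serif;
  }
</style>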

Next Steps

I plan on adding some additional functionality allowing a blog post teaser image (if one is added) to be used as a background to make things look a little more interesting. At the moment, the share image is very plain. As you can tell, I keep things really simple, as design isn’t my strongest area. :-)

If all goes to plan, when I share this post to Twitter you should see my newly generated share image.

Journey To GatsbyJS: Exporting Kentico Blog Posts To Markdown Files

The first thing that came into my head when testing the waters to start the process of moving over to Gatsby was my blog post content. If I could get my content into a form a Gatsby site accepts, then that's half the battle won right there - the theory being it will simplify the build process.

I opted to go down the local storage route, where Gatsby would serve markdown files for my blog post content. Everything else, such as the homepage, archive, about and contact pages, can be static. I am hoping this isn’t something I will live to regret, but I like the idea of my content being nicely preserved in source control, where I have full ownership without relying on a third-party platform.

My site is currently built on the .NET framework using Kentico CMS. Exporting data is relatively straightforward, but as I transition to an approach without content management, I need to ensure all fields used within my blog posts are transformed appropriately into the core building blocks of my markdown files.

A markdown file can carry additional field information about my post, declared at the start of the file in a block wrapped by triple dashes. This is called frontmatter.

Here is a snippet of one of my blog posts exported to a markdown file:

---
title: "Maldives and Vilamendhoo Island Resort"
summary: "At Vilamendhoo Island Resort you are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it."
date: "2019-09-21T14:51:37Z"
draft: false
slug: "/Maldives-and-Vilamendhoo-Island-Resort"
disqusId: "b08afeae-a825-446f-b448-8a9cae16f37a"
teaserImage: "/media/Blog/Travel/VilamendhooSunset.jpg"
socialImage: "/media/Blog/Travel/VilamendhooShoreline.jpg"
categories: ["Surinder's Log"]
tags: ["holiday", "maldives"]
---

Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!) which seem to start when I went to [Bali last year](/Blog/2018/07/06/My-Time-At-Melia-Bali-Hotel). 
I find it's a way to pass the time in airports and flights when making the return journey home. So here's another one...

Everything looks well structured, and from the way I have formatted the date, category and tags fields, it will accommodate the needs of future posts. I made the decision to keep the slug value void of any directory structure to give me the flexibility of dynamically creating a URL structure.

Kentico Blog Posts to Markdown Exporter

The quickest way to get the content out was to create a console app to carry out the following:

  1. Loop through all blog posts in descending post date order.
  2. Update all image paths used in the teaser and within the content.
  3. Convert rich text into markdown.
  4. Construct frontmatter key-value fields.
  5. Output to a text file with the following naming convention: “yyyy-MM-dd---Post-Title.md”.

Tasks 2 and 3 will require the most effort…

When I first started using Kentico, all references to images were made directly via the file path; as I got more familiar with Kentico, this was changed to use permanent URLs. Using permanent URLs caused the link to an image to change from "/Surinder/media/Surinder/myimage.jpg" to "/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500". I need to create additional checks to find these URLs and transform them into a new path.

Finding a good .NET markdown converter is imperative. Without this, there is a high chance the rich text content would not be translated to a satisfactory standard, requiring some form of manual intervention to carry out corrections. Combing through 250 posts manually isn’t my idea of fun! :-)

I found the ReverseMarkdown .NET library provided enough options to deal with the rich text to markdown conversion. I could configure the conversion process to pass through any HTML that couldn’t be transformed, thus preserving content.

Code

using CMS.DataEngine;
using CMS.DocumentEngine;
using CMS.Helpers;
using CMS.MediaLibrary;
using Export.BlogPosts.Models;
using ReverseMarkdown;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

namespace Export.BlogPosts
{
    class Program
    {
        public const string SiteName = "SurinderBhomra";
        public const string MarkdownFilesOutputPath = @"C:\Temp\BlogPosts\";
        public const string NewMediaBaseFolder = "/media";
        public const string CloudImageServiceUrl = "https://xxxx.cloudimg.io";

        static void Main(string[] args)
        {
            CMSApplication.Init();

            List<BlogPost> blogPosts = GetBlogPosts();

            if (blogPosts.Any())
            {
                foreach (BlogPost bp in blogPosts)
                {
                    bool isMDFileGenerated = CreateMDFile(bp);

                    Console.WriteLine($"{bp.PostDate:yyyy-MM-dd} - {bp.Title} - {(isMDFileGenerated ? "EXPORTED" : "FAILED")}");
                }

                Console.ReadLine();
            }
        }

        /// <summary>
        /// Retrieve all blog posts from Kentico.
        /// </summary>
        /// <returns></returns>
        private static List<BlogPost> GetBlogPosts()
        {
            List<BlogPost> posts = new List<BlogPost>();

            InfoDataSet<TreeNode> query = DocumentHelper.GetDocuments()
                                               .OnSite(SiteName)
                                               .Types("SurinderBhomra.BlogPost")
                                               .Path("/Blog", PathTypeEnum.Children)
                                               .Culture("en-GB")
                                               .CombineWithDefaultCulture()
                                               .NestingLevel(-1)
                                               .Published()
                                               .OrderBy("BlogPostDate DESC")
                                               .TypedResult;

            if (!DataHelper.DataSourceIsEmpty(query))
            {
                foreach (TreeNode blogPost in query)
                {
                    posts.Add(new BlogPost
                    {
                        Guid = blogPost.NodeGUID.ToString(),
                        Title = blogPost.GetStringValue("BlogPostTitle", string.Empty),
                        Summary = blogPost.GetStringValue("BlogPostSummary", string.Empty),
                        Body = RichTextToMarkdown(blogPost.GetStringValue("BlogPostBody", string.Empty)),
                        PostDate = blogPost.GetDateTimeValue("BlogPostDate", DateTime.MinValue),
                        Slug = blogPost.NodeAlias,
                        DisqusId = blogPost.NodeGUID.ToString(),
                        Categories = blogPost.Categories.DisplayNames.Select(c => c.Value.ToString()).ToList(),
                        Tags = blogPost.DocumentTags.Replace("\"", string.Empty).Split(',').Select(t => t.Trim(' ')).Where(t => !string.IsNullOrEmpty(t)).ToList(),
                        SocialImage = GetMediaFilePath(blogPost.GetStringValue("ShareImageUrl", string.Empty)),
                        TeaserImage = GetMediaFilePath(blogPost.GetStringValue("BlogPostTeaser", string.Empty))
                    });
                }
            }

            return posts;
        }

        /// <summary>
        /// Creates the markdown content based on Blog Post data.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static string GenerateMDContent(BlogPost bp)
        {
            StringBuilder mdBuilder = new StringBuilder();

            #region Post Attributes

            mdBuilder.Append($"---{Environment.NewLine}");
            mdBuilder.Append($"title: \"{bp.Title.Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"summary: \"{HTMLHelper.HTMLDecode(bp.Summary).Replace("\"", "\\\"")}\"{Environment.NewLine}");
            mdBuilder.Append($"date: \"{bp.PostDate.ToString("yyyy-MM-ddTHH:mm:ssZ")}\"{Environment.NewLine}");
            mdBuilder.Append($"draft: {bp.IsDraft.ToString().ToLower()}{Environment.NewLine}");
            mdBuilder.Append($"slug: \"/{bp.Slug}\"{Environment.NewLine}");
            mdBuilder.Append($"disqusId: \"{bp.DisqusId}\"{Environment.NewLine}");
            mdBuilder.Append($"teaserImage: \"{bp.TeaserImage}\"{Environment.NewLine}");
            mdBuilder.Append($"socialImage: \"{bp.SocialImage}\"{Environment.NewLine}");

            #region Categories

            if (bp.Categories?.Count > 0)
            {
                CommaDelimitedStringCollection categoriesCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string categoryName in bp.Categories)
                    categoriesCommaDelimited.Add($"\"{categoryName}\"");

                mdBuilder.Append($"categories: [{categoriesCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            #region Tags

            if (bp.Tags?.Count > 0)
            {
                CommaDelimitedStringCollection tagsCommaDelimited = new CommaDelimitedStringCollection();

                foreach (string tagName in bp.Tags)
                    tagsCommaDelimited.Add($"\"{tagName}\"");

                mdBuilder.Append($"tags: [{tagsCommaDelimited.ToString()}]{Environment.NewLine}");
            }

            #endregion

            mdBuilder.Append($"---{Environment.NewLine}{Environment.NewLine}");

            #endregion

            // Add blog post body content.
            mdBuilder.Append(bp.Body);

            return mdBuilder.ToString();
        }

        /// <summary>
        /// Creates files with a .md extension.
        /// </summary>
        /// <param name="bp"></param>
        /// <returns></returns>
        private static bool CreateMDFile(BlogPost bp)
        {
            string markdownContents = GenerateMDContent(bp);

            if (string.IsNullOrEmpty(markdownContents))
                return false;

            string fileName = $"{bp.PostDate:yyyy-MM-dd}---{bp.Slug}.md";
            File.WriteAllText($@"{MarkdownFilesOutputPath}{fileName}", markdownContents);

            if (File.Exists($@"{MarkdownFilesOutputPath}{fileName}"))
                return true;

            return false;
        }

        /// <summary>
        /// Gets the full relative path of an file based on its Permanent URL ID. 
        /// </summary>
        /// <param name="filePath"></param>
        /// <returns></returns>
        private static string GetMediaFilePath(string filePath)
        {
            if (filePath.Contains("getmedia"))
            {
                // Get GUID from file path.
                Match regexFileMatch = Regex.Match(filePath, @"(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}");

                if (regexFileMatch.Success)
                {
                    MediaFileInfo mediaFile = MediaFileInfoProvider.GetMediaFileInfo(Guid.Parse(regexFileMatch.Value), SiteName);

                    if (mediaFile != null)
                        return $"{NewMediaBaseFolder}/{mediaFile.FilePath}";
                }
            }

            // Return the file path and remove the base file path.
            return filePath.Replace("/SurinderBhomra/media/Surinder", NewMediaBaseFolder);
        }

        /// <summary>
        /// Convert parsed rich text value to markdown.
        /// </summary>
        /// <param name="richText"></param>
        /// <returns></returns>
        public static string RichTextToMarkdown(string richText)
        {
            if (!string.IsNullOrEmpty(richText))
            {
                #region Loop through all images and correct the path

                // Clean up tilda's.
                richText = richText.Replace("~/", "/");

                #region Transform Image Url's Using Width Parameter

                Regex regexFileUrlWidth = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})(\?width=([0-9]*))", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexFileUrlWidth.Matches(richText))
                {
                    string width = fileUrl.Groups[4].Success ? fileUrl.Groups[4].Value : string.Empty; // Groups are never null; check Success instead.
                    string newMediaUrl = $"{CloudImageServiceUrl}/width/{width}/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #region Transform Generic File Url's

                Regex regexGenericFileUrl = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})", RegexOptions.Multiline | RegexOptions.IgnoreCase);

                foreach (Match fileUrl in regexGenericFileUrl.Matches(richText))
                {
                    // Construct media URL required by image hosting company - CloudImage. 
                    string newMediaUrl = $"{CloudImageServiceUrl}/cdno/n/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";

                    if (newMediaUrl != string.Empty)
                        richText = richText.Replace(fileUrl.Value, newMediaUrl);
                }

                #endregion

                #endregion

                Config config = new Config
                {
                    UnknownTags = Config.UnknownTagsOption.PassThrough, // Include the unknown tag completely in the result (default as well)
                    GithubFlavored = true, // generate GitHub flavoured markdown, supported for BR, PRE and table tags
                    RemoveComments = true, // will ignore all comments
                    SmartHrefHandling = true // remove markdown output for links where appropriate
                };

                Converter markdownConverter = new Converter(config);

                return markdownConverter.Convert(richText).Replace(@"[!\", @"[!").Replace(@"\]", @"]");
            }

            return string.Empty;
        }

        /// <summary>
        /// Returns media url without query string values.
        /// </summary>
        /// <param name="mediaUrl"></param>
        /// <returns></returns>
        private static string ClearQueryStrings(string mediaUrl)
        {
            if (mediaUrl == null)
                return string.Empty;

            if (mediaUrl.Contains("?"))
                mediaUrl = mediaUrl.Split('?').ToList()[0];

            return mediaUrl.Replace("~", string.Empty);
        }
    }
}

There is a lot going on here, so let's do a quick breakdown:

  1. GetBlogPosts(): Gets all blog posts from Kentico and parses them into a “BlogPost” class object containing all the fields we want to export.
  2. GetMediaFilePath(): Takes an image path and carries out all the transformation required to change it to a new file path. This method is used in the GetBlogPosts() and RichTextToMarkdown() methods.
  3. RichTextToMarkdown(): Takes rich text and goes through a transformation process to relink images in a format accepted by my image hosting provider - CloudImage. In addition, this is where ReverseMarkdown is used to finally convert to markdown.
  4. CreateMDFile(): Creates the .md file based on the blog posts found in Kentico.
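One thing not shown above is the BlogPost model the exporter populates. As a rough sketch - inferred from the fields used in the code, with property types being my assumptions:

using System;
using System.Collections.Generic;

namespace Export.BlogPosts.Models
{
    // Plain data holder for an exported blog post - inferred, not the original class.
    public class BlogPost
    {
        public string Guid { get; set; }
        public string Title { get; set; }
        public string Summary { get; set; }
        public string Body { get; set; }
        public DateTime PostDate { get; set; }
        public string Slug { get; set; }
        public string DisqusId { get; set; }
        public bool IsDraft { get; set; }
        public List<string> Categories { get; set; }
        public List<string> Tags { get; set; }
        public string TeaserImage { get; set; }
        public string SocialImage { get; set; }
    }
}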

Changing EXIF Date and Time In Raw Files

My Fujifilm X100F camera only comes out of hibernation when I go on holiday. Most of the time, I fail to ensure my camera settings are correct before I take the very first snap. This happened on my last holiday to Loch Lomond.

When it came to the job of carrying out some image processing from RAW to JPEG, I noticed all of my photos' EXIF dates were incorrect. I am such a stickler for correct EXIF information, including geolocation wherever possible. EXIF information is so useful for cataloguing when consumed by photo applications, whether on my Synology or uploaded to Google Photos.

Due to the high number of photos with incorrect date stamps, I needed a tool that would automate the correction process. After a bit of Googling, I found an application called exiftool by Phil Harvey that allows the EXIF date/time stamp to be modified using a method described in the documentation called “Shift”.

exiftool has no GUI (graphical user interface) and needs to be run in Terminal on a Mac or the command line for Windows users. The command to use is relatively simple; the only complex thing you will have to do is calculate how many days, months, years, hours, minutes and seconds you need to add or subtract.

In my case, the calculation was a matter of subtracting 3 days from all the photos and the command to do this looks like the following:

exiftool -AllDates-='0:0:3 0:0:0' -m /Volumes/LochLomond

Let's break down the command to get a better understanding of what each part does.

  • exiftool: Runs the application. Ensure your Terminal/command line is run in the same directory where exiftool is housed.
  • AllDates: Modifies all dates in a photo.
  • -=‘0:0:3 0:0:0’: Subtracts 3 days from the photo's EXIF date. If you wanted to add 3 days, use “+=” instead. The date/time format is “<year>:<month>:<day> <hour>:<minute>:<second>”.
  • -m: Ignores minor errors and warnings (as stated in the documentation).
  • /Volumes/LochLomond: Location where all the photos reside.
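As a quick sanity check, exiftool can also read the date tags back, so you can compare values before and after the shift. A quick example (the file name here is a hypothetical Fujifilm RAW file):

exiftool -AllDates /Volumes/LochLomond/DSCF0001.RAF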

When making mass changes to files, it’s always recommended to have a backup of all photos to fall back on if you accidentally mess up the EXIF update. (Handily, exiftool itself keeps a copy of each unmodified file with an “_original” suffix unless told otherwise.)

My Essential iPad Accessories and Applications

Following up on my previous post about the joy that is using my new iPad Air, I thought I’d write about what I deem are essential accessories and applications. It’s only been a couple of weeks since making my purchase, and I have surprisingly found the transition from Android to iOS not too much of a pain. It’s fast becoming part of my daily workflow for creative writing and note-taking.

Here are some applications and accessories I use…

Accessories

Keyboard Case

Apple’s own Smart Keyboard Cover felt very unnatural to use and didn’t provide enough protection for my nice new tablet. The Inateck Keyboard Case is an absolute pleasure to use, and the keys have a very nice responsive rebound. I can literally use this anywhere, and it feels just as stable on my lap as it does on a desk.

The only downside is the connectivity relies on Bluetooth rather than Apple’s own Smart connector which would normally power the keyboard. Nevertheless, the pairing has no latency and the battery lasts weeks even with daily usage.

Apple Pencil

The iPad Air is only compatible with the first generation Pencil, which has a really ridiculous way of charging via the Lightning connector. Apple could quite easily have made the iPad Air work with the second generation Pencil. If the iPad Pro was a cracker, then the second generation pencil would be the caviar.

Regardless of the design, it’s refreshing to scribble away notes to store electronically. Previously, to keep track of my written notes, I would write on paper (how old-fashioned!) and then scan them digitally using Evernote on my phone.

Draw Screen Protector

Writing on glass using the Apple Pencil is a little slippery, and I needed something with a texture that almost simulates the friction you get when writing on paper. There are a handful of screen protectors that provide this with varying degrees of success. The most popular is Paperlike, which I plan on ordering when I’ve worn out my current screen protector.

My current screen protector is a Nillkin and isn’t too bad. It provides adequate protection as well as enough texture, with anti-reflective qualities that don’t hinder screen visibility. Added bonus: a nice light scratchy sound as you'd expect if writing with an old-fashioned pencil!

Applications

I'm deliberately leaving out the most obvious and well-known apps, such as YouTube, Netflix, Gmail, Kindle, Twitter, Spotify etc.

Jump Desktop

I wrote about this very briefly in my previous post. If you want to connect to your laptop/workstation from your iPad, Jump Desktop is your best option. Once you have the application installed on your iPad and host machine, you are up and running in minutes. Judging by past updates, it’s getting better with every release.

Evernote

I don’t think I can speak about Evernote highly enough. I am a premium member, and it is one of my most highly used applications across all platforms. Worth every penny! It organises my notes, scribbles and agendas with little effort.

Evernote is effectively my brain dump of ideas.

My notes have never looked so good thanks to a recent feature - Templates. On creation of a new note, you have the option to select a predefined template from the many Evernote provides in their Template Gallery.

Grammarly

Grammarly is a must for all writers wanting to improve the readability of their content. I only started using Grammarly last year and now can't imagine writing a post without it. On the iPad, Grammarly forms part of the keyboard and carries out checks as you type. This works quite well with my writing workflow when using Evernote.

Autodesk Sketchbook

If the Apple Pencil has done anything for me, it is to allow me to experiment more with what it can do and, in the process, try things I don’t generally do. In this case, sketch! I would be lying if I said Autodesk Sketchbook is the best drawing app out there, as I haven’t used any others. For an app that is free, it has a wide variety of features that will accommodate novices and experts alike.

1.1.1.1

Developed by the team who brought you the Cloudflare CDN infrastructure comes 1.1.1.1, an app providing a faster and more private internet. This is something I always have running in the background as a form of protection when using public hotspots and to stop my ISP from snooping on where I go on the internet.

When compared to other DNS directory services, Cloudflare touts 1.1.1.1 as the fastest. As everything you do on the internet starts with a DNS request, choosing the fastest DNS directory will accelerate the online experience.

Export Changes Between Two Git Commits In SourceTree

I should start off by saying how much I love TortoiseGit; it has always been my reliable source control medium, even though it's a bit of a nightmare to set up initially to work alongside Bitbucket. But due to a new development environment for an external project, I am kinda forced to use the preinstalled Git programs:

  • SourceTree
  • Git Bash

I am more inclined to use a GUI when interacting with my repositories and use the command line when necessary.

One thing that has been missing from SourceTree ever since it was released is the ability to export changes over multiple commits. I was hoping that after many years this feature would have been incorporated. Alas, no. After Googling around, I came across a StackOverflow post showing that the only way to export changes in SourceTree based on multiple commits is by using a combination of the git archive and git diff commands:

git archive --output=archived_changes.zip HEAD $(git diff --diff-filter=ACMRTUXB --name-only hash1 hash2)

This can be run directly in the Terminal window for a repository in SourceTree. The "hash1" and "hash2" values are the full 40-character commit IDs (abbreviated hashes also work).

The StackOverflow post helped me achieve what I needed, but as a learning exercise, I want to take things a step further and understand what the archive command is actually doing. So let's dissect the command into manageable chunks.

Part 1

git archive --output=archived_changes.zip HEAD

On its own, this would archive the whole repository at HEAD into a zip file. The next parts narrow this down to just the files that changed between the commits we need.

Part 2

git diff --diff-filter=ACMRTUXB

The git diff command shows changes between commits. The filter option gives us more flexibility to select files that are:

  • A Added
  • C Copied
  • D Deleted
  • M Modified
  • R Renamed
  • T have their type (mode) changed
  • U Unmerged
  • X Unknown
  • B have had their pairing Broken

Part 3

--name-only hash1 hash2

The second part of the git diff command uses the "--name-only" option, which outputs just the names of the files that changed between the two hash values entered.

Part 4

The git diff command is wrapped in "$(...)" (command substitution) so that its output - the list of changed file names - is passed as parameters to the git archive command.
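Putting it all together with a pair of hypothetical abbreviated hashes, the resulting archived_changes.zip contains the HEAD version of only those files that changed between the two commits:

git archive --output=archived_changes.zip HEAD $(git diff --diff-filter=ACMRTUXB --name-only abc1234 def5678)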

Switch Branches In TortoiseGit

My day-to-day version control system is Bitbucket. I never got on with their own Git GUI offering - SourceTree. I always found TortoiseGit much more intuitive and a more flexible way to interact with my git repository. If anyone can change my opinion on this, I am all ears!

I work with large projects that are around a couple of hundred megabytes in size, and if I were to clone the same project across different branches, it would use up quite a bit of hard disk space. I like to quickly switch to my master branch after carrying out a merge for testing before carrying out a release.

Luckily TortoiseGit makes switching branches a cinch in just a few clicks:

  • Right-click in your repository
  • Go to TortoiseGit context menu
  • Click Switch/Checkout
  • Select the branch you wish to switch to and select "Overwrite working tree changes (force)"

TortoiseGit Switch Branches

Selecting the "Overwrite working tree changes (force)" tick box is important to ensure all files in your working directory are overwritten with the files directly from your branch. We do not want remnants of files from the branch we had previously switched from kicking around.

Powershell Script To Clear Old IIS Logs

If you have many sites running on your installation of Windows Server, you will soon find that there will be an accumulation of logs generated by IIS. Through my naivety, I presumed there was a default setting in IIS that would only retain logs for a specific period of time. It was only over the last few weeks that I noticed the hard disk space slowly getting smaller and smaller.

Out of sheer embarrassment, I won't divulge how much space the logs had taken up. All I can say is it was quite a substantial amount. :-)

After some Googling, I came across a PowerShell script (which can be found here) that solved all my problems. The script targets your IIS logs folder and recursively looks for any file with a ".log" extension to delete. Unfortunately, the script did not run without some minor modifications to the original source, due to changes in PowerShell versions since the post was written 3 years ago.

# Location of the IIS logs, how many days of logs to keep and where to
# record what was cleaned up.
$logPath = "C:\inetpub\logs\LogFiles"
$maxDaystoKeep = -5 # Negative, as it is passed to AddDays() to calculate the cut-off date.
$cleanupRecordPath = "C:\Log_Cleanup.log"

# Recursively find all .log files last written before the cut-off date.
$itemsToDelete = dir $logPath -Recurse -File *.log | Where LastWriteTime -lt ((Get-Date).AddDays($maxDaystoKeep))

If ($itemsToDelete.Count -gt 0)
{
    ForEach ($item in $itemsToDelete)
    {
        # Record each deletion in the cleanup log before removing the file.
        "$($item.FullName) is older than $((Get-Date).AddDays($maxDaystoKeep)) and will be deleted." | Add-Content $cleanupRecordPath
        Remove-Item $item.FullName -Verbose
    }
}
Else
{
    "No items to be deleted today $($(Get-Date).DateTime)." | Add-Content $cleanupRecordPath
}

Write-Output "Cleanup of log files older than $((Get-Date).AddDays($maxDaystoKeep)) completed!"

Start-Sleep -Seconds 10

If you're ever so inclined, hook this script up to a Scheduled Task to run on a daily basis to keep your log files in order.
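As a rough example of scheduling it (assuming the script is saved as C:\Scripts\Clear-IISLogs.ps1 - adjust the path and time to taste), run the following from an elevated command prompt:

schtasks /Create /TN "Clear IIS Logs" /SC DAILY /ST 03:00 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Clear-IISLogs.ps1"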

Evernote Has Made Me An Extreme Data Hoarder

Ok. So for those of you who have not heard of Evernote (and who hasn't!?), it's an online app/service that allows you to record voice, text and handwritten notes that synchronise across multiple devices and platforms.

Ever since I had my first smartphone, I've always relied on Evernote to record my daily thoughts and reminders. There are numerous note-taking apps on the market, but (for me) none seem to cut the mustard and I always end up coming back.

Evernote not only has the functionality, but it also has the infrastructure to make it more than just a "note taking" platform. So much so that I'm hoarding major amounts of everyday things. Evernote is starting to act as a repository for things I don't want to let go of.

With the help of IFTTT, I have created numerous recipes that aggregate data from my social platforms, such as Instagram and Twitter, and import RSS feeds from websites that interest me. Evernote is now my one-stop shop for getting everything I need on a daily basis instead of logging into different platforms individually.

If there is something I happen to like, I just Evernote it. Even if I won't ever need it. Typical sign of a hoarder! But I'm an organised data hoarder, utilising clearly named notebook stacks. Strangely enough, the more notes you add, the more useful Evernote becomes, and this may be the reason why I am hoarding so many things. It's more than a "note taker"!

One feature I didn't expect to be so useful was the ability to take pictures of printed or handwritten documents. I can take quick snapshots and go completely paperless. On top of that, Evernote makes everything searchable. It's even clever enough to search through my terribly written notes. I only found out how truly powerful this feature was when I was going through the motions of purchasing my first property. At that time of my life, I was in constant note/documentation mode, and Evernote helped me organise my thoughts and reminders and record all email correspondence neatly.

What I've done in the past with other note-taking apps is delete old notes or files just to be sure I could find what I required quickly and easily, mainly because sifting through large volumes of data was a headache! Nowadays, I don't delete anything in Evernote. I can keep a record of things I've previously done and refer back to them later without any worries.

It's safe to say my addiction to Evernote will only increase as I find more uses for it. But that's not a bad thing...right?

Update - 12/12/2014

I came across some posts from others with the same issue, so it's nice to know it's not only me with this problem:

The Easy Way To Run a PHP Site In A Windows Environment

Even though my programming weapon of choice is .NET C#, there are times (unfortunate times!) where I need to dabble in a bit of PHP. Being a .NET developer means I do not have the setup to run PHP-based sites, such as Apache and MySQL.

In the past, I tried to configure an Apache server, but I could never get it running 100% - possibly because I didn't have the patience, or couldn't justify the additional setup time when I only work on a PHP site once in a blue moon...

Last year, I came across a program called EasyPHP that allowed me to install a local instance of Apache and MySQL all together in just one installation. It made it really easy to get up and running without all the setup and configuration hassle.

Once installed, you can create numerous websites and MySQL instances in a version of your own choosing. Wicked! I've never been so excited about PHP in my life!

I have only scratched the surface of the features EasyPHP provides, and whenever I have needed to use it, there have always been great improvements. Take a look at their site for more information: http://www.easyphp.org.

So if you're a Windows user who needs to carry out PHP odds and ends, I can't recommend EasyPHP enough.

Windows 2008 Task Scheduler Result Codes

I’ve been working on a PowerShell script that needed to run automatically every 5 minutes. As you probably guessed, Windows Task Scheduler is the way to go.

Prior to assigning any scripts or programs to a scheduled task, I always run them manually first to ensure all issues are rectified. We all know that if there is an issue whilst running within Task Scheduler, Windows likes to "help" us by showing some ambiguous error/success codes.

Luckily, MSDN provides a comprehensive list of these codes that can be found here: http://msdn.microsoft.com/en-us/library/aa383604

But the more common codes are listed below:

0 or 0x0: The operation completed successfully.
1 or 0x1: Incorrect function called or unknown function called.
2 or 0x2: File not found.
10 or 0xa: The environment is incorrect.
0x41300: Task is ready to run at its next scheduled time.
0x41301: Task is currently running.
0x41302: Task is disabled.
0x41303: Task has not yet run.
0x41304: There are no more runs scheduled for this task.
0x41306: Task is terminated.
0x8004131F: An instance of this task is already running.
0x800704DD: The service is not available (is 'Run only when user is logged on' checked?)
0xC000013A: The application terminated as a result of a CTRL+C.
0xC06D007E: Unknown software exception.