Posts written in August 2018

Reducing The Number of 'Crawled - Currently not indexed' Pages

Every few weeks, I check over the health of my site through Google Search Console (aka Webmaster Tools) and Analytics to see how Google is indexing my site and look into potential issues that could affect the click-through rate.

Over the years the content of my site has grown steadily and, as it stands, it consists of 250 published blog posts. When you take into consideration the other pages Google indexes - filter URLs based on grouping posts by tag or category - the number of links my site consists of increases considerably. It's at the discretion of Google's search algorithm whether it includes these links for indexing.

Last month, I decided to scrutinise the Search Console Index Coverage report in great detail just to see if there were any improvements I could make to alleviate some minor issues. What I wasn't expecting to see was the large volume of links marked as "Crawled - Currently not indexed".

Crawled Currently Not Indexed - 225 Pages

Wow! 225 affected pages! What does "Crawled - Currently not indexed" mean? According to Google:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

Pretty self-explanatory, but not much guidance on how to lessen the number of links that aren't indexed. From my experience, the best place to start is to look at the list of links that are being excluded and to form a judgement based on the page content of those links. Unfortunately, there isn't an exact science. It's a process of trial and error.

Let's take a look at the links from my own 225 excluded pages:

Crawled Currently Not Indexed - Non Indexed Links

On first look, I could see that the majority of the URLs consisted of links where users can filter posts by either category or tag. When inspecting these pages, I could see nothing content-wise that gave a conclusive reason for index exclusion. However, what I did notice is that these links were automatically found by Google when the site was spidered. The sitemap I submitted in the Search Console only lists blog posts and content pages.

This led me to believe a possible solution would be to create a separate sitemap that consisted purely of links for these categories and tags. I called it metasitemap.xml. Whenever I added a post, the sitemap's "lastmod" date would get updated, just like the pages listed in the default sitemap.
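For illustration, a minimal sitemap along these lines might look something like the snippet below. The URLs and dates are placeholders, not taken from the actual site:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/category/development</loc>
    <lastmod>2018-07-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/tag/javascript</loc>
    <lastmod>2018-07-15</lastmod>
  </url>
</urlset>
```

The "lastmod" value is what gets bumped whenever a new post is added, signalling to Google that the filter pages have fresh content.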

I created and submitted this new sitemap around mid-July and it wasn't until four days ago that the improvement was reported from within the Search Console. The number of non-indexed pages was reduced to 58. That's a 74% reduction!

Crawled Currently Not Indexed - 58 Pages


As I stated above, there isn't an exact science for reducing the number of non-indexed pages as every site is different. Supplementing my site with an additional sitemap just happened to alleviate my issue. But that is not to say copying this approach won't help you. Just ensure you look into the list of excluded links for any patterns.

I still have some work to do and the next thing on my list is to implement canonical tags in all my pages, since I have become aware I have duplicate content on different URLs - remnants from when I moved blogging platform.

If anyone has any other suggestions or solutions that worked for them, please leave a comment.

My Top JavaScript ES6 Features

Posted in: Client-side

I've been doing some personal research into improving my own JavaScript development. I decided to get more familiar with the new version of JavaScript - ES6. ES6 is filled to the brim with some really nice improvements that make JavaScript development much more concise and efficient. After having the opportunity to work on React and React Native projects, I had a chance to put my new-found ES6 knowledge to good use!

If I had to describe ES6 in a sentence:

JavaScript has gone on a diet and cut the fat. Write less, do more!

I have only scratched the surface of what ES6 has to offer and will continue to add more to the list as I learn. If you are familiar with server-side development, you might notice some similarities from a syntax perspective. That in itself shows how far ES6 has pushed the boundaries.

Arrow Functions

Arrow functions are beautiful and so easy on the eye when scrolling through vast amounts of code. You'll see that with arrow functions, you have the option to condense a function that consists of many lines all the way down to a single line.

The traditional way we are all familiar with:

// The "old school" way...
function addSomeNumbers(a, b) {
    return a + b;
}

console.log(addSomeNumbers(1, 2));
// Output: 3


// ES6.
const addSomeNumbers = (a, b) => {
    return a + b;
};

console.log(addSomeNumbers(1, 2));
// Output: 3

The traditional and ES6 approaches can be used in the same way to achieve our desired output. But we can condense our arrow function even further:

// Condensed ES6 arrow function.
const addSomeNumbers = (a, b) => a + b;

console.log(addSomeNumbers(1, 2));
// Output: 3
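Arrow functions have another benefit worth a mention: they don't rebind "this", which makes callbacks inside object methods much less painful. A quick sketch, using a made-up "team" object for illustration:

```javascript
const team = {
  prefix: "Captain",
  names: ["Kirk", "Picard"],
  titles() {
    // The arrow function passed to map() inherits "this" from titles(),
    // so "this.prefix" still refers to the team object.
    return this.names.map(name => `${this.prefix} ${name}`);
  }
};

console.log(team.titles()); // Output: ["Captain Kirk", "Captain Picard"]
```

With an old-style function expression in the map() callback, "this" would not point at the team object and we'd have needed a "var self = this;" workaround.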

Default Function Parameters

When developing using server-side languages such as C#, you have the ability to set default values on the parameters used by your functions. This is great, since you have more flexibility to use a function over a wider variety of circumstances without the worry of errors if you haven't satisfied all the function parameters.

Let's expand our "addSomeNumbers()" function from the last section to use default parameters.

// Condensed ES6 arrow function with default parameters.
const addSomeNumbers = (a = 0, b = 0) => a + b;

console.log(addSomeNumbers());
// Output: 0

This is an interesting (but somewhat useless) example where I am using the "addSomeNumbers()" function without passing any parameters. As a result the value 0 is returned and, even better, no error.

Destructuring

Destructuring sounds scary and complex. In simple terms, destructuring is the process of extracting values from an object or array into their own variables. Let's start off with a simple object and how we can output its values:

// Some info on my favourite Star Trek starship...
const starship = {
    registry: "NCC-1701-E",
    captain: "Jean Luc Picard",
    launch_date: "October 30, 2372",
    spec: {
        max_warp: 9.995,
        mass: "3,205,000 metric tons",
        length: "685.7 meters",
        width: "250.6 meters",
        height: "88.2 meters"
    }
};
We would normally output these values in the following way:

var registry = starship.registry; // Output: NCC-1701-E
var captain = starship.captain; // Output: Jean Luc Picard
var launchDate = starship.launch_date; // Output: October 30, 2372

This works well, but the process of returning those values is a little repetitive and spread over many lines. Let's get a bit more focused and go down the ES6 route:

const { registry, captain, launch_date } = starship;

console.log(registry); // Output: NCC-1701-E 

How amazing is that? We've managed to select a handful of these fields on one line to do something with. 
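Destructuring also works on nested objects. A sketch based on the starship example above, trimmed down to the fields we need:

```javascript
const starship = {
  registry: "NCC-1701-E",
  spec: {
    max_warp: 9.995,
    length: "685.7 meters"
  }
};

// Reach into the nested "spec" object and rename "max_warp"
// to "maxWarp" in a single statement.
const { spec: { max_warp: maxWarp } } = starship;

console.log(maxWarp); // Output: 9.995
```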

My final example in the use of destructuring will evolve around an array of items - in this case names of starship captains:

const captains = ["James T Kirk", "Jean Luc Picard", "Katherine Janeway", "Benjamin Sisko"];

Here is how I would return the first two captains in ES5 and ES6:

// ES5
var tos = captains[0];
var tng = captains[1];

// ES6
const [tos, tng] = captains;

You'll see similarities between our ES6 approach for getting values out of an array and what we did with an object. The only thing I need to look into is how to get the first and last captain from my array. Maybe that's for a later post.
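For what it's worth, one way to grab the first and last captains is to combine array destructuring with the ES6 rest syntax. A sketch, and certainly not the only approach:

```javascript
const captains = ["James T Kirk", "Jean Luc Picard", "Katherine Janeway", "Benjamin Sisko"];

// Destructure the first element directly and gather the remainder
// into a "rest" array, then index its final element.
const [first, ...rest] = captains;
const last = rest[rest.length - 1];

console.log(first); // Output: James T Kirk
console.log(last);  // Output: Benjamin Sisko
```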

Before I end the destructuring topic, I'll add this tweet - a visual feast on the basis of what destructuring is...

Spread Operator

The spread operator has to be my favourite ES6 feature, purely because in my JavaScript applications I do a lot of data manipulation. If you can get your head around destructuring and the spread operator, you'll find working with data a lot easier. The spread operator is "...". Yes, three dots - an ellipsis if you prefer. It allows you to copy the values of an array or object to be used as the basis of a new one.

In its basic form:

const para1 = ["to", "boldly", "go"];
const para2 = [...para1, "where", "no", "one"];
const para3 = [...para2, "has", "gone", "before"];

console.log(para1); // Output: ["to", "boldly", "go"]
console.log(para2); // Output: ["to", "boldly", "go", "where", "no", "one"]
console.log(para3); // Output: ["to", "boldly", "go", "where", "no", "one", "has", "gone", "before"]

As you can see from my example above, the spread operator used on "para1" and "para2" shallow-copies the array values into each new array. Gone are the days of having to use a for loop to get the values.