Blog

Blogging on programming and life in general.

Kentico MVC - Getting TreeNode At Controller Level

Posted in: Kentico

I wrote a post a couple of years ago regarding my observations on developing a Kentico site using MVC in version 9. Since Kentico 9, there has been a shift in how MVC applications are developed, one that has pretty much stood the test of time across the releases since. The biggest change is that the CMS itself is purely used to manage content, while the presentation layer is a separate MVC application connected through a web farm interface.

The one thing I missed when working on Kentico MVC sites is the ability to get values from the current document, as you could do in Kentico 8 MVC builds:

public ActionResult Article()
{
    TreeNode page = DocumentContext.CurrentDocument;

    // You might want to do something complex with the TreeNode here...

    return View(page);
}

In Kentico 11, the approach is to use wrapper classes generated by the Code Generator feature the Kentico platform offers from inside your Page Type. The Kentico documentation describes this approach quite aptly:

The page type code generators allow you to generate both properties and providers for your custom page types. The generated properties represent the page type fields. You can use the providers to work with published or latest versions of a specific page type.

You can then use these generated classes inside your controllers to retrieve page type data.
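As a quick illustration of that documented approach, a controller action for a hypothetical "Article" page type might look like the sketch below. Article and ArticleProvider stand in for whatever wrapper class and provider the generator produces for your own page types:

public ActionResult Article(int nodeId)
{
    // "Article" and "ArticleProvider" are hypothetical generated classes -
    // Kentico's code generator produces an equivalent pair per page type.
    Article article = ArticleProvider.GetArticle(nodeId, "en-US", SiteContext.CurrentSiteName);

    if (article == null)
        return HttpNotFound();

    return View(article);
}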

Custom Route Constraint

To take a similar approach to getting the current document just as in Kentico 8, we'll need to modify our MVC project and add a custom route constraint called CmsUrlConstraint. The route constraint looks the document up by its Node Alias Path and stores the resulting TreeNode object for the request.

CmsUrlConstraint

Every Page Type your MVC website consists of will need to be listed in this route constraint, which will in turn direct the incoming request to a controller action and store the Kentico page information within HttpContext if there is a match. To keep things simple, the route constraint contains the following pages:

  • Home
  • Blog
  • Blog Month
  • Blog Post
public static class RouteConstraintExtension
{
    /// <summary>
    /// Set a new route.
    /// </summary>
    /// <param name="values"></param>
    /// <param name="controller"></param>
    /// <param name="action"></param>
    /// <returns></returns>
    public static RouteValueDictionary SetRoute(this RouteValueDictionary values, string controller, string action)
    {
        values["controller"] = controller;
        values["action"] = action;

        return values;
    }
}

#region CMS Url Constraint

public class CmsUrlConstraint : IRouteConstraint
{
    /// <summary>
    /// Check for a CMS page for the current route.
    /// </summary>
    /// <param name="httpContext"></param>
    /// <param name="route"></param>
    /// <param name="parameterName"></param>
    /// <param name="values"></param>
    /// <param name="routeDirection"></param>
    /// <returns></returns>
    public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection)
    {
        string pageUrl = values[parameterName] == null ? "/Home" : $"/{values[parameterName]}";

        // Check if the page is being viewed in preview.
        bool previewEnabled = HttpContext.Current.Kentico().Preview().Enabled;

        // Ignore the site resource directory containing image, CSS and JS assets to avoid a call to the Kentico API.
        if (pageUrl.StartsWith("/resources"))
            return false;

        // Get the page from Kentico by alias path in its published or unpublished state.
        // PageLogic.GetByNodeAliasPath() carries out the document lookup by Node Alias Path.
        TreeNode page = PageLogic.GetByNodeAliasPath(pageUrl, previewEnabled);

        if (page != null)
        {
            // Store the current page in HttpContext.
            httpContext.Items["CmsPage"] = page;

            #region Map MVC Routes

            // Set the routing depending on the page type.
            if (page.ClassName == "CMS.Home")
                values.SetRoute("Home", "Index");

            if (page.ClassName == "CMS.Blog" || page.ClassName == "CMS.BlogMonth")
                values.SetRoute("Blog", "Index");

            if (page.ClassName == "CMS.BlogPost")
                values.SetRoute("Blog", "Post");

            #endregion

            if (values["controller"].ToString() != "Page")
                return true;
        }

        return false;
    }
}

#endregion

To ensure page data is returned from Kentico in an optimal way, I have a PageLogic.GetByNodeAliasPath() method that ensures cache dependencies are used when the page is not in preview mode.
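The PageLogic class itself isn't shown in this post, but a minimal sketch of GetByNodeAliasPath() might look like the following, assuming Kentico's "node|<site name>|<alias path>" cache dependency dummy key and an illustrative 60-minute cache duration:

using System.Linq;
using CMS.DocumentEngine;
using CMS.Helpers;
using CMS.SiteProvider;

public static class PageLogic
{
    public static TreeNode GetByNodeAliasPath(string nodeAliasPath, bool previewEnabled)
    {
        if (previewEnabled)
        {
            // Preview mode: always get the latest (possibly unpublished) version, uncached.
            return DocumentHelper.GetDocuments()
                                 .Path(nodeAliasPath)
                                 .OnSite(SiteContext.CurrentSiteName)
                                 .LatestVersion()
                                 .FirstOrDefault();
        }

        // Live mode: get the published version and cache it, invalidating the
        // cache entry whenever the document at this alias path changes.
        return CacheHelper.Cache(cs =>
        {
            cs.CacheDependency = CacheHelper.GetCacheDependency($"node|{SiteContext.CurrentSiteName}|{nodeAliasPath}");

            return DocumentHelper.GetDocuments()
                                 .Path(nodeAliasPath)
                                 .OnSite(SiteContext.CurrentSiteName)
                                 .Published()
                                 .FirstOrDefault();
        },
        new CacheSettings(60, "PageLogic.GetByNodeAliasPath", nodeAliasPath));
    }
}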

Apply Route Constraint To RouteConfig

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        ...

        // Maps routes to Kentico HTTP handlers.
        // Must be first, since some Kentico URLs may be matched by the default ASP.NET MVC routes,
        // which can result in pages being displayed without images.
        routes.Kentico().MapRoutes();

        // Custom MVC constraint validation to check for a CMS template, otherwise fallback to default MVC routing.
        routes.MapRoute(
            name: "CmsRoute",
            url: "{*url}",
            defaults: new { controller = "HttpErrors", action = "NotFound" },
            constraints: new { url = new CmsUrlConstraint() }
        );

        ...
    }
}

Usage In Controller

Now that we have created our route constraint and applied it to our RouteConfig, we can enjoy the fruits of our labour by getting the document TreeNode back from HttpContext. The code sample below demonstrates getting some values for our Home controller.

public class HomeController : Controller
{
    public ActionResult Index()
    {
        TreeNode currentPage = HttpContext.Items["CmsPage"] as TreeNode;

        if (currentPage != null)
        {
            HomeViewModel homeModel = new HomeViewModel
            {
                Title = currentPage.GetStringValue("Title", string.Empty),
                Introduction = currentPage.GetStringValue("Introduction", string.Empty)
            };

            return View(homeModel);
        }

        return HttpNotFound();
    }
}

Conclusion

There is no right or wrong approach when it comes to getting data out of your page types. Depending on the scale of the Kentico site I am working on, I interchange between the approach detailed in this post and Kentico's documented approach.

ASP.NET Core - Get Page Title By URL

Posted in: ASP.NET

To make it easy for a client to add related links to pages like a Blog Post or Article, I like implementing some form of automation so there is one less thing to content manage. For a Kentico Cloud project, I took this very approach. I created a UrlHelper class that carries out the following:

  • Take in an absolute URL.
  • Read the markup of the page.
  • Select the title tag using a regular expression.
  • Remove the site name from the title text.
using Microsoft.Extensions.Caching.Memory;
using MyProject.Models.Site;
using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Text.RegularExpressions;

namespace MyProject.Helpers
{
    public class UrlHelper
    {
        private readonly IMemoryCache _cache;

        public UrlHelper(IMemoryCache memCache)
        {
            _cache = memCache;
        }

        /// <summary>
        /// Returns the title and URL of the link directly from a page.
        /// </summary>
        /// <param name="url"></param>
        /// <returns></returns>
        public PageLink GetPageTitleFromUrl(string url)
        {
            if (string.IsNullOrEmpty(url))
                return null;

            // Return the cached entry if this URL has been looked up already.
            if (_cache.TryGetValue(url, out PageLink page))
                return page;

            using (WebClient client = new WebClient())
            using (Stream stream = client.OpenRead(url))
            using (StreamReader streamReader = new StreamReader(stream, System.Text.Encoding.UTF8))
            {
                // Get contents of the page.
                string pageHtml = streamReader.ReadToEnd();

                if (!string.IsNullOrEmpty(pageHtml))
                {
                    // Get the title.
                    string title = Regex.Match(pageHtml, @"\<title\b[^>]*\>\s*(?<Title>[\s\S]*?)\</title\>", RegexOptions.IgnoreCase).Groups["Title"].Value;

                    if (!string.IsNullOrEmpty(title))
                    {
                        // Strip the site name suffix, e.g. "Page Name | Site Name".
                        if (title.Contains("|"))
                            title = title.Split('|').First();
                        else if (title.Contains(":"))
                            title = title.Split(':').First();

                        PageLink pageLink = new PageLink
                        {
                            PageName = title.Trim(),
                            PageUrl = url
                        };

                        _cache.Set(url, pageLink, DateTimeOffset.Now.AddHours(12));

                        page = pageLink;
                    }
                }
            }

            return page;
        }
    }
}

The method returns a PageLink object:

namespace MyProject.Models.Site
{
    public class PageLink
    {
        public string PageName { get; set; }
        public string PageUrl { get; set; }
    }
}

From an efficiency standpoint, I cache the result for 12 hours, as making an HTTP request and reading the markup of a page on every call can be quite expensive if there is a lot of HTML.
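For completeness, here is a hypothetical controller usage, assuming IMemoryCache has been registered in Startup.ConfigureServices() via services.AddMemoryCache() (the controller name and action are purely illustrative):

public class RelatedLinksController : Controller
{
    private readonly IMemoryCache _cache;

    public RelatedLinksController(IMemoryCache cache)
    {
        _cache = cache;
    }

    public IActionResult Get(string url)
    {
        // Repeat calls for the same URL within 12 hours are served from cache.
        UrlHelper urlHelper = new UrlHelper(_cache);
        PageLink link = urlHelper.GetPageTitleFromUrl(url);

        return Json(link);
    }
}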

Cache Busting Kentico

Posted in: Guest Writing

When developing a website that is quick to load on all devices, caching from both a data and asset perspective is very important. Luckily for us, Kentico provides a comprehensive approach to caching data in order to minimise round-trips to the database. But what about asset caching, such as images, CSS and JavaScript files?

A couple of days ago, I wrote an article on the Syndicut Medium publication on how I have added cache busting functionality to our Kentico CMS builds. I am definitely interested to hear what approaches other developers from the Kentico network take to cache bust their own website assets.

Take a read here: https://medium.com/syndicutstudio/cache-busting-kentico-cf89496ffda0.

ASP.NET Core - Render Partial View To String Outside Controller Context

Posted in: ASP.NET

When building MVC websites, I cannot get through a build without using a method to convert a partial view to a string. I have blogged about this in the past and find the approach so useful, especially when carrying out heavy AJAX processes. It makes the whole process of maintaining and outputting markup dynamically a walk in the park.

I've been dealing with many more ASP.NET Core builds, and migrating over the RenderPartialViewToString() extension I developed previously was not possible. Instead, I started using the approach detailed in the following StackOverflow post: Return View as String in .NET Core. Even though the approach was perfectly acceptable and did the job nicely, I noticed I had to make one key adjustment - allowing for views outside the controller context.

The method proposed in the StackOverflow post uses ViewEngine.FindView(), which, from what I gather, only returns a view within the current controller context. I added a check that will use ViewEngine.GetView() if the path of the view ends with ".cshtml", which is normally the case when you refer to a view from a different controller using a relative path.

using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.ViewEngines;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
using System;
using System.IO;
using System.Threading.Tasks;

public static class ControllerExtensions
{
    /// <summary>
    /// Render a partial view to string.
    /// </summary>
    /// <typeparam name="TModel"></typeparam>
    /// <param name="controller"></param>
    /// <param name="viewNamePath"></param>
    /// <param name="model"></param>
    /// <returns></returns>
    public static async Task<string> RenderViewToStringAsync<TModel>(this Controller controller, string viewNamePath, TModel model)
    {
        if (string.IsNullOrEmpty(viewNamePath))
            viewNamePath = controller.ControllerContext.ActionDescriptor.ActionName;

        controller.ViewData.Model = model;

        using (StringWriter writer = new StringWriter())
        {
            try
            {
                IViewEngine viewEngine = controller.HttpContext.RequestServices.GetService(typeof(ICompositeViewEngine)) as ICompositeViewEngine;

                ViewEngineResult viewResult = null;

                if (viewNamePath.EndsWith(".cshtml"))
                    viewResult = viewEngine.GetView(viewNamePath, viewNamePath, false);
                else
                    viewResult = viewEngine.FindView(controller.ControllerContext, viewNamePath, false);

                if (!viewResult.Success)
                    return $"A view with the name '{viewNamePath}' could not be found";

                ViewContext viewContext = new ViewContext(
                    controller.ControllerContext,
                    viewResult.View,
                    controller.ViewData,
                    controller.TempData,
                    writer,
                    new HtmlHelperOptions()
                );

                await viewResult.View.RenderAsync(viewContext);

                return writer.GetStringBuilder().ToString();
            }
            catch (Exception exc)
            {
                return $"Failed - {exc.Message}";
            }
        }
    }
}

Quick Example

As you can see from my quick example below, the Home controller uses the RenderViewToStringAsync() extension when calling:

  • A view from another controller, where a relative path to the view is used.
  • A view from within the current controller, where the name of the view alone can be used.
public class HomeController : Controller
{
    public async Task<IActionResult> Index()
    {
        NewsListItem newsItem = GetSingleNewsItem(); // Get a single news item.

        string viewFromAnotherController = await this.RenderViewToStringAsync("~/Views/News/_NewsList.cshtml", newsItem);
        string viewFromCurrentController = await this.RenderViewToStringAsync("_NewsListHome", newsItem);

        return View();
    }
}


ASP.NET Core - HTTP Error 502.5 Process Failure

Posted in: ASP.NET

Whenever I work on an ASP.NET Core website, I always seem to get the most unhelpful error when deploying to production:

HTTP Error 502.5 - Process Failure

I have no problem running the ASP.NET Core site whilst developing from within a local environment.

From past experience, the HTTP 502.5 error generally happens for the following reasons:

  1. The ASP.NET Core framework is not installed or your site is running the incorrect version.
  2. Website project incorrectly published.
  3. Potential configuration issue at code level.

Generally when you successfully publish a deployable version of your site, you'd expect it to just work. To get around the deployment woes, the solution is to modify your .csproj file by adding the following setting:

<PropertyGroup>
   <PublishWithAspNetCoreTargetManifest>false</PublishWithAspNetCoreTargetManifest>
</PropertyGroup>

Once this setting has been added, you'll notice that when your site is re-published, a whole bunch of new DLL files are present, forming part of the dependencies the site requires. It's strange that a normal publish does not do this already, and what's even stranger is that I have a different .NET Core site running without having taken this approach.

For any new .NET Core sites I work on, I will be using this approach going forward.


Reducing The Number of 'Crawled - Currently not indexed' Pages

Every few weeks, I check over the health of my site through Google Search Console (aka Webmaster Tools) and Analytics to see how Google is indexing my site and look into potential issues that could affect the click-through rate.

Over the years, the content of my site has grown steadily and it now consists of 250 published blog posts. When you take into consideration the other pages Google indexes - filter URLs based on grouping posts by tag or category - the number of links on my site increases considerably. It's at the discretion of Google's search algorithm whether it includes these links for indexing.

Last month, I decided to scrutinise the Search Console Index Coverage report in great detail just to see if there are any improvements I can make to alleviate some minor issues. What I wasn't expecting to see is the large volume of links marked as "Crawled - Currently not indexed".

Crawled Currently Not Indexed - 225 Pages

Wow! 225 affected pages! What does "Crawled - Currently not indexed" mean? According to Google:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

Pretty self-explanatory, but not much guidance on how to lessen the number of links that aren't indexed. From my experience, the best place to start is to look at the list of links being excluded and form a judgement based on the page content of those links. Unfortunately, there isn't an exact science to it. It's a process of trial and error.

Let's take a look at the links from my own 225 excluded pages:

Crawled Currently Not Indexed - Non Indexed Links

On initial inspection, I could see that the majority of the URLs consisted of links where users can filter posts by either category or tag. Content-wise, I could see no conclusive reason for index exclusion when inspecting these pages. However, what I did notice is that these links were automatically found by Google when the site gets spidered. The sitemap I submitted in the Search Console only lists blog posts and content pages.

This led me to believe a possible solution would be to create a separate sitemap consisting purely of links to these categories and tags. I called it metasitemap.xml. Whenever I add a post, the sitemap's "lastmod" date gets updated, just like the pages listed in the default sitemap.
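For illustration, such a sitemap just follows the standard sitemap protocol with the category and tag URLs listed. The URLs and dates below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/category/kentico</loc>
    <lastmod>2018-07-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/tag/mvc</loc>
    <lastmod>2018-07-15</lastmod>
  </url>
</urlset>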

I created and submitted this new sitemap around mid-July, and it wasn't until four days ago that the improvement was reported within the Search Console. The number of non-indexed pages was reduced to 58. That's a 74% reduction!

Crawled Currently Not Indexed - 58 Pages

Conclusion

As I stated above, there isn't an exact science for reducing the number of non-indexed pages, as every site is different. Supplementing my site with an additional sitemap just happened to alleviate my issue. That's not to say copying this approach won't help you. Just ensure you look into the list of excluded links for any patterns.

I still have some work to do, and the next thing on my list is to implement canonical tags on all my pages, since I have become aware I have duplicate content on different URLs - remnants from when I moved blogging platform.
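For reference, a canonical tag is a single line in the page head that points Google at the preferred URL for the content (placeholder URL below):

<link rel="canonical" href="https://www.example.com/blog/my-post" />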

If anyone has any other suggestions or solutions that worked for them, please leave a comment.

My Top JavaScript ES6 Features

Posted in: Client-side

I've been doing some personal research into improving my own JavaScript development. I decided to get more familiar with the new version of JavaScript - ES6. ES6 is filled to the brim with some really nice improvements that make JavaScript development much more concise and efficient. After having the opportunity to work on React and React Native projects, I had a chance to put my newfound ES6 knowledge to good use!

If I had to describe ES6 in a sentence:

JavaScript has gone on a diet and cut the fat. Write less, do more!

I have only scratched the surface of what ES6 has to offer and will continue to add more to this list as I learn. If you are familiar with server-side development, you might notice some similarities from a syntax perspective. That in itself shows how far ES6 has pushed the boundaries.

Arrow Functions

Arrow functions are beautiful and so easy on the eye when scrolling through vast amounts of code. With arrow functions, you have the option to condense a function that consists of many lines all the way down to a single line.

The traditional way we are all familiar with:

// The "old school" way..
function addSomeNumbers(a, b) {
    return a + b;
}

console.log(addSomeNumbers(1, 2));
// Output: 3

ES6:

// ES6.
const addSomeNumbers = (a, b) => {
    return a + b;
}

console.log(addSomeNumbers(1, 2));
// Output: 3

The traditional and ES6 ways can still be used to achieve our desired output. But we can condense our arrow function further:

// Condensed ES6 arrow function.
const addSomeNumbers = (a, b) => a + b;

console.log(addSomeNumbers(1, 2));
// Output: 3

Default Function Parameters

When developing using server-side languages such as C#, you have the ability to set default values on the parameters of your functions. This is great, since you have more flexibility to use a function over a wider variety of circumstances, without the worry of errors if you haven't satisfied all of its parameters.

Let's expand our addSomeNumbers() function from the last section to use default parameters.

// Condensed ES6 arrow function with default parameters.
const addSomeNumbers = (a=0, b=0) => a + b;

console.log(addSomeNumbers());
// Output: 0

This is an interesting (but somewhat useless) example where I am using the addSomeNumbers() function without passing any parameters. As a result, the value 0 is returned and, even better, no error.

Destructuring

Destructuring sounds scary and complex. In simple terms, destructuring is the process of extracting values from an object or array into standalone variables. Let's start with a simple object and how we can output its values:

// Some info on my favourite Star Trek starship...
const starship = {
    registry: "NCC-1701-E",
    captain: "Jean Luc Picard",
    launch_date: "October 30, 2372",
    spec: {
        max_warp: 9.995,
        mass: "3,205,000 metric tons",
        length: "685.7 meters",
        width: "250.6 meters",
        height: "88.2 meters"
    }
};

We would normally output these values in the following way:

var registry = starship.registry; // Output: NCC-1701-E
var captain = starship.captain; // Output: Jean Luc Picard
var launchDate = starship.launch_date; // Output: October 30, 2372

This works well, but the process of returning those values is a little repetitive and spread over many lines. Let's get a bit more focused and go down the ES6 route:

const { registry, captain, launch_date } = starship;

console.log(registry); // Output: NCC-1701-E 

How amazing is that? We've managed to select a handful of these fields on one line to do something with. 

My final example in the use of destructuring will revolve around an array of items - in this case, names of starship captains:

const captains = ["James T Kirk", "Jean Luc Picard", "Katherine Janeway", "Benjamin Sisko"]

Here is how I would return the first two captains in ES5 and ES6:

// ES5
var tos = captains[0];
var tng = captains[1];

// ES6
const [tos, tng] = captains;

You'll see similarities between the ES6 approach for getting values out of an array and the one we used for the object. The only thing I need to look into is how to get the first and last captain from my array. Maybe that's for a later post.

Before I end the destructuring topic, I'll add this tweet - a visual feast on the basis of what destructuring is...

Spread Operator

The spread operator has to be my favourite ES6 feature, purely because in my JavaScript applications I do a lot of data manipulation. If you can get your head around destructuring and the spread operator, you'll find working with data a lot easier. The spread operator is "..." - yes, three dots, an ellipsis if you prefer. It allows you to copy the values of an array or object to be used as the basis of a new one.

In its basic form:

const para1 = ["to", "boldly", "go"];
const para2 = [...para1, "where", "no", "one"];
const para3 = [...para2, "has", "gone", "before"];

console.log(para1); // Output: ["to", "boldly", "go"]
console.log(para2); // Output: ["to", "boldly", "go", "where", "no", "one"]
console.log(para3); // Output: ["to", "boldly", "go", "where", "no", "one", "has", "gone", "before"]

As you can see from my example above, the spread operator used on "para1" and "para2" creates a shallow copy of the array values in our new arrays. Gone are the days of having to use a for loop to copy the values.
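The examples above use arrays, but the same idea applies to objects. Object spread technically arrived a little later than ES6 (in ES2018), but it is commonly used alongside these features:

// Copy an object's values and override the fields that differ.
const baseStarship = { registry: "NCC-1701", captain: "James T Kirk" };
const nextGenStarship = { ...baseStarship, registry: "NCC-1701-D", captain: "Jean Luc Picard" };

console.log(nextGenStarship);
// Output: { registry: "NCC-1701-D", captain: "Jean Luc Picard" }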

ASP.NET Core MVC Numbered Pagination

Posted in: ASP.NET

This is a relatively simple pagination that will only be shown if there are enough items of data to paginate through. The user will have the ability to paginate by clicking on the "Previous" and "Next" links, as well as on the individual page numbers within the pagination.

I created a PaginationHelper.CreatePagination() method that carries out all the paging calculations and outputs the pagination as an unordered list. The method requires the following parameters:

  • currentPage - the current page number being viewed.
  • totalNumberOfRecords - the total count of records in your dataset, used to determine how many pages should be displayed.
  • pageRequest - the current request, passed in via "HttpContext.Request", used to get the page URL.
  • noOfPagesLinks - the number of page numbers that should be shown. For example "1, 2, 3, 4".
  • pageSize - the number of items to be shown per page.
using Microsoft.AspNetCore.Http;
using System;
using System.Text;

namespace MyProject.Helpers
{
    public static class PaginationHelper
    {
        /// <summary>
        /// Renders pagination used in listing pages.
        /// </summary>
        /// <param name="currentPage"></param>
        /// <param name="totalNumberOfRecords"></param>
        /// <param name="pageRequest">Current page request used to get the URL path of the page.</param>
        /// <param name="noOfPagesLinks">Number of pagination numbers to show.</param>
        /// <param name="pageSize"></param>
        /// <returns></returns>
        public static string CreatePagination(int currentPage, int totalNumberOfRecords, HttpRequest pageRequest, int noOfPagesLinks = 5, int pageSize = 10)
        {
            StringBuilder paginationHtml = new StringBuilder();

            // Only render the pagination markup if the total number of records is more than our page size.
            if (totalNumberOfRecords > pageSize)
            {
                #region Pagination Calculations

                int amountOfPages = (int)(Math.Ceiling(totalNumberOfRecords / Convert.ToDecimal(pageSize)));

                int startPage = currentPage;

                if (startPage == 1 || startPage == 2 || amountOfPages < noOfPagesLinks)
                    startPage = 1;
                else
                    startPage -= 2;

                int maxPage = startPage + noOfPagesLinks;

                if (amountOfPages < maxPage)
                    maxPage = amountOfPages + 1;

                if (maxPage - startPage != noOfPagesLinks && maxPage > noOfPagesLinks)
                    startPage = maxPage - noOfPagesLinks;

                int previousPage = currentPage - 1;
                if (previousPage < 1)
                    previousPage = 1;

                int nextPage = currentPage + 1;

                #endregion

                #region Get Current Path

                // Get current path.
                string path = pageRequest.Path.ToString();

                int pos = path.LastIndexOf("/") + 1;

                // Get last route value.
                string lastRouteValue = path.Substring(pos, path.Length - pos).ToLower();

                // Removes page number from end of path if path contains a page number.
                if (lastRouteValue.StartsWith("page"))
                    path = path.Substring(0, path.LastIndexOf('/'));

                #endregion

                paginationHtml.Append("<ul>");

                if (currentPage > 1)
                    paginationHtml.Append($"<li><a href=\"{path}/Page{previousPage}\"><span>Previous page</span></a></li>");

                for (int i = startPage; i < maxPage; i++)
                {
                    // If the current page equals one of the pagination numbers, set active state.
                    if (i == currentPage)
                        paginationHtml.Append($"<li><a href=\"{path}/Page{i}\" class=\"is-active\"><span>{i}</span></a></li>");
                    else
                        paginationHtml.Append($"<li><a href=\"{path}/Page{i}\"><span>{i}</span></a></li>");
                }

                if ((startPage + noOfPagesLinks < amountOfPages && maxPage > noOfPagesLinks) || currentPage < amountOfPages)
                    paginationHtml.Append($"<li><a href=\"{path}/Page{nextPage}\"><span>Next page</span></a></li>");

                paginationHtml.Append("</ul>");

                return paginationHtml.ToString();
            }
            else
            {
                return string.Empty;
            }
        }
    }
}

The PaginationHelper.CreatePagination() method can then be used inside a controller where you would like to list your data as well as render the pagination. A simple example of this would be as follows:

/// <summary>
/// List all news articles.
/// </summary>
/// <param name="page"></param> 
/// <param name="pageSize"></param>
/// <returns></returns>
[Route("/Articles")]
[Route("/Articles/Page{page}")]
public ActionResult Index(int page = 1, int pageSize = 10)
{
    // Number of articles to skip.
    int skip = (page - 1) * pageSize;

    // Get the current page of articles from my datasource.
    List<NewsArticle> articles = MyData.GetArticles().Skip(skip).Take(pageSize).ToList();

    // Render pagination. Note the total record count comes from the full dataset,
    // not from the single page of articles being displayed.
    ViewBag.PaginationHtml = PaginationHelper.CreatePagination(page, MyData.GetArticles().Count(), HttpContext.Request, pageSize: pageSize);

    return View(articles);
}

The pagination is output to a ViewBag property that can be called from within your view. I could have gone down a different route and developed a partial view along with an appropriate model, but for my use the method approach offers the most flexibility, as I have the option to use it from within either a controller or a view.
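For completeness, outputting the pagination in a Razor view is then just a case of rendering the raw HTML stored in the ViewBag. A hypothetical view for the articles listing, assuming NewsArticle exposes a Title property:

@model List<NewsArticle>

@foreach (NewsArticle article in Model)
{
    <h2>@article.Title</h2>
}

@Html.Raw(ViewBag.PaginationHtml)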

Recovering A Hard Disk Full of Memories - Part 1

Posted in: Surinder's Log

Memories are what give life purpose. They allow us to go back to the past, into a time that shall forever be stateless. Most importantly memories are experiences that mould us into the person we are today.

For some reason, when I think about the word memories, the first things that come to mind are pictures... photos to be exact. I only started thinking about how important photos are to me whilst having a conversation with my cousin Tajesh. Tajesh popped over this weekend gone by and, as always, had many fascinating stories to tell. One story in particular got my attention. He told me about an amazing trip he had in Australia many, many years ago and how he lost all the photos he had taken after recently damaging his hard drive. On hearing his predicament, I was profoundly moved and imagined how I'd feel if I were in his position.

Even though our brains are wired to remember events and experiences, memories seem to somehow fade away over time and we start forgetting the little details of images until they form a hazy recollection. We remember enough to transport us back to a time or a place, but the brain has a strange way of patching together what we once saw, as if from pieces of a larger puzzle. If your brain is anything like mine, where you can only selectively retrieve the one piece of the puzzle that is most meaningful, we're missing a vast array of information.

I decided I'd make an attempt to recover my cousin's lost photos. He handed over his Western Digital Caviar edition hard drive carefully enclosed in an old VHS box, trusting I'd have its best interests at heart and keep whatever memories may be locked away safe inside... A damaged hard drive is in some ways like our brain's selective recall. The data is stored somewhere, but we sometimes have problems accessing it.

I'm no hard disk recovery expert, and I am hoping some off-the-shelf software will help me get at least some photos back from his holiday. So what's the game plan?

I'll start by using a piece of software I blogged about back in 2011 - EaseUS. EaseUS provides a line of software ranging from backup to recovery. It helped me then and (fingers crossed) it'll help me now. I'll also need a 3.5 inch disk caddy to allow the hard drive to be connected via USB to start the recovery process.

As it stands, my cousin's Western Digital Caviar disk doesn't seem to have any visible damage and there are no noises when it runs. It just doesn't boot.

Stay tuned for future posts on how I get on.

To be continued...

My Time At Melia Bali Hotel

Posted in: Surinder's Log

Family. Family is what comes to mind when I think of my time at Melia Bali. I only happened to come to this conclusion as I checked out on the calm and (strangely) cool evening before having to depart back to the UK.

I don't generally write about my travels (or lack of them!). But my time at Melia energised me to write something, and as a result I scribbled away madly during my long flight back, trying to make sense of my erratic thoughts and feelings, just so I could write this post.

Melia Bali Beach

I find it ironic that I started writing this post about "family" when I happened to visit Bali with the people I deem most dear: mum, dad and sister. I feel that family forms the centrepiece of the services Melia provides.

If you happen to have the privilege of staying at the Melia Bali Hotel for long enough, you'd get a sense of familiarity with the people working there. To some extent, I hope I am perhaps familiar to them - the Indian guy with the ridiculously frizzy hair (a result of the climate ;-) ).

These people truly are the backbone of the hotel and make it what it is. Yes, Melia is a pretty place on the surface, but it's the people that make it truly shine when compared to the other hotels staggered along the beach shoreline. They are without a doubt amazing at what they do, from the lady who cooks me the most delicious fluffy omelette in the morning down to the lobby personnel who are willing to help with any query or concern.

From the moment I wake up and make my way to the breakfast hall to the moment I enter the lobby at the end of a long day gallivanting, I am greeted with many smiles. You can't help but be infected with a sense of positivity and happiness, something I don't think I've ever come across when holidaying elsewhere.

The lobby statues and wondrous ceiling mural make for a welcome sight at any time, setting the ambience and standard for the hotel. If you're lucky enough to be in the lobby during the evening, you'll be greeted by two classically trained Balinese dancers dressed as if from an era gone by. As they dance with delicate intricacy to the tune of the rindik, I am reminded of classical Indian dances - a lost art and cultural heritage slowly eroding with time - making for a visceral experience and something you can't help but appreciate.

Melia Bali - Balinese Dancers

Melia offers around five restaurants to cater for guests' varying palates, each with its own theme and cuisine. We found ourselves venturing out to nearby restaurants after a few nights, as we found the prices a little dear based on the portion sizes of the main meals. Even though the food was very tasty, my western belly expected something more sizeable. When taking into consideration the 21% combined tax and service charge on top of the prices on the menu - not so cheap. There are many fine eateries at the Bali Collection for consideration, just a 5-10 minute walk away.

When there are issues, it is family who are there to support you in times of need. We happened to experience some very loud noises from the room above us at an unsightly hour, spoiling the serenity we had become accustomed to. This went on for a couple of nights. We happened to make a remark about our problem in passing to one of the workers whilst feasting on the morning breakfast buffet, and within a short period of time this was communicated to the customer service representative, who apologised and swiftly organised a room change.

The rooms themselves are all very well maintained, clean and provide nice views from the balcony. Based on our room change, you can expect subtle differences in terms of what the rooms offer. For example, our first room just had a shower, but the second room had a shower/bath. My only quibble is the shower head position - not a deal breaker. There are generous bathroom amenities, consisting of a toothbrush, toothpaste, vanity kit, shampoo, conditioner, shower gel, shaving kit and body lotion - all fully restocked daily. The crazy thing is that you get a new toothbrush every day! As much as I like a new toothbrush, I sometimes have to question the environmental impact.

Melia Bali - View from Lobby

Even though our noisy neighbour was no fault of their own, Melia took us under their wing to ensure our holiday was perfect. We were even given a fruit platter for our troubles. What struck a chord with me before we departed is that the management seem to know everything about a guest's stay at a granular level. The hotel manager hoped we'd come back to visit even with the minor inconvenience we experienced.

If I do get another opportunity to visit, I will consider paying a little extra for "The Level" experience. I have to admit, I was quite envious of all the things I heard about this upgrade when talking to other guests on the beach. I felt like a pauper. I like the idea of having a little more privacy in terms of accommodation and your own space on the beach. More importantly, it's adults only. No noisy kids! :-)

Melia Bali - Entrance At Night

I look back at my time at Melia Bali with fond memories that will be permanently etched in my mind. For me, Melia complements Bali as the wondrous land of Bali complements Melia.