Blog

Blogging on programming and life in general.

  • Working in the web industry and having the opportunity to develop a wide variety of websites, I like to take a snapshot of a few pages for my portfolio (working on that!). But I generally run into issues when taking a screenshot of a very long webpage - in fact, I always have trouble grabbing a page that scrolls.

    Luckily, I found a really useful Firefox add-on called Fireshot, which makes it really easy to screenshot an entire page. Once you have taken a screenshot, you can carry out the following tasks within the comfort of your browser:

    • Upload to Facebook, Picasa or Flickr
    • Save to disk as PDF/PNG/GIF/JPEG/BMP
    • Send to clipboard
    • Print
    • E-Mail
    • Export

    I was expecting this tool to generate a screen grab really slowly, but even on long pages with a lot of content, images are generated quickly. Take a look at the screenshot I made of "http://www.theverge.com" here.

    Definitely try it out.

  • I’ve been working on a .NET library to retrieve all images from a user’s Twitpic account. I thought it would be quite a useful library to have, since some users (including me) have been requesting one on various websites and forums.

    I will note that this is NOT a fully functioning Twitpic library covering every API request listed on Twitpic’s developer site. Currently, the library only implements the call that returns information for a specified user (users/show), which is enough to create a nice picture gallery.

    My Twitpic .NET library will return the following information (a rough sketch of the corresponding model is shown after this list):

    • ID
    • Twitter ID
    • Location
    • Website
    • Biography
    • Avatar URL
    • Image Timestamp
    • Photo Count
    • Images
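
    Based on the list above, the TwitpicUser object returned by the library is presumably a simple container along these lines (a sketch only; the exact class and property names are assumptions, not the library's actual definitions):

    using System;
    using System.Collections.Generic;

    public class TwitpicImage
    {
        public string Url { get; set; }          //Direct link to the image
        public DateTime Timestamp { get; set; }  //When the image was posted
    }

    public class TwitpicUser
    {
        public int Id { get; set; }
        public long TwitterId { get; set; }
        public string Location { get; set; }
        public string Website { get; set; }
        public string Biography { get; set; }
        public string AvatarUrl { get; set; }
        public int PhotoCount { get; set; }
        public List<TwitpicImage> Images { get; set; }
    }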

    Code Example:

    private void PopulateGallery()
    {
        var hasMoreRecords = false;
    
        //Twitpic.Get(<username>, <page-number>)
        TwitpicUser tu = Twitpic.Get("sbhomra", 1);
    
        if (tu != null)
        {
            if (tu.PhotoCount > 20)
                hasMoreRecords = true;
    
            if (tu.Images != null && tu.Images.Count > 0)
            {
                //Bind Images to Repeater
                TwitPicImages.DataSource = tu.Images;
                TwitPicImages.DataBind();
            }
            else
            {
                TwitPicImages.Visible = false;
            }
        }
        else
        {
            TwitPicImages.Visible = false;
        }
    }
    

    Using the code above as a basis, I managed to create a simple photo gallery of my own: /Photos.aspx

    If you experience any errors or issues, please leave a comment.

    Download: iSurinder.TwitPic.zip (5.15 kb)

  • HTTP Request Script

    In one of my website builds, I needed to keep a couple of thousand records from a database permanently in the .NET cache. Even though I set the cache to never expire, it still gets cleared whenever the application pool recycles (currently set to every 24 hours). As you can expect, if a user happens to visit the site soon after the cache has been cleared, they experience excessive page load times.
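
    For context, the records are inserted into the cache with no expiration at all, roughly along these lines (a minimal sketch; the cache key and the DataTable type are assumptions, not the actual site code):

    using System.Data;
    using System.Web;
    using System.Web.Caching;

    public static class RecordCache
    {
        public static void Prime(DataTable records)
        {
            //No absolute or sliding expiration - the entry only disappears
            //when the application pool recycles
            HttpRuntime.Cache.Insert(
                "AllRecords",
                records,
                null,
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration);
        }
    }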

    The only way I could stop this from happening was to set up a Scheduled Task that runs a script to make a web request straight after the application pool recycles.

    Luckily, I managed to find a PowerShell script on StackOverflow that will do exactly that:

    $request = [System.Net.WebRequest]::Create("") # URL of the page to request goes here
    $response = $request.GetResponse()
    $response.Close()
    
  • I don’t generally have a problem importing an Excel spreadsheet into one of my SQL Server tables, but today ended my run of Excel importing perfection.

    I experienced a problem where all rows that only contained numbers were ending up as NULL in my table after the import, which I thought was strange since the Excel spreadsheet did not contain empty cells. It contained a mixture of data formats: text and numbers.

    I decided to format all rows in my spreadsheet as text and try another import. No change.

    After much experimentation, the solution was to copy all columns and paste them into Notepad in order to remove all formatting inherited from Excel. I then re-copied all my data from Notepad back into my spreadsheet and carried out another import. Lo and behold, it worked!

    I don’t understand why I had this problem. It could have been due to the fact that the spreadsheet contained cells of different data formats, which confused the import process.

  • Back in 2007 I started blogging mainly for one selfish reason - to have an online repository of how I've approached things technically that I could refer back to when required. When I find things interesting, I like to document them so I can expand on them later. If a reader wants to expand on or contribute to what I’ve posted, they are welcome to do so.

    Blogging soon flourished into something more beneficial and pushed me to better myself in all aspects of web and application development. It turned me from a very introverted cowboy developer into an extrovert with the confidence to push the boundaries in my day-to-day job, just so I could have a reason to blog about it and publicly display what I know.

    I highly recommend blogging to anyone, especially in the technical industry. Reading other blogs has shown me that a solution to a problem is always up for interpretation. For example, I may find the solution to one of my issues on another site and then expand on it further on my own blog (with references to the original author, of course).

    This year, I decided to take things one step further and joined a well-known open community called StackOverflow. So far, it's been a great experience and I recently broke the 1000-point barrier. It took a lot of blood, sweat and tears. In some ways, knowing how people rate your answers in a forum can help show you where your skill set is lacking. I'm sure if I look back on some of my earlier posts I'll find some shockingly bad suggestions. Thankfully, there are more experienced posters who set you in the right direction.

    StackOverflow Profile - sbhomra

    Blogging and contributing to StackOverflow can also have an unexpected impact - employment. The web development industry is very competitive and it's up to you to set yourself apart from the rest. Potential employers can gain great insight into what you're capable of, and it demonstrates that you can communicate your technical knowledge.

    If I had known this earlier in my career, I'm sure things would've been different and I would have had the opportunity to find a job in web development sooner. So start early, even if you're still studying at college or university. When the time comes to get a job, you can truly show your potential!

  • I had around 2000 webpage URLs listed in a text file that needed to be turned into a simple Google sitemap.

    I decided to create a quick Google Sitemap generator console application fit for purpose. The program iterates through each line of the text file and passes it to an XmlTextWriter to create the required XML format.

    Feel free to copy and make modifications to the code below.

    Code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Xml;
    
    namespace GoogleSitemapGenerator
    {
        class Program
        {
            static void Main(string[] args)
            {
                string textFileLocation = String.Empty;
    
                if (args != null && args.Length > 0)
                {
                    textFileLocation = args[0];
                }
    
                if (!String.IsNullOrEmpty(textFileLocation))
                {
                    string fullSitemapPath = String.Format("{0}sitemap.xml", GetCurrentFileDirectory(textFileLocation));
    
                    //Read the text file (one URL per line) and write out the sitemap
                    using (StreamReader sr = File.OpenText(textFileLocation))
                    using (XmlTextWriter xmlWriter = new XmlTextWriter(fullSitemapPath, Encoding.UTF8))
                    {
                        xmlWriter.WriteStartDocument();
                        xmlWriter.WriteStartElement("urlset");
                        xmlWriter.WriteAttributeString("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9");
    
                        while (!sr.EndOfStream)
                        {
                            string currentLine = sr.ReadLine();
    
                            if (!String.IsNullOrEmpty(currentLine))
                            {
                                xmlWriter.WriteStartElement("url");
                                xmlWriter.WriteElementString("loc", currentLine);
                                xmlWriter.WriteElementString("lastmod", DateTime.Now.ToString("yyyy-MM-dd"));
                                //xmlWriter.WriteElementString("changefreq", "weekly");
                                //xmlWriter.WriteElementString("priority", "1.0");
    
                                xmlWriter.WriteEndElement();
                            }
                        }
    
                        xmlWriter.WriteEndElement();
                        xmlWriter.WriteEndDocument();
                        xmlWriter.Flush();
    
                        if (File.Exists(fullSitemapPath))
                            Console.Write("Sitemap successfully created at: {0}", fullSitemapPath);
                        else
                            Console.Write("Sitemap has not been generated. Please check your text file for any problems.");
    
                    }
                }
                else
                {
                    Console.Write("Please enter the full path to where the text file is situated.");
                }
            }
    
            static string GetCurrentFileDirectory(string path)
            {
                string[] pathArr = path.Split('\\');
    
                string newPath = String.Empty;
    
                for (int i = 0; i < pathArr.Length - 1; i++)
                {
                    newPath += pathArr[i] + "\\";
                }
    
                return newPath;
            }
        }
    }
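
    For reference, running the program against a text file of URLs produces a sitemap in the standard format shown below (indented here for readability; the URL and date are placeholders):

    <?xml version="1.0" encoding="utf-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/page-1</loc>
        <lastmod>2012-08-01</lastmod>
      </url>
    </urlset>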
    

    I will be uploading the console application project, including the executable, shortly.

  • Today I came across this really interesting tweet on my Twitter timeline:

    Read about why we’re deleting our Facebook page: facebook.com/limitedpressin… — Limited Run (@limitedrun) July 30, 2012

    Limited Run posted on their Facebook profile stating that they would be deleting their account due to the amount Facebook charges for clicks on their advertising. Here’s the interesting part: for around 80% of the clicks Facebook charged Limited Run for, JavaScript wasn't enabled. And if the person clicking the ad doesn't have JavaScript, it's very difficult for an analytics service to verify the click. Only 1-2% of people visiting their site have JavaScript disabled - nowhere near the 80% of clicks coming from Facebook.

    Interesting stuff.

    Before Limited Run takes down their Facebook profile, I’ve attached a screenshot of their post below:

    Limited Pressing Facebook Post

    Reading this post today reminded me of a news article I read on “virtual likes” and how advertising through Facebook doesn’t necessarily mean you’ll be any better off. It all comes down to the level of engagement users have with a profile page. If users are just liking the page and not interacting with your posts or general content, those likes are worth nothing. Some companies are wising up to how effective Facebook’s advertising strategy really is.

    Limited Run isn’t the first to ditch Facebook ads. General Motors pulled away from Facebook advertising earlier this year because the ads Facebook produces do not have the visual impact needed to justify the cost.

    I think certain aspects of Facebook are a joke, filled mostly with people looking for attention - not an effective marketing tool.

  • If I need to log in and authenticate a Facebook user on my ASP.NET website, I either use Facebook Connect's JavaScript library or SocialAuth.NET. Even though these two methods are sufficient for the purpose, I don't think they're the most ideal or efficient way.

    The Facebook Connect JavaScript library is quite basic and doesn't have the flexibility required for full .NET integration through FormsAuthentication, whereas SocialAuth.NET provides full .NET integration, with all authentication done server-side and minimal development required.

    I'd say if you are looking for a straightforward way to integrate social site authentication, SocialAuth.NET is the way to go. Its API can communicate with other social sites such as Twitter, LinkedIn and Gmail.

    Recently, I found a better and more efficient way to authenticate Facebook users on my site using Graph API and Hammock.

    Hammock is a C# REST library for .NET that greatly simplifies consuming and wrapping RESTful services. This allows us to embrace the social site’s core technology instead of relying on varied SDKs or APIs. There are many community-driven frameworks and APIs readily available on the Internet, but they can really cause problems if they evolve too quickly or haven’t been thoroughly tested.
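
    To give a flavour of what that looks like, a call to the Graph API through Hammock would be roughly along these lines (a sketch based on my reading of Hammock; treat the exact endpoint, parameter name and token handling as assumptions):

    using Hammock;

    public class FacebookGraph
    {
        public static string GetProfileJson(string accessToken)
        {
            //Point the client at the Graph API and request the current user's profile
            var client = new RestClient { Authority = "https://graph.facebook.com" };

            var request = new RestRequest { Path = "me" };
            request.AddParameter("access_token", accessToken);

            RestResponse response = client.Request(request);
            return response.Content; //JSON describing the authenticated user
        }
    }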

    Suddenelfilio has written a useful blog post on connecting to Facebook using Hammock. You will see from his example that you can interact with Facebook any way you want.

    The same principle could also be applied to other website APIs that use REST-based services, such as Twitter.

  • I always found writing code to read an RSS feed within my .NET application very time-consuming and long-winded. My RSS code was always a combination of using WebRequest, WebResponse, Stream, XmlDocument, XmlNodeList and XmlNode. That’s a lot of classes just to read an RSS feed.

    Yesterday, I stumbled on an interesting piece of code on my favourite programming site StackOverflow.com, where someone asked how to parse an RSS feed in ASP.NET. The answer was surprisingly simple. RSS feeds can now be consumed using the System.ServiceModel.Syndication namespace in .NET 3.5 SP1. All you need is two lines of code:

    var reader = XmlReader.Create("http://mysite.com/feeds/serializedFeed.xml");
    var feed = SyndicationFeed.Load(reader);
    

    Here’s a full example of how we can iterate through the items in the SyndicationFeed:

    public static List<BlogPost> Get(string rssFeedUrl)
    {
        var reader = XmlReader.Create(rssFeedUrl);
        var feed = SyndicationFeed.Load(reader);
    
        List<BlogPost> postList = new List<BlogPost>();
    
        //Loop through all items in the SyndicationFeed
        foreach (var i in feed.Items)
        {
            BlogPost bp = new BlogPost();
            bp.Title = i.Title.Text;
            bp.Body = i.Summary.Text;
            bp.Url = i.Links[0].Uri.OriginalString;
            postList.Add(bp);
        }
    
        return postList;
    }
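
    For completeness, BlogPost isn't part of the framework; it's just a simple container class along these lines (the property names are inferred from the snippet above):

    public class BlogPost
    {
        public string Title { get; set; } //Item title
        public string Body { get; set; }  //Item summary text
        public string Url { get; set; }   //Link to the original post
    }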
    

    That’s too simple, especially when compared to the 70 lines of code I normally use to do the exact same thing.

  • Ever since I decided to expand my online presence, I've thought the best step would be to get a better domain name. My current domain name is around twenty-nine characters in length. Ouch! So I was determined to find another name that was shorter and easier to remember.

    When the “.me” top-level domain (TLD) came out, I snapped up “surinder.me”, partly because all other domains with my first name were gone (you know who you are!) and partly because the “.me” extension seemed to fulfil what I wanted my website to focus on: ME! Having said that, I would have loved to get a “.com” domain, but I guess that’s what happens when you enter the online world so late.

    I was ready to move all my content over to “surinder.me” until one of my techy friends told me that things are still undecided when it comes to “.me” TLDs in general. Originally, the “.me” extension was assigned to Montenegro’s locale only, but it has fast gained traction over the years due to its simplicity and the wide range of possible domain names. Even companies such as Microsoft, Facebook, Wordpress and Samsung rushed to register their “.me” domains. Hence the reason I decided to get one.

    Companies seem to be using “.me” extensions for either URL-shortening services or redirects to partner sites with “.com” extensions. It doesn’t fill me with much confidence when “.me” extensions are used this way. Google’s software engineer Matt Cutts wrote a reassuring post on his Google+ profile earlier this year, stating:

    “…regardless of the top-level domain (TLD). Google will attempt to rank new TLDs appropriately, but I don't expect a new TLD to get any kind of initial preference over .com…If you want to register an entirely new TLD for other reasons, that's your choice, but you shouldn't register a TLD in the mistaken belief that you'll get some sort of boost in search engine rankings.”

    This should put all my “.me” fears to rest…right? Well, it’s nice to know Google won’t penalise a site based on its extension. In the world of the web, a search-optimised site is king (as it should be). It’s also good that Google has given “.me” (as a country extension) global status, given how it has been used of late. But if you check Google’s Geotargetable Domains article, the text in brackets worries me.

    Google’s Webmaster Tools Geotargetable Domains

    I get the feeling you can’t go wrong with a “.com” domain, provided you can find something meaningful to your cause. Steps are being made in the right direction for gccTLDs. For example, Webmaster Tools gives you the option to geographically target your “.me” site. However, I can’t find anything concrete to alleviate my concerns in the long run.

    So where does this leave me? Well, we’ll just have to wait and see whether my future domain contains a .me extension. :)