Blog

Blogging on programming and life in general.

  • It’ll Be A Sad Day When iGoogle Is No More

    Amongst the many services Google provides, the iGoogle portal has to be at the top of my list. It's my one-stop shop for daily news, weather forecasts and playing the odd game. I was surprised when Google announced they would discontinue the service from November 2013. I was reminded of the deadline on my iGoogle page today, reinforcing that this really is going to happen. I was hoping Google would reconsider, but it doesn't look like that's going to happen.

    iGoogle Discontinued

    Google's decision to discontinue iGoogle is, in my opinion, a little rash. They claim: "With modern apps that run on platforms like Chrome and Android, the need for something like iGoogle has eroded over time". And this is where the problem lies. Why does everything nowadays have to revolve around an app? Some things are best left accessible through a browser.

    I like getting to work in the mornings and gazing over the day's topics. It's bloody informative! I've yet to find an app that matches what iGoogle offers: a single page where everything is displayed without having to click through to another page. Google Chrome's substitutes require me to do exactly that. Big waste of time.

    I'm not the type of person to be concerned about change and in most cases I welcome it with open arms. But this will take a little time to get used to.

    Goodbye old friend, you’ll be sorely missed!

  • Safari iOS6

    It wasn't until today that I found the Safari browser used on the iPad and iPhone caches pages so aggressively that it can break their intended functionality, to the point of affecting the user experience. I think Apple has gone a step too far in making their browser uber-efficient to minimise page loading times.

    We can accept that browsers will cache stylesheets and client-side scripts, but I never expected Safari to go as far as caching responses from web services. This is a big issue. Something as simple as the following will have problems in Safari:

    // JavaScript function calling web service
    function GetCustomerName(id)
    {
        var name = "";
    
        $.ajax({
            type: "POST",
            url: "/Internal/ShopService.asmx/GetCustomerName",
            data: "{ 'id' : '" + id + "' }",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            cache: false,
            async: false, // run synchronously so 'name' is populated before the function returns
            success: function (result) {
                var data = result.d;
                name = data;
            },
            error: function () {
            },
            complete: function () {
            }
        });
        
        return name;
    }
    
    //ASP.NET Web Service method
    [WebMethod]
    public string GetCustomerName(int id)
    {
       return CustomerHelper.GetFullName(id);
    }
    

    In the past, to ensure my jQuery AJAX requests were not cached, the "cache: false" option within the AJAX call normally sufficed. Not so if you're making POST web service requests. It was only recently that I found the "cache: false" option has no effect on POST requests, as stated in the jQuery API documentation:

    Pages fetched with POST are never cached, so the cache and ifModified options in jQuery.ajaxSetup() have no effect on these requests.

    In addition to the jQuery AJAX cache option, I implemented the practical techniques covered in the tutorial: How to stop caching with jQuery and JavaScript.

    Luckily, I found an informative StackOverflow post by someone who experienced the exact same issue a few days ago. It looks like the caching bug is still prevalent in Apple's newest operating system, iOS6*. Well, you didn't expect Apple to fix important problems like these now, would you (referring to the Maps fiasco!)? The StackOverflow poster found a suitable workaround: passing a timestamp to the web service method being called, like so (modifying the code above):

    // JavaScript function calling web service with time stamp addition
    function GetCustomerName(id)
    {
        var timestamp = new Date();
    
        var name = "";
    
        $.ajax({
            type: "POST",
            url: "/Internal/ShopService.asmx/GetCustomerName",
            data: "{ 'id' : '" + id + "', 'timestamp' : '" + timestamp.getTime() + "' }", //Timestamp parameter added.
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            cache: false,
            async: false, // run synchronously so 'name' is populated before the function returns
            success: function (result) {
                var data = result.d;
                name = data;
            },
            error: function () {
            },
            complete: function () {
            }
        });
        
        return name;
    }
    
    //ASP.NET Web Service method with time stamp parameter
    [WebMethod]
    public string GetCustomerName(int id, string timestamp)
    {
        // The timestamp parameter is intentionally unused; it only makes each request unique.
        return CustomerHelper.GetFullName(id);
    }
    

    The timestamp parameter doesn't need to do anything once passed to the web service. It simply makes every call unique, ensuring the response is never served from cache.
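
    An alternative, sketched here only as an untested idea rather than something from the original workaround, is to have the web method itself emit no-cache headers so Safari never stores the response in the first place. This assumes the service class derives from System.Web.Services.WebService (so the Context property is available) and that System.Web is referenced:

    //ASP.NET Web Service method emitting no-cache headers (alternative sketch)
    [WebMethod]
    public string GetCustomerName(int id)
    {
        // Instruct Safari (and any intermediate proxies) not to cache this response.
        Context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Context.Response.Cache.SetNoStore();

        return CustomerHelper.GetFullName(id);
    }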

    *UPDATE: After further testing it looks like only iOS6 contains the AJAX caching bug.

    Working in the web industry and having the opportunity to develop a wide variety of websites, I like to take a snapshot of a few pages for my portfolio (working on that!). But I generally run into issues when taking a screenshot of a very long, scrolling webpage.

    Luckily, I found a really useful Firefox add-on called Fireshot. Fireshot makes it really easy to screenshot an entire page. Once you have made a screenshot, you can carry out the following tasks from the comfort of your browser:

    • Upload to Facebook, Picasa or Flickr
    • Save to disk as PDF/PNG/GIF/JPEG/BMP
    • Send to clipboard
    • Print
    • E-mail
    • Export

    I was expecting this tool to generate screen grabs really slowly, but even on long pages with a lot of content, images are generated quickly. Take a look at the screenshot I made of "http://www.theverge.com" here.

    Definitely try it out.

    I've been working on a .NET library to retrieve all images from a user's Twitpic account. I thought it would be quite a useful .NET library to have, since users (including me) have been requesting one on various websites and forums.

    I will note that this is NOT a fully functioning Twitpic library that makes use of all the API requests listed on Twitpic's developer site. Currently, the library only covers returning information about a specified user (users/show), which is enough to create a nice picture gallery.

    My Twitpic .NET library will return the following information:

    • ID
    • Twitter ID
    • Location
    • Website
    • Biography
    • Avatar URL
    • Image Timestamp
    • Photo Count
    • Images

    Code Example:

    private void PopulateGallery()
    {
        var hasMoreRecords = false;
    
        //Twitpic.Get(<username>, <page-number>)
        TwitpicUser tu = Twitpic.Get("sbhomra", 1);
    
        if (tu != null)
        {
            if (tu.PhotoCount > 20)
                hasMoreRecords = true;
    
            if (tu.Images != null && tu.Images.Count > 0)
            {
                //Bind Images to Repeater
                TwitPicImages.DataSource = tu.Images;
                TwitPicImages.DataBind();
            }
            else
            {
                TwitPicImages.Visible = false;
            }
        }
        else
        {
            TwitPicImages.Visible = false;
        }
    }
    

    Using the code above as a basis, I managed to create a simple photo gallery of my own: /Photos.aspx

    If you experience any errors or issues, please leave a comment.

    Download: iSurinder.TwitPic.zip (5.15 kb)

  • HTTP Request Script

    In one of my website builds, I needed to load a couple of thousand records from a database permanently into the .NET cache. Even though I set the cache entry to never expire, it still gets cleared whenever the application pool recycles (currently set to every 24 hours). As you can expect, if a user happens to visit the site soon after the cache is cleared, they experience excessive page loading times.
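
    For context, this is the kind of never-expiring cache insertion I mean; a minimal sketch with a hypothetical cache key, and with the records passed in from wherever they are loaded:

    using System.Web;
    using System.Web.Caching;

    public static class RecordCache
    {
        // Caches the supplied records with no expiration. The entry still
        // disappears when the application pool recycles, hence the warm-up
        // request described below.
        public static void Populate(object records)
        {
            HttpRuntime.Cache.Insert(
                "AllRecords",               // hypothetical cache key
                records,
                null,                       // no cache dependency
                Cache.NoAbsoluteExpiration,
                Cache.NoSlidingExpiration);
        }
    }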

    The only way I could prevent this from happening was to set up a Scheduled Task that runs a script to make a web request straight after the application pool recycles.

    Luckily, I managed to find a PowerShell script on StackOverflow that will do exactly that:

    $request = [System.Net.WebRequest]::Create("")  # insert the URL of the page to warm up
    $response = $request.GetResponse()
    $response.Close()
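
    Saving the script as something like warmup.ps1 (a hypothetical name) and pointing the Scheduled Task at powershell.exe -File warmup.ps1, timed to run just after the recycle, is all that's needed.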
    
    I don't generally have a problem importing an Excel spreadsheet into one of my SQL Server tables. But today would end my run of Excel importing perfection.

    I experienced a problem where all rows that only contained numbers were ending up as NULL in my table after the import, which I thought was strange since the Excel spreadsheet did not contain empty cells. It contained a mixture of data formats: text and numbers.

    I decided to format all rows in my spreadsheet as text and attempt another import. No change.

    After much experimentation, the solution was to copy all columns and paste them into Notepad in order to remove all formatting inherited from Excel. I then re-copied all my data from Notepad back into my spreadsheet and carried out another import. Lo and behold, it worked!

    I don't understand why I had this problem. It could have been due to the fact that the spreadsheet contained cells of different data formats, causing confusion during the import process.

    Back in 2007 I started blogging mainly for one selfish reason: to have an online repository of how I've approached things technically, to refer back to when required. When I find things interesting, I like to document them so I can expand on them later. If a public user wants to expand or contribute to what I've posted, then they are welcome to do so.

    Blogging soon flourished into something more beneficial and pushed me to better myself in all aspects of web and application development. It turned me from a very introverted cowboy-developer into an extrovert with the confidence to push the boundaries in my day-to-day job, just so I could have a reason to blog about it and publicly display what I know.

    I highly recommend blogging to anyone, especially in the technical industry. Reading other blogs has shown me that a solution to a problem is always up for interpretation. For example, I may find the solution to one of my issues on another site and then expand on it further on my own blog (with references to the original author, of course).

    This year, I decided to take things one step further and joined a well-known open community called StackOverflow. So far, it's been a great experience and I recently broke the 1000-point barrier. It took a lot of blood, sweat and tears. In some ways, knowing how people rate your answers in a forum can show you where your skill set is lacking. I'm sure if I look back on some of my earlier posts, I'll find some shockingly bad suggestions. Thankfully, there are more experienced posters who set you in the right direction.

    StackOverflow Profile - sbhomra

    Blogging and contributing to StackOverflow can also have an unexpected impact: employment. The web development industry is very competitive, and it's up to you to set yourself apart from the rest. Potential employers can gain great insight into what you're capable of, and it demonstrates that you can communicate your technical knowledge.

    If I had known this earlier in my career, I'm sure things would've been different and I would have had the opportunity to find a job in web development sooner. So start early, even if you're still studying at college or university. When the time comes to get a job, you can truly show your potential!

    I had around 2000 webpage URLs listed in a text file that needed to be turned into a simple Google sitemap.

    I decided to create a quick Google sitemap generator console application fit for purpose. The program iterates through each line of a text file and writes it out via an XmlTextWriter to create the required XML format.

    Feel free to copy and make modifications to the code below.

    Code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Xml;
    
    namespace GoogleSitemapGenerator
    {
        class Program
        {
            static void Main(string[] args)
            {
                string textFileLocation = String.Empty;
    
                if (args != null && args.Length > 0)
                {
                    textFileLocation = args[0];
                }
    
                if (!String.IsNullOrEmpty(textFileLocation))
                {
                    string fullSitemapPath = String.Format("{0}sitemap.xml", GetCurrentFileDirectory(textFileLocation));
    
                    //Read text file
                    StreamReader sr = File.OpenText(textFileLocation);
    
                    using (XmlTextWriter xmlWriter = new XmlTextWriter(fullSitemapPath, Encoding.UTF8))
                    {
                        xmlWriter.WriteStartDocument();
                        xmlWriter.WriteStartElement("urlset");
                        xmlWriter.WriteAttributeString("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9");
    
                        while (!sr.EndOfStream)
                        {
                            string currentLine = sr.ReadLine();
    
                            if (!String.IsNullOrEmpty(currentLine))
                            {
                                xmlWriter.WriteStartElement("url");
                                xmlWriter.WriteElementString("loc", currentLine);
                                xmlWriter.WriteElementString("lastmod", DateTime.Now.ToString("yyyy-MM-dd"));
                                //xmlWriter.WriteElementString("changefreq", "weekly");
                                //xmlWriter.WriteElementString("priority", "1.0");
    
                                xmlWriter.WriteEndElement();
                            }
                        }
    
                        sr.Close();

                        xmlWriter.WriteEndElement();
                        xmlWriter.WriteEndDocument();
                        xmlWriter.Flush();
    
                        if (File.Exists(fullSitemapPath))
                            Console.Write("Sitemap successfully created at: {0}", fullSitemapPath);
                        else
                            Console.Write("Sitemap has not been generated. Please check your text file for any problems.");
    
                    }
                }
                else
                {
                    Console.Write("Please enter the full path to where the text file is situated.");
                }
            }
    
            static string GetCurrentFileDirectory(string path)
            {
                string[] pathArr = path.Split('\\');
    
                string newPath = String.Empty;
    
                for (int i = 0; i < pathArr.Length - 1; i++)
                {
                    newPath += pathArr[i] + "\\";
                }
    
                return newPath;
            }
        }
    }
    

    I will be uploading the console application project, including the executable, shortly.
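
    Assuming the compiled executable keeps the project name, usage would be something like GoogleSitemapGenerator.exe C:\Urls\pages.txt (a hypothetical path): the program reads one URL per line and writes sitemap.xml into the same directory as the text file.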

    Today I came across this really interesting tweet on my Twitter timeline:

    Read about why we’re deleting our Facebook page: facebook.com/limitedpressin… — Limited Run (@limitedrun) July 30, 2012

    Limited Run posted on their Facebook profile stating that they would be deleting their account due to the amount Facebook was charging for clicks on their advertising. Here's the interesting part: for about 80% of the clicks Facebook charged Limited Run for, JavaScript wasn't enabled. And if the person clicking the ad doesn't have JavaScript enabled, it's very difficult for an analytics service to verify the click. Only 1-2% of people visiting their site have JavaScript disabled, not 80% like the clicks coming from Facebook.

    Interesting stuff.

    Before Limited Run takes down their Facebook profile, I’ve attached a screenshot of their post below:

    Limited Pressing Facebook Post

    Reading this post today reminded me of a news article I read on "virtual likes" and how advertising through Facebook doesn't necessarily mean you'll be any better off. It all comes down to the level of engagement users have with a profile page. If users are just liking the page and not interacting with your posts or general content, those likes are worth nothing. Some companies are wising up to how effective Facebook's advertising strategy really is.

    Limited Run isn't the first to ditch Facebook ads; General Motors pulled away from Facebook advertising earlier this year because the ads Facebook produces don't have the visual impact needed to justify the cost.

    I think certain aspects of Facebook are a joke, filled mostly with people looking for attention, and not an effective marketing tool.

  • Facebook Connect

    If I need to log in and authenticate a Facebook user on my ASP.NET website, I either use Facebook Connect's JavaScript library or SocialAuth.NET. Even though these two methods are sufficient for the purpose, I don't think either is the most ideal or efficient way.

    The Facebook Connect JavaScript library is quite basic and doesn't have the flexibility required for full .NET integration through FormsAuthentication, whereas SocialAuth.NET provides full .NET integration and all authentication is done server-side with minimal development.

    I'd say if you are looking for a straightforward way to integrate social site authentication, SocialAuth.NET is the way to go. Its API can communicate with other social sites such as Twitter, LinkedIn and Gmail.

    Recently, I found a better and more efficient way to authenticate Facebook users on my site using Graph API and Hammock.

    Hammock is a C# REST library for .NET that greatly simplifies consuming and wrapping RESTful services. It allows us to embrace the social site's core technology instead of relying on varied SDKs or APIs. There are many community-driven frameworks and APIs readily available on the Internet, but they can really cause problems if they evolve too quickly or haven't been thoroughly tested.

    Suddenelfilio has written a useful blog post on connecting to Facebook using Hammock. You will see from his example that you can interact with Facebook any way you want.
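
    As a rough illustration of the idea (a minimal sketch based on my reading of Hammock's RestClient and RestRequest types, and assuming you already have an OAuth access token from the Facebook login flow):

    using Hammock;

    public class FacebookGraphClient
    {
        // Fetches the authenticated user's profile JSON from the Graph API.
        public string GetProfile(string accessToken)
        {
            var client = new RestClient
            {
                Authority = "https://graph.facebook.com"
            };

            var request = new RestRequest
            {
                Path = "me"
            };

            // Access token obtained elsewhere via the Facebook OAuth flow.
            request.AddParameter("access_token", accessToken);

            RestResponse response = client.Request(request);

            // Raw JSON describing the authenticated user.
            return response.Content;
        }
    }

    From there, the returned JSON can be deserialised and used to drive your own FormsAuthentication ticket, which is where this approach slots into .NET authentication.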

    The same principle could also be applied to other website APIs that are REST-based, such as Twitter's.