Blog

Posts written in 2012.

  • Published on
    -
    2 min read

    Retrieve A Single YouTube Video in .NET

    Back in 2009 I wrote a simple web application to output all videos uploaded to a user’s channel. Luckily, hardly anything has changed since. Now you only need to register for a Developer Key and state an Application Name; you are no longer required to provide a Client ID.

    This time round, I needed to output data onto my page from a single YouTube entry whenever a user pastes the URL of a YouTube video into one of my form fields.

    using System;
    using System.Linq;
    using System.Web.Configuration;
    using Google.YouTube;
    using Google.GData.Client;
    
    namespace MyProject.Helpers.Common
    {
        public class YouTubeHelper
        {
            private static string YouTubeDeveloperKey = WebConfigurationManager.AppSettings["YouTubeDeveloperKey"];
            private static string YouTubeAppName = WebConfigurationManager.AppSettings["YouTubeAppName"];
    
            //Get YouTube video
            public static Video YouTubeVideoEntry(string videoID)
            {
                YouTubeRequestSettings settings = new YouTubeRequestSettings(YouTubeAppName, YouTubeDeveloperKey);
                YouTubeRequest request = new YouTubeRequest(settings);
    
                //Link to the feed we wish to read from
                string feedUrl = String.Format("http://gdata.youtube.com/feeds/api/videos/{0}", videoID);
    
                Feed<Video> videoFeed = request.Get<Video>(new Uri(feedUrl));
    
                return videoFeed.Entries.SingleOrDefault();
            }
    
            //Extract the YouTube ID from the web address.
            public static string GetVideoID(string videoUrl)
            {
                Uri tempUri = new Uri(videoUrl); 
    
                string sQuery = tempUri.Query;
    
                return System.Web.HttpUtility.ParseQueryString(sQuery).Get("v");
            }
    
            //Get required YouTube video information
            public static YouTubeDetail GetVideoInformation(string url)
            {
                Video v = YouTubeVideoEntry(GetVideoID(url));
    
                //Pass required YouTube information to custom class called YouTubeDetail
                YouTubeDetail vDetail = new YouTubeDetail();
                vDetail.ID = v.VideoId;
                vDetail.Title = v.Title;
                vDetail.Description = v.Description;
    
                return vDetail;
            }
        }
    }
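
    The code above passes values into a custom class called “YouTubeDetail”. For completeness, here is a minimal sketch of what that class could look like (the property names and types are assumptions based on the usage above):

    public class YouTubeDetail
    {
        public string ID { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
    }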
    

    Hopefully, my “YouTubeHelper” class is easy to follow. All you need is the “GetVideoInformation()” method: simply pass it a link to the page where your YouTube video resides. At the moment only full YouTube URLs are accepted, not the short URL format (http://youtu.be/).
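
    As a quick usage example (the video URL below is purely illustrative):

    // Purely illustrative video URL
    YouTubeDetail video = YouTubeHelper.GetVideoInformation("http://www.youtube.com/watch?v=dQw4w9WgXcQ");
    Console.WriteLine(video.Title);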

  • Ever since Google+ came along, I noticed website authors were getting their picture displayed next to articles they’ve written in Google searches. Not to be left out of this trend, I decided I would attempt to get my ugly-mug displayed next to all my authored content as well.

    Having carried out almost all of Google’s requirements through minor HTML modifications and verifying my Google+ account is linked to this blog, it’s finally happened!

    Author information in search results

    You may find that it can take some time for authorship information to appear in search results. I carried out all the necessary steps back in January 2012, so it’s taken a good 3 months to get picked up. I am sure times will vary depending on the popularity of your site and the amount of authored content it contains.

    Here are the four basic things I did to get my mug-shot in Google’s search results:

    1. Make sure your Google+ profile has a recognisable, high-quality headshot photo.
    2. Link your site to your Google+ account by adding a badge (see the example markup after this list).
    3. Verify your Google+ account with an email address on your domain.
    4. Add a link to your site in the “Contributor” box in your Google+ profile.
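
    For reference, the badge in step 2 essentially boils down to a link back to your Google+ profile marked up with rel="author". A hedged example (the profile ID is a placeholder, and the exact badge code Google generates may differ):

    <a href="https://plus.google.com/<Your Google+ Profile ID>" rel="author">My Google+ profile</a>
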
  • I've been playing around with creating multilingual sites in Kentico (version 5.5R2). One of Kentico’s (many) strengths is that an existing site installation can be configured for multilingual support in a straightforward manner.

    I came across a perplexing problem when trying to view both of my multilingual sites within the same browser. In my development environment I created two domain aliases and sites in IIS:

    1. Danish – http://172.16.1.28:8010
    2. British – http://172.16.1.28:8011

    As you can see, I differentiate between two of my site cultures by the port number.

    Kentico Domain Aliases

    I could only view one version of the site on both URLs. This was all due to a cookie that gets created on your first site visit and stores the “CMSPreferredCulture” based on the domain name, excluding the port number. For those who aren’t aware, “CMSPreferredCulture” simply contains a localisation code for the site. You might be thinking: what’s the big deal?

    Kentico CMSPreferredCulture Cookie

    Well, an issue only occurs when you are developing and testing the multilingual setup within a local environment and don’t have access to a range of unique domains. I thought I could use the different port numbers in my environment to distinguish between the different site cultures.

    I was wrong. Kentico only uses the domain name and excludes any port numbers. If you need to view different culture versions of your site at the same time, you will need to view them in different browsers.
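
    One possible workaround, given that the cookie is keyed on the domain name alone, is to distinguish the cultures by hostname rather than port number. Assuming you can edit your hosts file, map two illustrative hostnames to your development server, then add them as domain aliases in Kentico and as host-header bindings in IIS. Each hostname should then get its own “CMSPreferredCulture” cookie:

    172.16.1.28     danish.mysite.local
    172.16.1.28     british.mysite.local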

    A small thing like this can cause some bewilderment for a Kentico novice. It sure bewildered me. You would expect to have the ability to view different versions of a site based on the domain aliases set up in Kentico.

    If anyone can suggest a better work-around, please leave a comment.

  • If you are storing images or files using a “Direct Uploader” field type within a document and you need to retrieve them in your code, you have two options:

    1. Read up on the Kentico API (DocumentHelper.GetAttachment() methods).
    2. Use Kentico’s “GetFile.aspx” page to reference the file itself.

    As much as I would like to do things properly and familiarise myself with the Kentico API in greater detail, project time constraints can be a hindrance. In this case, I used the “GetFile.aspx” page in the following manner:

    <img src="/CMSPages/GetFile.aspx?guid=<Attachment GUID>" title="My Kentico Image" />
    

    The GUID to use in place of “Attachment GUID” can be found in the document where you use the “Direct Uploader” field.
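
    If you are building this link up in code-behind, a one-liner does the job (a trivial sketch; “attachmentGuid” is assumed to come from your document data):

    string fileUrl = String.Format("/CMSPages/GetFile.aspx?guid={0}", attachmentGuid);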

    I don’t know if this is what you would necessarily call a hack, but it works!

  • Published on
    -
    1 min read

    Running Facebook Applications Locally

    Having the ability to run and develop Facebook applications within the comfort of a local environment is a must. Previously, I always thought that in order to work on Facebook applications, a public-facing URL was required to allow Facebook to communicate with your application directly. Fortunately, this is not the case.

    All you need to do is set up a server alias in your hosts file and use this alias as an “App Domain” within your Facebook Application settings.

    Quick Example

    I created a new site in Microsoft IIS called “facebook.surinder-test.com” running under port 8008. Feel free to change the port number to your own choosing. To browse the site, we need to add this web address to the Windows hosts file (C:\Windows\System32\drivers\etc):

    # localhost name resolution is handled within DNS itself.
    #    127.0.0.1       localhost
    #    ::1             localhost
    
    127.0.0.1     facebook.surinder-test.com
    127.0.0.1     localhost
    

    This will route all requests for “facebook.surinder-test.com” to localhost.

    Next, make the following changes to your Facebook Application settings page:

    FB App Screen
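
    In case the screenshot above is unclear, the key entries are along these lines (the field names are based on the 2012-era Facebook settings page and may since have changed):

    App Domains:  facebook.surinder-test.com
    Site URL:     http://facebook.surinder-test.com:8008/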

  • Published on
    -
    1 min read

    Speed Up Firebug

    I noticed recently that Firebug was running very slowly whilst inspecting elements or debugging client-side scripts. In the past, when noticing performance issues in Firefox, simply closing and reopening the browser normally resolved them.

    What I found after investigating this problem online is that Firebug keeps a record of all breakpoints and all sites where Firebug was used. As you can imagine, this accumulates over time. To remove all history, go to your Firefox profile directory, which can be found here:

    C:\Users\<Windows User Account Name>\AppData\Roaming\Mozilla\Firefox\Profiles\<Firefox Profile ID>.default\firebug
    

    You will find two files within the firebug folder:

    • annotations.json - contains a history of website browsing.
    • breakpoints.json - contains currently set breakpoints.

    Close all running instances of Firefox and delete both files within the directory.
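
    If you prefer the command line, something along these lines would do it (run after closing Firefox, substituting the profile ID placeholder for your own):

    del "%APPDATA%\Mozilla\Firefox\Profiles\<Firefox Profile ID>.default\firebug\annotations.json"
    del "%APPDATA%\Mozilla\Firefox\Profiles\<Firefox Profile ID>.default\firebug\breakpoints.json"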

    Judging by posts from other users online, there could be other contributing factors to Firebug's sluggishness at times. Hopefully, by carrying out the steps above you will notice a difference.

  • Published on
    -
    1 min read

    Minimise and Obfuscate Your JavaScript Code

    We all know minimising our JavaScript files prior to moving a site into a production environment is best practice. The main reason we do it is that compressed JavaScript files allow sites to run faster at a lower bandwidth cost and (to some extent) make the code harder to understand.

    But what if we wanted to have the ability to render a JavaScript file completely unintelligible to prying eyes? This is the very question I asked myself prior to deploying a site I worked on to a live environment.

    Even though standard JavaScript minimisers remove comments, white space and use shorter variable names, we can take things a step further.

    I found a great site that manages to render your code into complete gibberish. You can give it a go by going to: http://javascriptobfuscator.com/.

    NOTE: Just as JavaScript code can be easily minimised, it can just as easily be “beautified” by going to: http://jsbeautifier.org/. Nevertheless, using the link above is a better deterrent when compared to other minimisers in my opinion.

  • Being a Web Developer and owning my own website, I’m quite interested in seeing how my site is doing when I am not in the vicinity of my computer – mainly analytics and advertising revenue. Even though Google Analytics and AdSense provide me with really good information, I was interested in seeing if there were any alternatives in app form on Android.

    To my surprise, there were no official app alternatives for AdSense or Analytics. Thankfully, there are a few unofficial apps currently available to download free on the Android Market. Here are (in my opinion) the best ones:

    gAnalytics

    gAnalytics Screen Shot

    gAnalytics provides all the necessary types of stats you would require. Even though it is currently in beta, this hasn’t restricted the app in any way and I haven’t experienced a single issue whilst using it. You have access to the following types of reporting (not a complete list):

    • Visitor stats – pageviews, average time on site, bounce rate and new visitors.
    • Demographics – language, country and city.
    • System stats – browser, operating system, screen resolution and service provider.
    • Referral/Direct traffic.
    • Search Engine traffic – keyword and search query.
    • Content stats – page, search term and exit pages.

    If that wasn’t enough, gAnalytics lets you retrieve statistics from a given date period.

    To summarise, gAnalytics is a comprehensive and brilliant all-round app.

    AdSense Dashboard

    AdSense Dashboard Screenshot

    I would say the AdSense offering in terms of reporting isn’t as comprehensive as gAnalytics. What AdSense Dashboard does do well is provide you with a heads-up view of all AdSense metrics, such as:

    • CPM
    • CTR
    • Page views
    • Clicks
    • Estimated revenue

    Unfortunately, you don’t get an option to view AdSense metrics over a specific date period: only today, this month and year to date.

    AdSense Dashboard is a simple app giving you top-level stats and revenue information across four different screens.

  • Published on
    -
    2 min read

    Stay out of trouble! Backup your files with RoboCopy

    Apologies for making a reference to the social-satire/sci-fi film that is RoboCop (1987) in my post title. It just had to be done when talking about a tool called RoboCopy. For those who aren’t aware of what RoboCopy is: where have you been? In all honesty, I myself had never heard of it until a few days ago.

    RoboCopy is a command-line tool that allows you to copy files from one directory to another. One of RoboCopy’s most popular uses is its ability to copy large volumes of files quicker than carrying out a manual copy and paste through a GUI, making it ideal for backup jobs. So you could easily write a backup script to run via a Scheduled Task on a daily basis.

    I managed to back up around 80GB of files in less than an hour. What’s even more impressive is that I could run numerous RoboCopy scripts at the same time. Currently, I have only run two scripts simultaneously, just to be on the safe side.

    Prior to RoboCopy, I was using another command-line tool: XCopy. For my backup purposes, XCopy did exactly what I wanted it to do until I came across a misleading error message: “Insufficient memory”. You would think this message means the destination directory your files are copying to is full, or that there aren’t enough memory resources. In actual fact, this error only happens when the fully qualified file path is longer than 254 characters. Unfortunately, I couldn’t get around this error due to the nature of how my directories are structured. Luckily, RoboCopy doesn’t have this limitation.

    One of the major strengths of RoboCopy is the number of useful options you have at your disposal. A few examples are:

    • Moving files.
    • Excluding certain files and file types.
    • Detailed logging that tells you which files have been newly copied or overwritten.
    • Parameterised scripting.

    Example Script

    @ECHO OFF
    
    ECHO PROCESSING BACKUP ...
    
    REM /mir mirrors the source directory tree, including deletions
    REM /sl copies symbolic links as links rather than their targets
    REM /log writes output to a date-stamped log file
    robocopy \\work\Projects\ F:\Projects\Backup\ /mir /sl ^
        /log:"F:\Logs\Projects-%date:/=%.log"
    
    ECHO BACKUP COMPLETED!
    

    The script I have provided is what I use to back up files through a Scheduled Task that runs at the end of every day. This script mirrors the source drive exactly, so any files that have been deleted, updated or created will have the same effect on the backup drive. In addition, a log file is created while RoboCopy is running.
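
    To schedule the script itself, something like the following would work (the task name, script path and start time are illustrative):

    schtasks /create /tn "Nightly Backup" /tr "C:\Scripts\backup.bat" /sc daily /st 18:00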


  • Published on
    -
    1 min read

    Time for a new chapter in my online presence

    After blogging under the “computing-studio.com” domain name for around 4 years, I think it’s time for a new chapter in my online presence. Last Friday, I decided to buy a new domain name: http://surinder.me. At the time, the “computing-studio.com” domain seemed like a great idea, a place where my fellow techy University friends and I would contribute. Unfortunately, things didn’t work out and I decided to go it alone.

    From looking at the number of blog posts I have written (95 at the time of writing), you would be forgiven for assuming that I am not the most persistent blogger. I believe the domain has a part to play. After all, “computing-studio.com” somewhat limits what I can write about and doesn’t really give me the freedom to talk about things outside my technical field.

    Even though I am a techy at heart (I guess being a web developer doesn’t help), I talk about other non-code-related things through my Google+ and Twitter posts. I see the new domain name as just the start. I am hoping to collate all my contributions from other sites under the surinder.me address, so everything is about…well…me!

    All exciting stuff! I am not looking forward to implementing all the redirects and having to work my way up Google’s page rank again. But it’s something that has to be done.
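
    For anyone facing a similar move on IIS, the redirects can be handled with a single 301 rule in web.config. A hedged sketch, assuming the IIS URL Rewrite module is installed (the rule name is arbitrary):

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to surinder.me" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^(www\.)?computing-studio\.com$" />
            </conditions>
            <action type="Redirect" url="http://surinder.me/{R:1}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>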

    Watch this space!