Blog

Posts written in August 2012.

  • .NET Library To Retrieve Twitpic Images

    I’ve been working on a .NET library to retrieve all images from a user’s Twitpic account. I thought it would be a useful .NET library to have, since a number of people (including me) have been requesting one on various websites and forums.

    I will note that this is NOT a fully functioning Twitpic library that makes use of every API request listed on Twitpic’s developer site. Currently, the library only covers returning information about a specified user (users/show), which is enough to create a nice picture gallery.

    My Twitpic .NET library will return the following information:

    • ID
    • Twitter ID
    • Location
    • Website
    • Biography
    • Avatar URL
    • Image Timestamp
    • Photo Count
    • Images
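
    For anyone curious about how the user lookup might work under the hood, here is a minimal sketch of calling Twitpic’s users/show endpoint and reading the response back as a dictionary. The endpoint URL, parameters and loosely-typed mapping are my assumptions for illustration; the downloadable library may well be structured differently.

    using System.Collections.Generic;
    using System.Net;
    using System.Web.Script.Serialization; //requires a reference to System.Web.Extensions

    public static class TwitpicUsersShowSketch
    {
        //Assumed users/show endpoint and parameters - check Twitpic's developer
        //site for the exact URL format
        private const string UsersShowUrl =
            "http://api.twitpic.com/2/users/show.json?username={0}&page={1}";

        public static IDictionary<string, object> GetUser(string username, int page)
        {
            using (var client = new WebClient())
            {
                //Fetch the raw JSON for the requested user and page of images
                string json = client.DownloadString(string.Format(UsersShowUrl, username, page));

                //Deserialise into a loosely-typed dictionary; a real library would
                //map this onto strongly-typed classes exposing the fields listed above
                var serializer = new JavaScriptSerializer();
                return serializer.Deserialize<Dictionary<string, object>>(json);
            }
        }
    }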

    Code Example:

    private void PopulateGallery()
    {
        var hasMoreRecords = false;

        //Twitpic.Get(<username>, <page-number>)
        TwitpicUser tu = Twitpic.Get("sbhomra", 1);

        if (tu != null)
        {
            //More images are available than fit on a single page
            if (tu.PhotoCount > 20)
                hasMoreRecords = true;

            if (tu.Images != null && tu.Images.Count > 0)
            {
                //Bind Images to Repeater
                TwitPicImages.DataSource = tu.Images;
                TwitPicImages.DataBind();
            }
            else
            {
                //No images returned - hide the gallery
                TwitPicImages.Visible = false;
            }
        }
        else
        {
            //User lookup failed - hide the gallery
            TwitPicImages.Visible = false;
        }
    }
    

    Using the code above as a basis, I managed to create a simple Photo Gallery of my own: /Photos.aspx
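
    The hasMoreRecords flag above can also be used to drive simple paging. Below is a rough sketch of a "next page" handler built on the same Twitpic.Get(<username>, <page-number>) call; the NextPage button, the ViewState-backed page counter and the 20-images-per-page assumption are mine and not dictated by the library.

    //Page number persisted across postbacks
    private int CurrentPage
    {
        get { return ViewState["CurrentPage"] == null ? 1 : (int)ViewState["CurrentPage"]; }
        set { ViewState["CurrentPage"] = value; }
    }

    protected void NextPage_Click(object sender, EventArgs e)
    {
        //Move to the next page and re-query Twitpic
        CurrentPage++;
        TwitpicUser tu = Twitpic.Get("sbhomra", CurrentPage);

        if (tu != null && tu.Images != null && tu.Images.Count > 0)
        {
            //Bind the requested page of images to the Repeater
            TwitPicImages.DataSource = tu.Images;
            TwitPicImages.DataBind();
        }

        //Hide the button once all photos have been displayed (assuming 20 per page)
        NextPage.Visible = tu != null && (CurrentPage * 20) < tu.PhotoCount;
    }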

    If you experience any errors or issues, please leave a comment.

    Download: iSurinder.TwitPic.zip (5.15 kb)

  • HTTP Request Script

    In one of my website builds, I needed to load a couple of thousand records from a database permanently into the .NET cache. Even though I set the cache to never expire, it still gets cleared whenever the application pool recycles (currently set to every 24 hours). As you can expect, if a user happens to visit the site soon after the cache has been cleared, they will experience excessive page load times.

    The only way I could avoid this was to set up a Scheduled Task that runs a script to carry out a web request straight after the application pool recycles.

    Luckily, I managed to find a PowerShell script on StackOverflow that will do exactly that:

    #Request the page so the application pool spins up and the cache is repopulated
    #(enter the URL of the site to warm up between the quotes)
    $request = [System.Net.WebRequest]::Create("")
    $response = $request.GetResponse()
    $response.Close()
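
    One way to wire this up is to save the script as a .ps1 file and point the Scheduled Task’s action at powershell.exe, along these lines (the script path is just a placeholder):

    powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\WarmUpCache.ps1"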
    
  • I don’t generally have a problem importing an Excel spreadsheet into one of my SQL Server tables. But today ended my run of Excel-importing perfection.

    I experienced a problem where all rows that only contained numbers were ending up as NULL in my table after the import, which I thought was strange since the spreadsheet did not contain any empty cells. It contained a mixture of data formats: text and numbers.

    I decided to format all rows in my spreadsheet as text and try another import. No change.

    After much experimentation, the solution was to copy all the columns and paste them into Notepad to remove all the formatting inherited from Excel. I then re-copied my data from Notepad back into my spreadsheet and carried out another import. Lo and behold, it worked!

    I still don’t understand why I had this problem. It could have been due to the fact that the spreadsheet contained cells of different data formats, causing confusion during the import process.

  • Back in 2007 I started blogging mainly for one selfish reason - to have an online repository of how I've approached things technically, to refer back to when required. When I find things interesting, I like to document them so I can expand on them later. If a reader wants to expand or contribute to what I’ve posted, they are welcome to do so.

    Blogging soon flourished into something more beneficial and pushed me to better myself in all aspects of web and application development. It turned me from a very introverted cowboy-developer into an extrovert with the confidence to push the boundaries in my day-to-day job, just so I could have a reason to blog about it and publicly display what I know.

    I highly recommend blogging to anyone, especially in the technical industry. Reading other blogs has shown me that a solution to a problem is always open to interpretation. For example, I may find the solution to one of my issues on another site and then expand on it further on my own blog (with references to the original author, of course).

    This year, I decided to take things one step further and joined a well-known open community called StackOverflow. So far, it's been a great experience and I recently broke the 1000-point barrier. It took a lot of blood, sweat and tears. In some ways, knowing how people rate your answers in a forum can show you where your skill set is lacking. I'm sure if I look back on some of my earlier posts, I'll find some shockingly bad suggestions. Thankfully, there are more experienced posters who point you in the right direction.

    StackOverflow Profile - sbhomra

    Blogging and contributing to StackOverflow can also have an unexpected impact - employment. The web development industry is very competitive, and it's up to you to set yourself apart from the rest. Potential employers can gain great insight into what you're capable of, and it demonstrates that you can communicate your technical knowledge.

    If I had known this earlier in my career, I'm sure things would have been different and I would have had the opportunity to find a job in web development sooner. So start early, even if you're still studying at college or university. When the time comes to get a job, you can truly show your potential!