Blog

Blogging on programming and life in general.

  • The “Remove Format” button within FCKeditor only removes valid inline elements, such as strong, span, strike, font and em.

    If you want to make the Remove Formatting function more flexible so that it also removes block elements, you can do so by modifying the “fckconfig.js” file found within the FCKeditor folder.

    Search for the “FCKConfig.RemoveFormatTags” line, which will look something like this:

    FCKConfig.RemoveFormatTags = 'b,big,code,del,dfn,em,font,i,ins,kbd,q,samp,small,span,strike,strong,sub,sup,tt,u,var';
    

    All you need to do now is add any additional elements you wish to remove from your content. In my case, I wanted the Remove Formatting button to remove all header tags, so I carried out the following:

    FCKConfig.RemoveFormatTags = 'b,big,code,del,dfn,em,font,i,ins,kbd,q,samp,small,span,strike,strong,sub,sup,tt,u,var,h1,h2,h3';
    
  • I have been developing custom web parts and SharePoint customisations for a couple of years now. During the early stages of SharePoint development I ran into a great deal of confusion when trying to retrieve information from different areas of an intranet using the SPSite and SPWeb classes.

    I think any other developer starting out in SharePoint development may encounter the same issue. What I found useful was the “Site Architecture and Object Model Overview” diagram from the MSDN site.

    SharePoint Site Architecture

    It nicely breaks down the architecture of a SharePoint site. I highly recommend that you take a look at the following links containing more diagrams:

    Whilst I am talking about SPSites and SPWebs in this post, I’ll give you a quick overview of how they work using the (above) diagram. Basically, the top-level site collection (SPSite) contains your root web (SPWeb) and subsites (SPWebs under SPWebs). So, a subsite is any site (SPWeb) under the top-level web site in a site collection (SPSite).
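
    To make this concrete, below is a minimal sketch of walking that hierarchy in code. The URL is a placeholder for your own site collection, and note that any SPWeb opened through the Webs collection needs disposing:

    using (SPSite siteCollection = new SPSite("http://my-intranet"))
    {
        //The root web of the site collection (disposed along with the SPSite)
        SPWeb rootWeb = siteCollection.RootWeb;

        //Each direct subsite is itself an SPWeb
        foreach (SPWeb subSite in rootWeb.Webs)
        {
            Console.WriteLine(subSite.Title);
            subSite.Dispose();
        }
    }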

  • The recent news of some 150,000 Google Account holders losing their information due to a mishap at Google HQ over the weekend really reinforces the fact that our data is not safe…even in the “cloud”.

    At the end of the day, our information is stored on hardware that can fail. I think this whole “cloud computing” malarkey has lured us all into a false sense of security, where we think we don’t need to take measures to ensure our data is backed up on a regular basis. I have to admit, I too have become a bit lax when it comes to backing up my online data. If a large company like Google can get it wrong, what hope is there for other companies offering the same thing?

    I practically live on the “cloud” in terms of what Google has to offer. I use their email, calendar, document and notebook applications. Even their mobile phone OS: Android! Luckily, there are steps we can take to ensure our data is backed up on our own terms:

    Google Calendar

    Google Calendar is the one application I use the most. If I lost all my data, I would be quite annoyed to say the least (and be very disorganised).

    You can back up all your calendar entries by opening your calendar settings, clicking on “Calendars” and selecting “Export Calendars”. A zip file will be created containing your calendars in iCal format.
     
    Gmail

    This is a simple one. Use a desktop email client such as Thunderbird (or any other client you prefer) to download all your emails directly to your computer through POP access.
     
    Google Docs

    If you only store a handful of documents in your Google Account, you could just download them one-by-one. Understandably, if you have a long list of documents a more automated approach is required.

    Lifehacker.com features a really great script that allows you to download your documents in whatever format you require. Take a look here.
     

    Hooray! Our data is saved!

  • In ASP.NET you would think that the “.Count” property would simply return the total number of elements within a collection. In the majority of cases this is right. Well, apart from when you use “.Count” against a collection of profiles within SharePoint. For example:

    UserProfileManager profileManager = new UserProfileManager(myContext);
    
    //Get total number of profiles
    int numberOfProfiles = profileManager.Count;
    

    I came across two issues when using the code above:

    1. The incorrect number of profiles was returned.
    2. For some reason, when I deployed the code to a live server environment, I kept getting errors from the line where the count was being returned.

    From researching this issue on various blog posts and forums, it seems that UserProfileManager.Count does indeed have issues in returning the count correctly. The only way to get around this is to enumerate through the UserProfileManager:

    UserProfileManager profileManager = new UserProfileManager(myContext);
    
    int counter = 0;
    
    IEnumerator profileEnumerator = profileManager.GetEnumerator();
    while (profileEnumerator.MoveNext())
    {
        counter++;
    }
    
    //Number of profiles
    int totalNumberOfProfiles = counter;
    

    This will give us an accurate number of the profiles stored within SharePoint, without any silly errors.
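
    On a side note, since UserProfileManager implements IEnumerable, the counting loop above can be written more compactly with LINQ. This is just an equivalent sketch, assuming you have a reference to System.Linq:

    using System.Linq;

    //Enumerates the profiles under the hood, just like the while loop above
    int totalNumberOfProfiles = profileManager.Cast<UserProfile>().Count();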

  • I have been building a custom .NET web part page to use in my SharePoint intranet. The .NET page has quite a lot of custom HTML and jQuery design elements, so using CSS and JavaScript files was essential.

    As you know, when we want to use elements from our CSS and JavaScript files we normally add the following lines of HTML at the top of our page:

    <!-- CSS -->
    <link type="text/css" rel="stylesheet" href="site.css" />
    
    <!-- JavaScript -->
    <script src="jQuery.js" type="text/javascript" />
    

    If you add those lines of code to a custom SharePoint page, you’ll find that the page ignores them. Thankfully, SharePoint provides some controls to add these references.

    At this point it’s worth stating that I stored all my required JavaScript and CSS files within the “Style Library” directory situated in the root of any SharePoint 2010 intranet. To reference these files I used the controls “CssRegistration” and “ScriptLink”:

    <!-- CSS -->
    <SharePoint:CssRegistration ID="CssRegistration1" Name="/Style Library/Home/CSS/jcarousel.css" runat="server" After="corev4.css" />
    
    <!-- JavaScript -->
    <SharePoint:ScriptLink ID="ScriptLink1" Name="~sitecollection/Style Library/Home/JS/jquery-1.4.4.min.js" runat="server" />
    

    If you have stored your CSS and JavaScript within the physical file directory situated in the 14 hive folder, you will need to modify the above example to the following:

    <!-- CSS -->
    <SharePoint:CssRegistration Name="<% $SPUrl:~SiteCollection/Style Library/Core Styles/jcarousel.css%>" runat="server"/>
    
    <!-- JavaScript -->
    <SharePoint:ScriptLink ID="ScriptLink1" Name="<% $SPUrl:~SiteCollection/Style Library/Core Styles/jquery.js%>" runat="server" />
    

    The only difference between this example and the earlier one is the addition of “SPUrl”, which resolves file paths relative to the current site collection.

  • I am writing a custom web part that will output user profile information from SharePoint 2010. My code requires quite a few fields. Most of these fields are not “IntelliSense-able” and cannot be accessed directly without manually entering the field name, as you can see from my code snippet below.

    User Profile Properties Code

    But it’s really easy to get the user field properties wrong. A good example is retrieving the office location. You would think the property name would be “OfficeLocation”, but it’s actually called “SPS-Location”.
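
    As a rough illustration (the service context and account name below are placeholders for your own environment), retrieving that property in code looks something like this:

    SPServiceContext context = SPServiceContext.GetContext(SPContext.Current.Site);
    UserProfileManager profileManager = new UserProfileManager(context);
    UserProfile profile = profileManager.GetUserProfile(@"DOMAIN\user.name");

    //"SPS-Location" is the internal property name for the office location
    string officeLocation = profile["SPS-Location"].Value as string;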

    Luckily, SharePoint allows us to view and access all the user profile properties we require, and even to create our own custom fields.

    Let’s start by opening Central Administration and navigating to Manage Service Applications > User Profile Service Application, which will take you to the following page:

    User Profile Service Application

    Click on “Manage User Properties” to view a list of all user field properties SharePoint uses. To either rename the display name or view the actual property name, click on a field and press “Edit”.

    Manage User Properties

    The “Name” field (as highlighted below) is not editable, and for a very good reason too! These are the property IDs that we will call when we want to retrieve their values. None of the default field names are editable.

    Edit User Property

    As I stated earlier, you can create your own properties and call them whatever you want. But SharePoint already provides so many out-of-the-box that you probably won’t need to create any more anytime soon.

  • In my last post, I showed you how to create an Enterprise Search page that consisted of both “Site” and “People” searches. Depending on how you have set up your search within Central Administration, you may find the “People” search not returning any results.

    Before we start, there are a few things you need to check. Firstly, ensure the necessary search services are in working order; if you can carry out site searches you should be fine. Secondly, ensure the User Profile service has been set up correctly, so that features such as MySites and profile databases are working.

    In a straight-forward world, you would think that completing the steps above would be enough for SharePoint 2010 to allow you to search users within your site. But sadly we don’t live in a straight-forward world.

    Open Central Administration and navigate to “Manage Service Applications”. Within the list of services, select “Enterprise Search Service Application”.

    Manage Services Enterprise Search

    In the “Enterprise Search Service Application” page, click on the “Content Sources” link situated in the left-hand navigation and open/edit your “Local SharePoint Sites” content source.

    Manage Content Sources

    In the Start Addresses section, you will see a box with entries similar to what I have in my SharePoint intranet below… well, almost the same:

    Content Sources Start Addresses

    You will notice the line “sps3://my-intranet”, which tells SharePoint to call a specific web service hosted at that web address. In this case, the URL is the same one I use to access my main site collection. When you have added the “sps3://” line yourself, press the “OK” button to save your changes.

    There is just one last step to carry out: re-indexing our search. Navigate back to the “Enterprise Search Service Application” page and start a full crawl.

    Manage Content Sources Recrawl

    Once this has completed, all your user profiles should be searchable.

    Enterprise People Search

  • Hooray! My first SharePoint 2010 blog post!

    I have been lucky enough to start working on my first SharePoint 2010 project. As you may know, things have definitely moved on from SharePoint 2007 to SharePoint 2010. Every new release of SharePoint seems to be a vast improvement over its predecessor that benefits both the end users and developers. But just as things get better and better, you’ll find yourself falling into the common trap of trying to apply what you have learnt in SharePoint 2007 to SharePoint 2010. I know I did.

    A good example of this is having a search page that allows users to search “All Sites” or “People”, something we would see in a SharePoint 2007 search page as standard:

    MOSS 2007 Search

    I was surprised to find out that this wasn’t the search I would get by default. The out-of-the-box SharePoint 2010 search is quite basic, as you can see from the screenshot below:

    Sharepoint 2010 Original Search

    In order to get a search page that includes both Site and People search (or Enterprise Search, as SharePoint 2010 now calls it), you have to carry out an additional step: creating a new site. So, go to “Site Actions” and click on “New Site”. When the popup opens, select the “Search” category and then “Enterprise Search”. Enter a name and URL and click “Create”.

    Sharepoint 2010 New Site Enterprise Search

    If everything goes well, you should see a search page which looks something like this:

    Sharepoint 2010 Enterprise Search Page

    Cool! So you now have the ability to carry out Site and People searches. But you may find the People search will not work if you made the same mistake I did and missed out a key setting in Central Administration. I will blog about that within the next few days. TO BE CONTINUED...

    Post Updated: 30/01/2011 - Enable People Search in SharePoint 2010

  • Over the last few months I have been carrying out endless amounts of research and development to find a way to create my own eCommerce-style search, similar to the likes of what eBay and Amazon use. This is otherwise known as “faceted search”, whereby the search results are filtered through a series of facets belonging to your search criteria. Each facet typically corresponds to the possible values of a property common to a set of objects.

    Sounds very difficult and complex, doesn’t it? Even to this very day, I am sure eBay and Amazon must use some kind of “magic” to get their search to work in such a seamless and efficient manner.

    There are numerous search solutions out there that could help you build this type of search. From my experience, I couldn’t find any low-cost out-of-the-box solutions. The majority of the search vendors were not only very expensive but also required a quote to tailor-make a solution for you.

    In the early stages I tried expanding my Lucene.NET knowledge, but I couldn’t find a flexible way to introduce facets into my search. I must admit I am not exactly an expert in Lucene, which may have played a part in my failing miserably.

    When I thought all was lost and there was no chance in hell of being able to figure this thing out, I luckily came across a few blog and StackOverflow posts by a guy called Mauricio Scheffer. Mauricio is the brains behind SolrNet, a Solr client library built for the .NET Framework. This is one of the strengths of Solr: it can be consumed from other development platforms, such as Python and Ruby.

    SolrNet just happened to be an ideal solution to what I was looking for, and with just over a week’s development I was able to build my own basic search, which looks something like this:

    SolrNet search screenshots

    As you can see from my screenshots, you can carry out a search by report type and/or a global text search. In addition, the showing and hiding of the facet objects is purely dependent on the results returned.

    SolrNet is a very flexible package and I know just enough to implement the basics. But I was really surprised at how well the searches performed, even with the most basic implementation. So I am looking forward to adding additional features over the next few months and perfecting both my Solr search index and code.
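
    To give you a flavour of those basics, here is a rough sketch of a faceted query in SolrNet. The Report class, its field names and the Solr URL are my own placeholder assumptions here; substitute whatever your schema defines:

    using System;
    using Microsoft.Practices.ServiceLocation;
    using SolrNet;
    using SolrNet.Attributes;
    using SolrNet.Commands.Parameters;

    public class Report
    {
        [SolrUniqueKey("id")]
        public int Id { get; set; }

        [SolrField("title")]
        public string Title { get; set; }

        [SolrField("reporttype")]
        public string ReportType { get; set; }
    }

    public class ReportSearchExample
    {
        public static void Run()
        {
            //Wire SolrNet up to the Solr core (do this once at application start)
            Startup.Init<Report>("http://localhost:8983/solr");
            ISolrOperations<Report> solr = ServiceLocator.Current.GetInstance<ISolrOperations<Report>>();

            //Free-text query plus a facet on the report type field
            var results = solr.Query(new SolrQuery("annual accounts"), new QueryOptions
            {
                FacetQueries = new ISolrFacetQuery[] { new SolrFacetFieldQuery("reporttype") }
            });

            //Each facet value comes back with the number of matching documents,
            //which is what drives the showing/hiding of facets in the UI
            foreach (var facet in results.FacetFields["reporttype"])
                Console.WriteLine("{0} ({1})", facet.Key, facet.Value);
        }
    }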

    I won’t be posting the code that I used to create my search, since it’s quite a big project, tailor-made for my database architecture. But here are a few links that I found useful to get me started in the world of SolrNet:

  • Over the last few days I have been doing some research on the best way to implement search functionality for a site I am currently building. The site will consist mainly of news articles. The client wanted a search that would allow a user to search across all fields relating to a news article.

    Originally, I envisaged writing my own SQL to query a few tables within my database to return some search results. But as I delved further into designing the database architecture in the early planning stages, I found that my original (somewhat closed-minded) approach wouldn’t be flexible or scalable enough to search and extract all the information I required.

    From what I have researched, the general consensus is to use either SQL Full Text Search or Lucene.NET. Many favour Lucene due to its richer querying language; it is also generally more flexible, since you have the ability to write a search index tailored to your project. From what I gather, Lucene can work with any type of text data. For example, not only can you index rows in your database, but there are also solutions for indexing physical files in your application. Neat!

    I have written some basic code (below) with a couple of methods to get you started in creating a search index and carrying out a multi-field search across your whole index. You would further enhance this code to only carry out a full index once all required records have been added. Most implementations of Lucene use incremental indexing, where documents already in the index are updated individually rather than deleting the whole index and rebuilding it every time (a sketch of this follows after the main listing). I plan to hook my Lucene code up to a service scheduled to carry out an incremental index every midnight.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Lucene.Net;
    using Lucene.Net.Store;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Index;
    using Lucene.Net.Documents;
    using Lucene.Net.QueryParsers;
    using Lucene.Net.Search;
    using System.Configuration; 
    
    namespace MES.DataManager.Search
    {
        public class LuceneSearch
        {
            public static void IndexSite()
            {
                //The file location of the index
                string indexLocation = ConfigurationManager.AppSettings["SearchIndexPath"];

                //Open the index directory, creating it if it does not already exist
                Directory searchDirectory = FSDirectory.GetDirectory(indexLocation, !System.IO.Directory.Exists(indexLocation));

                //Create an analyzer to process the text
                Analyzer searchAnalyser = new StandardAnalyzer();

                //Create the index writer with the directory and analyzer (true = rebuild the index from scratch)
                IndexWriter indexWriter = new IndexWriter(searchDirectory, searchAnalyser, true);

                //Iterate through the Article table and populate the index
                foreach (Article a in ArticleBLL.GetArticleDetails())
                {
                    Document doc = new Document();

                    doc.Add(new Field("id", a.ID.ToString(), Field.Store.YES, Field.Index.UN_TOKENIZED, Field.TermVector.YES));
                    doc.Add(new Field("title", a.Title, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));
                    doc.Add(new Field("articletype", a.Type.TypeName, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.Summary))
                        doc.Add(new Field("summary", a.Summary, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.ByLineShort))
                        doc.Add(new Field("bylineshort", a.ByLineShort, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.ByLineLong))
                        doc.Add(new Field("bylinelong", a.ByLineLong, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.BasicWords))
                        doc.Add(new Field("basicwords", a.BasicWords, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.MediumWords))
                        doc.Add(new Field("mediumwords", a.MediumWords, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    if (!String.IsNullOrEmpty(a.LongWords))
                        doc.Add(new Field("longwords", a.LongWords, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));

                    //Write the document to the index
                    indexWriter.AddDocument(doc);
                }

                //Optimize and close the writer
                indexWriter.Optimize();
                indexWriter.Close();
            }
    
            public static List<CoreArticleDetail> SearchArticles(string searchTerm)
            {
                Analyzer analyzer = new StandardAnalyzer(); 
    
                //Search by multiple fields
                MultiFieldQueryParser parser = new MultiFieldQueryParser(
                                                                    new string[]
                                                                    {
                                                                        "title",
                                                                        "summary",
                                                                        "bylineshort",
                                                                        "bylinelong",
                                                                        "basicwords",
                                                                        "mediumwords",
                                                                        "longwords"
                                                                    },
                                                                    analyzer); 
    
                Query query = parser.Parse(searchTerm); 
    
                //Create an index searcher that will perform the search
                IndexSearcher searcher = new IndexSearcher(ConfigurationManager.AppSettings["SearchIndexPath"]); 
    
                //Execute the query
                Hits hits = searcher.Search(query);
    
                List<int> articleIDs = new List<int>(); 
    
                //Iterate through the hits and collect all article IDs
                for (int i = 0; i < hits.Length(); i++)
                {
                    Document doc = hits.Doc(i);
    
                    articleIDs.Add(int.Parse(doc.Get("id")));
                } 
    
                //Close the searcher before returning the matching articles
                searcher.Close();

                return ArticleBLL.GetArticleSearchInformation(articleIDs);
            }
    
        }
    }
    

    As you can see, my example allows you to carry out a search across as many fields as you require, which I am sure you will find useful. It took a lot of research to find out how to carry out a multi-field search; the majority of the examples I found on the internet only showed you how to search a single field.
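
    The incremental indexing approach I mentioned earlier could look something like the sketch below. It is only an outline, reusing the same index location and field conventions as the main listing; UpdateDocument deletes any existing document matching the given term and adds the new one in its place:

    public static void UpdateArticleInIndex(Article a)
    {
        //Open the existing index rather than rebuilding it (create = false)
        IndexWriter indexWriter = new IndexWriter(ConfigurationManager.AppSettings["SearchIndexPath"],
                                                  new StandardAnalyzer(), false);

        Document doc = new Document();
        doc.Add(new Field("id", a.ID.ToString(), Field.Store.YES, Field.Index.UN_TOKENIZED, Field.TermVector.YES));
        doc.Add(new Field("title", a.Title, Field.Store.YES, Field.Index.TOKENIZED, Field.TermVector.YES));
        //...add the remaining article fields exactly as in IndexSite()

        //Replace the document whose "id" matches, or add it if it does not exist yet
        indexWriter.UpdateDocument(new Term("id", a.ID.ToString()), doc);

        indexWriter.Close();
    }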

    The main advantage I can see straight away from using Lucene is that, since the search data is held on disk, there is hardly any need to query the database. The only downside I can see is the possibility of a corrupt index causing problems.

    For more information on using Lucene, here are a couple of links that you may find useful to get started (I know I did):

    http://www.codeproject.com/KB/library/IntroducingLucene.aspx
    http://ifdefined.com/blog/post/Full-Text-Search-in-ASPNET-using-LuceneNET.aspx