Blog

Categorised by 'Databases & SQL'.

  • When using the “Deploy to Azure Database” option in Microsoft SQL Server Management Studio to move a database to Azure, you may sometimes come across the following error:

    Error SQL71562: Error validating element [dbo].[f]: Synonym: [dbo].[f] has an unresolved reference to object [server].[database_name].[table_name]. External references are not supported when creating a package from this platform.

    These types of errors are generated because you cannot set up a linked server in Azure, and queries using four-part [server].[database].[schema].[table] references are not supported. I’ve come across a SQL71562 error in the past, but this one was different. Generally, the error details are a lot more helpful and relate to stored procedures or views where a table path contains the database name:

    Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]

    Easy enough to resolve. The error I was getting this time threw me, as it didn’t point me to the object in the database where the conflict resides and would have required me to look through all possible database objects. This would be easy enough to do manually on a small database, but not on a large database consisting of over 50 stored procedures and 30 views. Thankfully, SQL to the rescue...

    To search across all stored procedures and views, you can use the LIKE operator against the system object definitions, searching for the details you can gather from the error message:

    -- Stored Procedures
    -- Substitute database_name with the name reported in the error message.
    -- Note: square brackets act as wildcards in LIKE, so search for the name
    -- without them (or escape them) to avoid false matches.
    SELECT OBJECT_NAME(object_id) AS ProcedureName,
           OBJECT_DEFINITION(object_id) AS ProcedureDefinition
    FROM sys.procedures
    WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%'

    -- Views
    SELECT OBJECT_NAME(object_id) AS ViewName,
           OBJECT_DEFINITION(object_id) AS ViewDefinition
    FROM sys.views
    WHERE OBJECT_DEFINITION(object_id) LIKE '%database_name%'
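
    Since the error in this case points at a synonym, the synonyms themselves are also worth checking. As a rough sketch, something along these lines will list any synonym whose base object uses a four-part reference:

    -- Synonyms
    SELECT name,
           base_object_name
    FROM sys.synonyms
    WHERE base_object_name LIKE '%.%.%.%'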
    
  • Whilst trying to import a .bacpac database exported from the SQL Azure platform into Microsoft SQL Server 2012, I received the following error:

    Internal Error. The internal target platform type SqlAzureDatabaseSchemaProvider does not support schema file version '2.5'. (File: C:\Users\Surinder\Downloads\SurinderAzureDB.bacpac) (Microsoft.Data.Tools.Schema.Sql)

    Now, it seems many people online have been encountering this horrible error, and the majority of my fellow bloggers have suggested that installing the SQL Server Data Tools – December 2012 Update for VS 2010 or 2012 will resolve the issue. Unfortunately, installing this alone did not help me. I found out that I had missed one key part of my SQL Server installation: Service Pack 2!

    Can't believe this one small oversight caused me such a headache.

    You can download Microsoft SQL Server 2012 SP2 from the following link: https://www.microsoft.com/en-gb/download/details.aspx?id=43340.

  • It's a shame that I have come quite late in the game when it comes to fully utilising the benefits of stored procedures. I am more than familiar with the general uses, such as CRUD operations and passing and returning parameters. Standard stuff!

    When I decided to work on a new project, I made a conscious decision to place all my database queries within stored procedures for the following reasons:

    • To broaden my current knowledge.
    • Remove all queries from web application level for maintainability.
    • Attempt to use logic in my stored procedures to do the brunt of the processing in one database call.
    • Increase speed of execution and performance when a procedure is run multiple times.

    Out of all the points listed above, my main aim was to try and minimise my database calls and get the data in a format that can be used directly without any further manipulation at application level.

    As you can see from the following stored procedure, I am carrying out checks to ensure specific data elements exist prior to carrying out further database operations, thus hopefully improving performance and efficiency.

    CREATE PROCEDURE spLogTagActivity
    (
        @ItemPrimaryHashtag AS nvarchar(100),
        @Hashtag AS nvarchar(100),
        @TagCategoryID AS int
    )
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;
        
        DECLARE @TagID AS int
        DECLARE @TagIsIgnored AS bit
        DECLARE @ItemID AS int
        DECLARE @ItemTagID AS int
            
        --Get TagID and Tag Ignore value based on the text value
        SELECT @TagID = ID, @TagIsIgnored = IsIgnored FROM Tag WHERE TextLowercase = LOWER(@Hashtag) COLLATE SQL_Latin1_General_CP1_CS_AS
        
        SET @ItemID = (SELECT ID FROM Item WHERE PrimaryHashtag = @ItemPrimaryHashtag)
    
        --If an Item can be found based on the Primary Hashtag, we will use the data.
        IF (@ItemID IS NOT NULL)
        BEGIN
            IF (@TagID IS NULL AND @TagIsIgnored IS NULL)    
            BEGIN
                --Only insert tags that are not the same as the Item Primary Hashtag
                IF (LOWER(@ItemPrimaryHashtag) <> LOWER(@Hashtag))
                BEGIN
                    EXEC spInsertTag @TagCategoryID, @Hashtag, 0, 0, NULL, NULL, NULL, 0, @TagID OUTPUT
                END
                                            
                --Associate new tag to the Item.
                IF (@TagID IS NOT NULL)
                    BEGIN
                        EXEC spInsertSingleItemTag @TagID, @ItemID, 1, @ItemTagID OUTPUT
                                                        
                        --Create a new Activity line for the Item.
                        EXEC spInsertTagTracking @ItemTagID
                    END
            END    
            ELSE
            BEGIN
                --If a Tag has already been logged, we will create a new Activity line for the Item
                --only if it does not have an ignored status
                IF (@TagID IS NOT NULL AND @TagIsIgnored <> 1)    
                BEGIN
                    --Associate the Tag to an Item, if it doesn't already exist.
                    SET @ItemTagID = (SELECT ID FROM ItemTag WHERE TagID = @TagID AND ItemID = @ItemID)
                    
                    IF (@ItemTagID IS NULL)
                    BEGIN
                        EXEC spInsertSingleItemTag @TagID, @ItemID, 1, @ItemTagID OUTPUT                                        
                    END        
                                        
                    EXEC spInsertTagTracking @ItemTagID    
                END
            END
        END
        ELSE
        BEGIN
            IF (@TagID IS NULL AND @TagIsIgnored IS NULL)    
            BEGIN    
                EXEC spInsertTag @TagCategoryID, @Hashtag, 0, 0, NULL, NULL, NULL, 0, @TagID OUTPUT    
            END
        END
    END
    GO
    

    I won't go into detail on what this sample stored procedure is doing, but I am delivering different outcomes based on the conditional statements. This is something that could have been done at application level, but it's much nicer to do it in one database call. In other procedures I have utilised CASE and WHILE operators; a rough idea of the CASE usage is sketched below.
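
    For example, a CASE expression lets the database hand back data that is already categorised, rather than post-processing it in application code. A made-up sketch against the Tag table used above:

    SELECT TextLowercase,
           CASE
               WHEN IsIgnored = 1 THEN 'Ignored'
               ELSE 'Active'
           END AS TagStatus
    FROM Tag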

    The benefits of using stored procedures far outweigh the negatives...Negatives!? Yes, negatives...Well, I just have one small gripe. If you create a stored procedure that contains quite a bit of meaty logic and find that something is not working (even though it is syntactically correct), it's quite difficult to debug exactly at what point the procedure is failing.

    To end the post, here's something surprising I found: the execution time of a simple SELECT query can be slow when it is run from within a stored procedure, even though the query itself is fast. This is something I've encountered myself when wanting to retrieve a single value from one of my tables. The only way to get around this is by assigning your query parameter to a local variable inside the stored procedure, which stops the cached plan being built around the sniffed parameter value. Take a look at this answer on StackOverflow. Strange! A rough sketch of the workaround is below.
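
    For illustration, here's a minimal sketch of that local-variable workaround (the procedure, table, and column names are made up for the example):

    CREATE PROCEDURE spGetDownloadCountByDate
    (
        @DownloadDate AS datetime
    )
    AS
    BEGIN
        -- Copy the parameter into a local variable so the cached plan is not
        -- built around the specific value the procedure was first called with.
        DECLARE @LocalDownloadDate AS datetime
        SET @LocalDownloadDate = @DownloadDate

        SELECT COUNT(DownloadID)
        FROM Download
        WHERE DownloadDate = @LocalDownloadDate
    END
    GO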

  • Grouping records by their respective month and year based on a date column can be done really easily by simply running the following SQL query:

    SELECT
           DATEPART(MONTH, DownloadDate) AS Month,
           DATEPART(YEAR, DownloadDate) AS Year,
           COUNT(DownloadID) AS NumberOfItems
    FROM
           Download
    GROUP BY
           DATEPART(YEAR, DownloadDate), DATEPART(MONTH, DownloadDate)
    ORDER BY
           Year DESC, NumberOfItems DESC
    

    As you can see from the query (above), I am using the "DownloadDate" column to group my records and to also get a count of how many records belong to that month and year.

  • I don’t generally have a problem importing an Excel spreadsheet into one of my SQL Server tables. But today would end my run of Excel importing perfection.

    I experienced a problem where all rows that only contained numbers were ending up as NULL in my table after the import, which I thought was strange since the Excel spreadsheet did not contain empty cells. It contained a mixture of data formats: text and numbers.

    I decided to format all rows in my spreadsheet as text and try another import. No change.

    After much experimentation, the solution was to copy all columns and paste them into Notepad in order to remove all formatting inherited from Excel. I then re-copied all my data from Notepad back into my spreadsheet and carried out another import. Lo and behold, it worked!

    I don’t understand why I had this problem. It could have been due to the fact that the spreadsheet contained cells of different data formats, causing confusion during the import process.

  • UPDATE queries are great for modifying all the information stored within a column. But what if we only needed to update a specific phrase or word within that column?

    Well, we can use an UPDATE query along with the REPLACE function. It can be used like this:

    UPDATE <table_name>
    SET <column_name> = Replace(<column_name>, 'old phrase', 'new phrase')
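
    For example, a hypothetical query to correct a misspelt word throughout a Blog table's PostContent column (the table and column names are purely illustrative):

    UPDATE Blog
    SET PostContent = REPLACE(PostContent, 'recieve', 'receive')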
    
  • I always used NULL and DBNull interchangeably in my code around my database results without ever considering what the differences were. Fortunately, I stumbled upon a great blog post written by Bilal Haidar, "Difference between NULL and DBNull".

    If I understand correctly, you use DBNull to check whether a certain field in a database is null, and you would use NULL to check whether a whole record is not found in the database.

  • "Timeout expired" in SQL Server 2005

    If you are running a query that needs to make changes to quite a few rows within a database (in my case, 8,700 rows), you might get an error exception pop up which says the following:

    "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding."

    After some investigating I found that there are two ways to overcome this error:

    1. Use a Database Engine Query (formerly known as Query Analyser) to run your query instead of using the SQL panel in table view. I think that the SQL query panel within table view is only intended for smaller query operations. That's just my guess, because it didn't work for me. When using a Database Engine Query, there is pretty much no limit to the amount of data you can work with.

    2. Override connection timeout in SQL Server Options:


    I found an option to change the timeout limit in the place I least expected. Under the "Designers" selection within SQL Server Options, you will be able to change the timeout amount. Note that you cannot uncheck the override option; you will have to specify the number of seconds you require.

  • ASP.NET Login Authentication Problem

    I was trying to create a Login page for my website in ASP.NET a couple of days ago, and I was stumped as to why the following piece of code did not work:

    private bool SiteLevelCustomAuthenticationMethod(string User, string Password)
    {
        bool boolReturnValue = false;
        DataTable dtAuthUsers = SomeBLL.GetUsers();

        if (dtAuthUsers != null && dtAuthUsers.Rows.Count > 0)
        {
            DataView dvAuthUsers = dtAuthUsers.DefaultView;

            foreach (DataRowView drvAuthUsers in dvAuthUsers)
            {
                if (User == drvAuthUsers.Row["User"].ToString() && Password == drvAuthUsers.Row["Password"].ToString())
                {
                    boolReturnValue = true;
                }
            }
        }

        return boolReturnValue;
    }

    protected void LoginControl_Authenticate(object sender, AuthenticateEventArgs e)
    {
        bool Authenticated = false;
        Authenticated = SiteLevelCustomAuthenticationMethod(LoginControl.UserName, LoginControl.Password);
        e.Authenticated = Authenticated;

        if (Authenticated == true)
        {
            FormsAuthentication.RedirectFromLoginPage(string.Format("{0}", LoginControl.UserName), false);
        }
    }
    

    Now, after numerous debug sessions on this code, I could not find a thing wrong with it. The correct Username and Password were getting passed to the database, but the 'SiteLevelCustomAuthenticationMethod' function was still returning false.

    What was causing this problem was actually quite simple (even though it had taken me a long time to solve!). Basically, the Users table in my database had the following columns:

    1. User_ID ------> DataType: int
    2. Username ------> DataType: nvarchar
    3. Password ------> DataType: char

    Now, this table looks alright, doesn't it? Well, it isn't. The problem lies within the 'Password' column. Since this column's format is 'char' with a length of 25, when you enter a password that is less than 25 characters in length, spaces will be added after it to pad out the data length. For example, if you had a password that was 10 characters long, an extra 15 characters of spacing would be added after your password.

    In order to fix this, I changed the Password column's data type to 'nvarchar', which solved the problem.
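
    A quick, illustrative snippet demonstrates the padding behaviour (the variable names are made up for the example):

    DECLARE @FixedLength char(25)
    DECLARE @VariableLength nvarchar(25)

    SET @FixedLength = 'Password10'
    SET @VariableLength = 'Password10'

    -- char pads the stored value with trailing spaces up to its declared length,
    -- whereas nvarchar only stores the characters entered.
    SELECT DATALENGTH(@FixedLength) AS CharBytes,         -- 25
           DATALENGTH(@VariableLength) AS NvarcharBytes   -- 20 (10 characters x 2 bytes each)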

  • UNION ALL

    The UNION command is used to select related information from two tables, much like the JOIN command. However, when using the UNION command, the selected columns in each query need to match in number and be of compatible data types.

    One of my colleagues at work showed me something that I never knew about the UNION SQL statement. By default, the UNION statement does the exact same thing as a SELECT DISTINCT on the resulting data. But what if we want to carry out a UNION without de-duplicating the result? That's where UNION ALL comes into play.

    Example

    Table 1: tblCars-UK

    | Car_ID | Car_Name   |
    | ------ | ---------- |
    | 1      | Volvo      |
    | 2      | Volkswagen |
    | 3      | Chevrolet  |
    | 4      | Nissan     |
    | 5      | BMW        |

    Table 2: tblCars-US

    | Car_ID | Car_Name  |
    | ------ | --------- |
    | 1      | Pontiac   |
    | 2      | Chrysler  |
    | 3      | Chevrolet |
    | 4      | Dodge     |
    | 5      | BMW       |

    If we used a UNION statement, the results would be as follows:

    SELECT Car_Name FROM [tblCars-UK]
    UNION
    SELECT Car_Name FROM [tblCars-US]
    

    | Car_Name   |
    | ---------- |
    | Volvo      |
    | Volkswagen |
    | Chevrolet  |
    | Nissan     |
    | BMW        |
    | Pontiac    |
    | Chrysler   |
    | Dodge      |

    As you can see from the results above, the duplicate car entries have been removed and are only displayed once. Now, this is what will happen if we use the UNION ALL statement:

    SELECT Car_Name FROM [tblCars-UK]
    UNION ALL
    SELECT Car_Name FROM [tblCars-US]
    

    | Car_Name   |
    | ---------- |
    | Volvo      |
    | Volkswagen |
    | Chevrolet  |
    | Nissan     |
    | BMW        |
    | Pontiac    |
    | Chrysler   |
    | Chevrolet  |
    | Dodge      |
    | BMW        |