Blog

Posts tagged by: "sql"

Error - Platform type SqlAzureDatabaseSchemaProvider does not support schema file version '2.5'

Posted in: Databases & SQL

Whilst trying to import a .bacpac database exported from an SQL Azure platform onto Microsoft SQL Server 2012, I received the following error:

Internal Error. The internal target platform type SqlAzureDatabaseSchemaProvider does not support schema file version '2.5'. (File: C:\Users\Surinder\Downloads\SurinderAzureDB.bacpac) (Microsoft.Data.Tools.Schema.Sql)

Now, it seems many people online have encountered this horrible error, and the majority of my fellow bloggers have suggested that installing the SQL Server Data Tools – December 2012 Update for VS 2010 or 2012 will resolve the issue. Unfortunately, installing this alone did not help me. I found out that I had missed one key part of my SQL Server installation: Service Pack 2!

Can't believe this one small oversight caused me such a headache.

You can download Microsoft SQL Server 2012 SP2 from the following link: https://www.microsoft.com/en-gb/download/details.aspx?id=43340.

Tools Every Azure Developer Should Be Using

Posted in: Azure

Having developed quite a few websites in Azure, there are some key tools I found that made my life easier when accessing all areas of my Azure cloud instance. The great thing about the selection of tools I have listed below is that it gives me access to all the features I need wrapped in a nice interface.

So let's get to it!

Azure Storage Explorer

Azure Storage Explorer is a useful tool for inspecting and altering the data in your Azure storage projects, including the logs of your cloud-hosted applications. This includes:

  • Blobs
  • Queues
  • Tables

Unlike some of the storage explorer software I've used in the past, Azure Storage Explorer allows you to preview a blob directly through its interface, such as images, videos or text files. So you don't have to waste time downloading a blob just to check if it's been generated correctly. An amazing time saver!

Once you have your storage set up within your Azure account, you can use this application to manage everything: create, view, copy, rename and delete all three storage types (listed above).

Azure Storage Explorer

An application as full featured as this shouldn't be free. But luckily for us, it is.

Download: https://azurestorageexplorer.codeplex.com/

Azure User Management Console

Azure User Management Console manages the users and logins of an Azure SQL database. The tool simply converts your actions into T-SQL commands and executes them against an Azure database of your choice.

Azure User Management Console

What some beginner Azure developers do is use the same master credentials that are assigned to the database on creation within their web application too. Of course, this master user has full "db_owner" privileges against the database. Not a good idea! This application allows you to create a new user with restricted access levels really easily.
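As a sketch of the kind of T-SQL such a tool generates behind the scenes, the following creates a login and a user limited to read/write access instead of "db_owner" (the login, user and password names here are made up for illustration):

```sql
-- Run against the master database: create a server-level login.
CREATE LOGIN WebAppLogin WITH PASSWORD = 'Str0ng!Passw0rd';

-- Run against your application database: create a user for that login
-- and grant it read/write access only, rather than "db_owner".
CREATE USER WebAppUser FOR LOGIN WebAppLogin;
EXEC sp_addrolemember 'db_datareader', 'WebAppUser';
EXEC sp_addrolemember 'db_datawriter', 'WebAppUser';
```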

Download: https://aumc.codeplex.com/

Redgate SQL Azure Backup

One thing I found lacking in Azure SQL databases is the ease of creating a regular backup. There doesn't seem to be an automated way to do this directly through the Azure account.

I've been toying around with Redgate's Azure backup service and it seems to do the job quite nicely. But it does come at a price: a daily backup of one database will cost around £7 per month.

Full range of backup plans: http://cloudservices.red-gate.com/

CloudXplorer

Whenever I needed to take a quick look at any of my blob containers, Azure Storage Explorer would suffice in the majority of cases. However, the one thing I've started noticing is that it lacks an efficient way to export a batch of files from a blob to local storage.

CloudXplorer by ClumsyLeaf Software made browsing files within my blob container a breeze. All files were organised and displayed in a folder structure, allowing me to download specific directories. The slick UI alone makes CloudXplorer a pleasure to use, especially if you have a blob that is large in volume.

I have downloaded around 200MB worth of files from one of my blobs to a local drive without any issue.

Underestimating The Use of Stored Procedures

Posted in: Databases & SQL

It's a shame that I have come quite late in the game when it comes to fully utilising the benefits of stored procedures. I am more than familiar with the general uses, such as CRUD operations and passing and returning parameters. Standard stuff!

When I decided to work on a new project, I made a conscious decision to place all my database queries within stored procedures for the following reasons:

  • To broaden my current knowledge.
  • Remove all queries from web application level for maintainability.
  • Attempt to use logic in my stored procedures to do the brunt of the processing in one database call.
  • Increase speed of execution and performance for when a procedure is run multiple times.

Out of all the points listed above, my main aim was to try and minimise my database calls and get the data in a format that can be used directly without any further manipulation at application level.

As you can see from the following stored procedure, I am carrying out checks to ensure specific data elements exist prior to carrying out further database operations. Thus, hopefully, improving performance and efficiency.

CREATE PROCEDURE spLogTagActivity
(
    @ItemPrimaryHashtag AS nvarchar(100),
    @Hashtag AS nvarchar(100),
    @TagCategoryID AS int
)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    
    DECLARE @TagID AS int
    DECLARE @TagIsIgnored AS bit
    DECLARE @ItemID AS int
    DECLARE @ItemTagID AS int
        
    --Get TagID and Tag Ignore value based on the text value
    SELECT @TagID = ID, @TagIsIgnored = IsIgnored FROM Tag WHERE TextLowercase = LOWER(@Hashtag) COLLATE SQL_Latin1_General_CP1_CS_AS
    
    SET @ItemID = (SELECT ID FROM Item WHERE PrimaryHashtag = @ItemPrimaryHashtag)

    --If an Item can be found based on the Primary Hashtag, we will use the data.
    IF (@ItemID IS NOT NULL)
    BEGIN
        IF (@TagID IS NULL AND @TagIsIgnored IS NULL)    
        BEGIN
            --Only insert tags that are not the same as the Item Primary Hashtag
            IF (LOWER(@ItemPrimaryHashtag) <> LOWER(@Hashtag))
            BEGIN
                EXEC spInsertTag @TagCategoryID, @Hashtag, 0, 0, NULL, NULL, NULL, 0, @TagID OUTPUT
            END
                                        
            --Associate new tag to the Item.
            IF (@TagID IS NOT NULL)
                BEGIN
                    EXEC spInsertSingleItemTag @TagID, @ItemID, 1, @ItemTagID OUTPUT
                                                    
                    --Create a new Activity line for the Item.
                    EXEC spInsertTagTracking @ItemTagID
                END
        END    
        ELSE
        BEGIN
            --If a Tag has already been logged, we will create a new Activity line for the Item
            --only if it does not have an ignored status
            IF (@TagID IS NOT NULL AND @TagIsIgnored <> 1)    
            BEGIN
                --Associate the Tag to an Item, if it doesn't already exist.
                SET @ItemTagID = (SELECT ID FROM ItemTag WHERE TagID = @TagID AND ItemID = @ItemID)
                
                IF (@ItemTagID IS NULL)
                BEGIN
                    EXEC spInsertSingleItemTag @TagID, @ItemID, 1, @ItemTagID OUTPUT                                        
                END        
                                    
                EXEC spInsertTagTracking @ItemTagID    
            END
        END
    END
    ELSE
    BEGIN
        IF (@TagID IS NULL AND @TagIsIgnored IS NULL)    
        BEGIN    
            EXEC spInsertTag @TagCategoryID, @Hashtag, 0, 0, NULL, NULL, NULL, 0, @TagID OUTPUT    
        END
    END
END
GO

I won't go into detail on what this sample stored procedure is doing, but I am delivering different outcomes based on the conditional statements. This is something that could have been done at application level, but it is much nicer in one database call. In other procedures I have utilised CASE expressions and WHILE loops.

The benefits of using stored procedures far outweigh the negatives... Negatives!? Yes, negatives... Well, I just have one small gripe. If you create a stored procedure that contains quite a bit of meaty logic and find that something is not working (even though it is syntactically correct), it's quite difficult to debug at exactly what point the procedure is failing.

To end the post, here's something surprising I found: the execution time of a simple SELECT query can be slow when used within a stored procedure, even though the query itself is fast. This is something I've encountered myself when wanting to retrieve a single value from one of my tables, commonly caused by what is known as "parameter sniffing". The easiest way to get around it is to copy your query parameter into a local variable inside the stored procedure. Take a look at this answer on StackOverflow. Strange!
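As a rough illustration of that workaround (the procedure, table and column names here are hypothetical), the parameter is copied into a local variable before being used in the query:

```sql
CREATE PROCEDURE spGetDownloadCount
(
    @UserID AS int
)
AS
BEGIN
    -- Copying the parameter into a local variable stops SQL Server
    -- from reusing a cached plan built around one specific parameter value.
    DECLARE @LocalUserID AS int
    SET @LocalUserID = @UserID

    SELECT COUNT(*) AS NumberOfDownloads
    FROM Download
    WHERE UserID = @LocalUserID
END
```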

Get Record Count Grouped By Month and Year

Posted in: Databases & SQL

Grouping records by their respective month and year based on a date column can be done really easily by simply running the following SQL query:

SELECT
       DATEPART(MONTH, DownloadDate) AS Month,
       DATEPART(YEAR, DownloadDate) AS Year,
       COUNT(DownloadID) AS NumberOfItems
FROM
       Download
GROUP BY
       DATEPART(YEAR, DownloadDate), DATEPART(MONTH, DownloadDate)
ORDER BY
       Year DESC, NumberOfItems DESC

As you can see from the query above, I am using the "DownloadDate" column to group my records and to get a count of how many records belong to each month and year.
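If you only wanted to report on the busier months, the same query can be extended with a HAVING clause, which filters on the aggregated count after grouping (the threshold of 10 is just an example):

```sql
SELECT
       DATEPART(MONTH, DownloadDate) AS Month,
       DATEPART(YEAR, DownloadDate) AS Year,
       COUNT(DownloadID) AS NumberOfItems
FROM
       Download
GROUP BY
       DATEPART(YEAR, DownloadDate), DATEPART(MONTH, DownloadDate)
HAVING
       COUNT(DownloadID) >= 10
ORDER BY
       Year DESC, NumberOfItems DESC
```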

Renaming A Document Type Table Using SQL

Posted in: Kentico

Don't you just hate it when you've created a Document Type in Kentico that is full of data and, for some reason, you need to rename it? I know I do.

Even though Kentico does give you the ability to rename your Document Types via its interface, the old table name still exists within the SQL database. This may not bother some developers, but it makes me feel downright dirty.

I came across an SQL script Matt Lee wrote in his blog that takes the hassle out of renaming the tables manually within a Kentico database.

Definitely check it out. It has saved me a lot of time.

Source: SQL Script To Rename A Document Type Table

Count Message board Messages for a Document in Kentico

Posted in: Kentico

I’ve been using Message Boards for some page templates within my Kentico site. I needed to count the number of messages for each document to be displayed on the homepage. I couldn’t find a way to do this using Kentico’s API.

The only way I could retrieve the message count for a document is to query the database directly. The SQL query is as follows.

SELECT COUNT(*)
FROM Board_Board
JOIN Board_Message ON Board_Board.BoardID = Board_Message.MessageBoardID
WHERE
Board_Message.MessageApproved = 1 AND Board_Board.BoardDocumentID = @DocumentID

If anyone knows how to achieve the exact same thing through the API, please leave a comment.

Replace One Word or Phrase Through An Update Query

Posted in: Databases & SQL

UPDATE queries are great for modifying all the information stored within a column. But what if we only need to update a specific phrase or word within the column?

Well, we can use an UPDATE query along with a REPLACE function. It can be used like this:

UPDATE <table_name>
SET <column_name> = REPLACE(<column_name>, 'old phrase', 'new phrase')
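As a quick hypothetical example (the table and column names are made up for illustration), this would correct a phrase across all blog posts. Adding a WHERE clause with LIKE means only rows actually containing the old phrase are modified:

```sql
UPDATE BlogPost
SET PostContent = REPLACE(PostContent, 'SQL Azure', 'Azure SQL Database')
WHERE PostContent LIKE '%SQL Azure%'
```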

What's The Difference Between NULL and DBNull

I always used NULL and DBNull interchangeably in my code when working with database results, without ever considering the differences. Fortunately, I stumbled upon a great blog post written by Bilal Haidar: "Difference between NULL and DBNull".

If I understand correctly, you use DBNull to check if a certain field in a database is null, and you would use NULL to check if a whole record is not found in the database.

"Timeout expired" in SQL Server 2005

Posted in: Databases & SQL

If you are running a query that needs to make changes to quite a few rows within a database (in my case, 8,700 rows), you might get an error exception pop up which says the following:

"Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding."

After some investigating I found that there are two ways to overcome this error:

1) Use a Database Engine Query (formerly known as Query Analyzer) to run your query instead of using the SQL panel in table view. I think that the SQL query panel within table view is only intended for smaller query operations. That's just my guess, because it didn't work for me. When using a Database Engine query, there is pretty much no limit to the amount of data you can change.

2) Override connection timeout in SQL Server Options:


I found the option to change the timeout limit in the last place I expected: under the "Designers" section within SQL Server Options. Note that you cannot uncheck the override amount; you will have to specify the number of seconds you require.

UNION ALL

Posted in: Databases & SQL

The UNION command is used to combine the results of two SELECT statements into a single result set. However, when using the UNION command, all selected columns need to be of the same data type.

One of my colleagues at work showed me something that I never knew about the UNION statement. By default, the UNION statement does the exact same thing as a SELECT DISTINCT on the resulting data. But what if we want to carry out a UNION without de-duplicating the result set? That's where UNION ALL comes into play.

Example

Table 1: tblCars-UK 

Car_ID Car_Name
1 Volvo
2 Volkswagen
3 Chevrolet
4 Nissan
5 BMW

Table 2: tblCars-US

Car_ID Car_Name
1 Pontiac
2 Chrysler
3 Chevrolet
4 Dodge
5 BMW

If we used a UNION statement, the results would be as follows:

Select car_name from [tblCars-UK]
UNION
Select car_name from [tblCars-US]

Car_Name
Volvo
Volkswagen
Chevrolet
Nissan
BMW
Pontiac
Chrysler
Dodge

As you can see from the results above, the duplicate car entries have been removed and are only displayed once. Now, this is what will happen if we use the UNION ALL statement:

Select car_name from [tblCars-UK]
UNION ALL
Select car_name from [tblCars-US]

Car_Name
Volvo
Volkswagen
Chevrolet
Nissan
BMW
Pontiac
Chrysler
Chevrolet
Dodge
BMW