
Why you should upgrade to SQL Server 2014:

I shared my blog with my brother Zulfi Haja, who is also a SQL Server consultant and works for an investment bank in London, UK. Having reviewed the top 5 features of SQL Server 2014, he shot me an email saying, "Bro, the effort you put into summarizing the top 5 features was worth reading and it's a neat approach. I would definitely recommend it to my colleagues, but I am still not convinced about an upgrade to SQL Server 2014. Can you help?"

I thought only a Tech Evangelist from Microsoft would be able to answer him. I took his question with a little hesitation and replied, "I don't know how long it will take, but I will surely get back to you once I find a convincing answer." He said, "All the best!!"

I looked for user experiences with the 2014 Enterprise edition across my network and over the blogs, but it was of no use. In the beginning of May I joined a local SQL Server user group and they invited me to their monthly meeting. To my surprise there was a presentation about In Memory OLTP and SQL Server 2014 by a Product Specialist from Microsoft. I said to myself, I have found the guy: this Product Specialist and Tech Evangelist should be the one to answer my question. His presentation was very convincing and I found the answer in his talk. When the Q&A session opened after the presentation, I asked him the same question and he gave me the answer I expected from his talk: SQL Server 2014 can now support Tier 1 applications.

Let me elaborate. If you have hung out with developer crowds from different domains, they carry a preconceived notion and will give you an analogy: Oracle is like the business class of a flight, where major critical applications are built, and SQL Server is like the economy class, used by mid-tier companies. In fact, in 60-70% of my projects the front-end applications were supported by Oracle, and the data was replicated to our SQL Server environments for reporting and analysis purposes. With SQL Server 2014 that notion no longer holds: as per my conversation with the Tech Evangelist, SQL Server 2014 will be good enough to support Tier 1 applications and compete with Oracle-backed systems because of its new database engine design. He also hinted, if NASDAQ uses SQL Server, why not your applications?

So Oracle folks better be aware of it :)

#JunaithHaja

Subversion in Visual Studio 2013 using TortoiseSVN and AnkhSVN:

I would highly recommend that a developer understand the concept of Subversion before accessing any projects stored in a repository.

Subversion is a version control technology developers use to store their source code, .proj and .sln files, web pages and other project-related files in a repository. Developers working in a team can download files from and upload files to the repository with any code changes, thereby providing a single point of access. Subversion belongs to the Software Configuration and Change Management field. Subversion itself is an Apache project, and different client products like TortoiseSVN are available in the market. We will use TortoiseSVN for the illustration below.

TortoiseSVN is an open-source Subversion client. We create our repository using the TortoiseSVN software, and we use the AnkhSVN plugin to access the repository through Visual Studio. In other words, AnkhSVN is an add-in that integrates Subversion support into Microsoft Visual Studio.

First, download and install the latest version of TortoiseSVN 1.8.7 and AnkhSVN 2.5.124751. The installation is a straightforward process.

You can verify the TortoiseSVN installation by right-clicking in any Windows Explorer window; you should see options for SVN Checkout and TortoiseSVN in the context menu. The AnkhSVN installation can be verified by opening Visual Studio and clicking File; you should see an option for Subversion.

In most real-world scenarios a repository will already exist, so let's jump to how to access an existing repository.

Copy a Read-only Solution from the Repository:

If you want to get a read-only copy of a solution file:

Step 1: Create an empty folder at your desired location. Let’s call it SQLCASTReadCopy.

Step 2: Right-click the folder, select TortoiseSVN and click Export.

Export a Project File from Repository using TortoiseSVN

Step 3: Enter the location of the solution/project file in the URL of repository drop-down and the location of the folder SQLCASTReadCopy created above in the Export directory field, then click OK. A copy will be downloaded from the repository to that folder.

Export an SSIS Solution from Repository using TortoiseSVN

Now you can access the downloaded copy from your local machine, but you will not be able to write any changes back to the repository.

Copy a Read/Write Solution from the Repository:

Step 1: Create an empty folder at your desired location and call it SQLCASTWriteCopy

Step 2: Open Visual Studio 2013 and click File -> Open -> Open from Subversion.

Step 3: Enter the repository location of the solution/project file in the From field of the dialog box, enter the location of the local folder SQLCASTWriteCopy created in Step 1 in the To field and click OK.

Now you will have a read/write copy downloaded to the above location.

Your changes can be written to the repository by clicking COMMIT and UPDATE in the Pending Changes window of Visual Studio 2013.

In any of the above processes, if a dialog box prompts for your NT login and password, enter it and it will be validated against Active Directory.

Hope it helps :-)

 #JunaithHaja

Delete last two characters of a Column in SQL Server 2014:

One of my colleagues asked me this question: is there a way to generalize a function to remove the last two (n) characters from a column if the column doesn't have a fixed length? Yes, it can be done as below.

Syntax: LEFT(ColumnName, LEN(ColumnName) - n)

The expression calculates the length of the string and removes the last n characters using the LEFT() function.

Here is the kicker I wanted to share with you: is it possible to flip the above expression using RIGHT() to remove the first n characters from a column (string)? Yes, it is possible :-)

Syntax: RIGHT(ColumnName, LEN(ColumnName) - n)
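As a quick sanity check, here is a minimal sketch against a literal value; the string 'ABC12345' is made up for illustration and is not from the sample data below.

SELECT LEFT('ABC12345', LEN('ABC12345') - 2) AS RemovedLastTwo,    -- returns 'ABC123'
       RIGHT('ABC12345', LEN('ABC12345') - 3) AS RemovedFirstThree -- returns '12345'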

I made up sample data using the AdventureWorks Transaction table in my SQL Server 2014 instance. Imagine you have the TransactionID and ProductID concatenated into a TranProdID column. Removing the last two characters gives you the TransactionID and removing the first three characters gives you the ProductID.

The query can be written as

SELECT TranProdID,
       LEFT(TranProdID, LEN(TranProdID) - 2) AS TransactionID,
       RIGHT(TranProdID, LEN(TranProdID) - 3) AS ProductID
FROM [SQLCAST2014].[Production].[Transaction]

Here is the snippet from my machine,

Remove Last Two Characters from a Column

Connect Any Database Permission in SQL Server 2014:

Microsoft has introduced three new permissions, namely Connect Any Database, Impersonate Any Login and Select All User Securables, in SQL Server 2014. The CONNECT ANY DATABASE permission seemed the most interesting to me from a developer perspective, as we have all come across situations where having this permission could have avoided a sad face :-(. In our daily work we touch different servers and databases, but the sad reality is that a developer doesn't have permission to access every database on every server he works on. A classic example is a developer working for the marketing team who has access only to a Marketing database and may not have access to the Finance database. So what happens if he accidentally hits the Finance database? A dialog box opens with a message saying "The database is not accessible", which has caused a sad face in me many times. The CONNECT ANY DATABASE permission is the solution: it allows the developer to connect to any database on the server without giving him access to the underlying database objects, thereby preventing the dialog box.

Let’s try demonstrating it.

I created a SQL Server Authentication login for this demonstration and named it SqlCastTest. Its default permissions can be viewed by right-clicking the login, opening its Properties window and selecting the Securables tab. The effective permissions can be seen below.

Default Permission

Let's try opening the database SQLCAST2014 through this login in Object Explorer; it will pop up an error message as follows.

Error Message in SQL Server 2014

 

Let's give the CONNECT ANY DATABASE permission to our SqlCastTest login by executing the below script.

USE [master]
GO
GRANT CONNECT ANY DATABASE TO [SqlCastTest]
GO

Now let’s try connecting to the same database and it will let us connect to it as follows.

Connection Established through Connect Any Database Permission

 

To verify, let's check the new permission available on the Securables tab of the login.

Connect Any Database Permission Check at Login Level

You can see that CONNECT ANY DATABASE is effective and associated with our test login as below. Keep in mind, CONNECT ANY DATABASE will not allow access to the database objects; it is only a connect permission to the existing and future databases on the server.
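If you prefer T-SQL over the Securables tab, here is a minimal sketch that lists the server-level permissions granted to the login from the catalog views (assuming the same SqlCastTest login used above):

SELECT pr.name AS login_name,
       pe.permission_name,
       pe.state_desc
FROM sys.server_principals AS pr
JOIN sys.server_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = 'SqlCastTest'

After running the GRANT script above, you should see CONNECT ANY DATABASE listed with a GRANT state.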

Hope it helps:-)

#JunaithHaja

Select * into using Parallelism in SQL Server 2014

Microsoft has improved the way SELECT * INTO works in SQL Server 2014 by enabling it to run in parallel mode. In this mode the query runs in parallel across the available processors, providing a significant improvement in performance. I used the Production.TransactionHistory table from the AdventureWorks database, altered it to hold 27 million records for this illustration and named it Production.TransactionHistorySample. The database compatibility level should be set to a minimum of 110 for this feature to work.

The below code took only 4:53 minutes to execute in SQL Server 2014

Select * into using Parallelism SQL Server 2014

The same code took 11:53 minutes to run in SQL Server 2008 R2, so the parallel-mode execution in SQL Server 2014 is awesome. On a quick note, a developer can make use of this feature when creating staging tables, which will be very fast. Do try this feature when you get a chance and share your findings with me :-)
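If you want to try it yourself, here is a minimal sketch of the kind of statement involved; the database name, the compatibility-level change and the staging table name are placeholders based on the description above, not the exact code from my screenshots.

ALTER DATABASE [SQLCAST2014] SET COMPATIBILITY_LEVEL = 110   -- minimum level required for parallel SELECT ... INTO
GO
SELECT *
INTO Production.TransactionHistoryStaging    -- staging table created by the parallel SELECT INTO
FROM Production.TransactionHistorySample     -- the 27-million-row sample table described above
GO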

#JunaithHaja

Delayed Transaction Durability In SQL Server 2014

Microsoft has introduced a new feature called Delayed Transaction Durability in SQL Server 2014, which aims to reduce delays happening at the transaction level and to give the front-end client application full-time availability of the database. By default, transactions in SQL Server are durable, which means committed transactions remain in the system even after a system failure. With the 2014 version, Microsoft has introduced two types of transaction durability, as follows:

1. Full Transaction Durability and 2. Delayed Transaction Durability.

Full Transaction Durability:

Consider a data entry user using a front-end application built in ASP.NET and backed by a SQL Server database. When the data entry operator enters 100 records, the data gets written into the log file of the database, and when he hits the Submit button in the client application the 100 records are moved from the log file into the data file. During this process the user does not get control of the client application until the write from the log file to the data file completes, and he cannot process or enter any data at the front end (this corresponds to the processing screen which appears when we submit data over a website; annoying, right?). Once the write process is completed, the log file becomes available again and the user regains the front end and continues with his work. This process is called Full Transaction Durability. On the pro side there is no data loss; on the con side there is significant latency in the client-side application.

Delayed Transaction Durability:

In this method a buffer is used at the log file, and data is sent to the data file periodically whenever the buffer fills up. Imagine the buffer has the capacity to hold up to 25 records and flushes them to the data file once it fills up. Flipping the data entry process above for this method, the data gets written into the data file as the user enters the 26th record, and by the time he enters the 100th record and hits Submit, 75 records have already been moved to the data file; only 25 records remain to be moved, which is taken care of by the buffer flush, and the client application is readily available to the user while the write happens, which means no wait time. This process is called Delayed Transaction Durability. The latency at the client side is significantly reduced and the user is given all-time availability of the client application. The DBA has to configure the database to handle fully durable or delayed durable transactions.
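As a minimal sketch of that configuration (the database name here is a placeholder), delayed durability can be allowed or forced at the database level and, when allowed, opted into per transaction at commit time:

ALTER DATABASE [SQLCAST2014] SET DELAYED_DURABILITY = ALLOWED   -- options: DISABLED | ALLOWED | FORCED
GO
BEGIN TRANSACTION
    -- ... DML statements ...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)   -- this commit returns without waiting for the log flush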

A couple of months back one of my colleagues had an issue with the log file getting filled up at a rapid pace, causing the nightly jobs to fail. I hope configuring the database with Delayed Transaction Durability will enable periodic flushing of data from the log file to the data file, which will prevent the log file from getting filled up. I am going to recommend this to my colleague. I hope you all will too :-)

#JunaithHaja

Inline Specification of Indexes

We know from our experience with previous versions of SQL Server that there are two ways to create an index on a table.

One is to right-click the table in Object Explorer and create it.

The second is to explicitly write a T-SQL CREATE INDEX statement on the table after the CREATE TABLE statement.

There is a single pitfall in the above two methods: the index cannot be created within the table definition like a primary key or foreign key constraint. SQL Server 2014 overcomes this drawback by letting you create an index within the CREATE TABLE definition, which is referred to as inline specification of indexes.

Hence the index can be defined as below within the CREATE TABLE statement:

CREATE TABLE DBO.FRUITSHOP (
ITEMNAME VARCHAR (25) NULL,
QUANTITY INT NOT NULL,
UNITPRICE FLOAT NOT NULL,
INDEX IX_RATE NONCLUSTERED (ITEMNAME, QUANTITY)
)

The above code will create a nonclustered index named IX_RATE, which can be checked in the SYS.INDEXES catalog view using the below query.

SELECT * FROM SYS.INDEXES WHERE NAME = 'IX_RATE'

Create Index by Inline Specification in SQL Server 2014

However, the above CREATE TABLE code will fail in SQL Server 2012 and earlier versions with the following error.

Msg 1018, Level 15, State 1, Line 7

Incorrect syntax near ‘INDEX’. If this is intended as a part of a table hint, A WITH keyword and parenthesis are now required. See SQL Server Books Online for proper syntax.

Hence, SQL Server 2014 makes our coding easier. The inline specification can be extended to clustered and nonclustered indexes with a varying number of columns as needed; the details can be found on the MSDN site.
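For example, here is a minimal sketch of an inline clustered index on a hypothetical table; the table, column and index names are made up for illustration.

CREATE TABLE DBO.FRUITSALES (
SALEID INT NOT NULL,
ITEMNAME VARCHAR (25) NULL,
SALEAMOUNT FLOAT NOT NULL,
INDEX IX_SALEID CLUSTERED (SALEID)
)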

Hope it helps!

In Memory OLTP

I would say the top-notch feature of SQL Server 2014 is the In Memory OLTP concept. Do you know how we view an image or video stored on our hard disk? The image to be viewed is brought into main memory from the hard disk by the software, and that lets us see the image. The same happens in the case of SQL Server too. Generally, the data stored in SQL Server is in the form of data tables called disk-based tables (disk-based tables are our regular tables, which we have been using from 2000 until now). If we run a query against a disk-based table using SQL Server Management Studio, the query gets executed and the result set is produced to us through main memory. In other words, the result set lies in memory, while the original data from a disk-based table is stored on the secondary storage device. Technically, no storage happens at the main-memory level. I hope this reminds us all of a concept we studied in our school days: main memory is meant for faster processing and secondary storage is for storing the data; I am glad that didn't change. One obvious question hits our mind now: with all the advancement in database applications, is it possible to store data in main memory? Yes, we can. It's called memory-optimized tables, a new feature built by Microsoft for SQL Server 2014.

Hence we define memory-optimized tables as tables which store their data in memory at all times. Microsoft has introduced a new In Memory OLTP engine to work with memory-optimized tables. One awesome feature of memory-optimized tables is that no locks are issued against them, which ensures there is no waiting because of blocking and the data is available at all times. Having the data staged in memory, with no blocking of data through locks, ensures significantly faster processing and easier access to the data by applications.

Did I not realize one thing: data in memory will be lost in the event of a power failure, so what happens if the server gets shut down or a system crash happens? Yes, of course the data stored in memory will be lost, but it can be recovered when the server is up and running through checkpoints, which keep a local copy of the data from the memory-optimized table in a filegroup on the hard disk. Memory-optimized tables do come with a lot of pros and a few cons, which I would not be able to summarize in a single post. I would recommend you go through the white paper Microsoft released about In Memory OLTP over the weekend, and remember to utilize this feature when we get to work with 2014. Here is the link for the white paper.
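As a rough sketch of that storage piece (the database name, filegroup name and file path below are placeholders I made up), a database needs a memory-optimized filegroup before it can hold memory-optimized tables:

ALTER DATABASE [SQLCAST2014] ADD FILEGROUP [SQLCAST2014_mod] CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE [SQLCAST2014] ADD FILE (NAME = 'SQLCAST2014_mod', FILENAME = 'C:\Data\SQLCAST2014_mod') TO FILEGROUP [SQLCAST2014_mod]
GO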

I would like to finish this post with the syntax to create a memory-optimized table, as below:

Create Table TableName (
Column1 Datatype PRIMARY KEY NONCLUSTERED,  -- a memory-optimized table needs at least one index, e.g. a nonclustered primary key
Column2 Datatype,
Column3 Datatype
) WITH (MEMORY_OPTIMIZED = ON)

-Junaith Haja

Top 5 features of SQL Server 2014:

Microsoft officially launched SQL Server 2014 this April at an event held in San Francisco, CA. Microsoft adds new features and updates some of its existing features in every SQL Server version. Sooner or later all our systems will be updated to this version.

So what does it mean to a developer?

Is it a good sign or a bad sign for a developer?

Did Microsoft make any ground-breaking changes to its product?

Should I reinvigorate my T-SQL skills to work with 2014?

A developer's mind is bombarded with questions like the above whenever we hear about a version upgrade. Don't worry, folks, I have got your back. Like every other developer, I was excited to get the 2014 version within the first few weeks of its launch, and I have done my review of it. I learned a lot, and my goal is to share some of the valuable insights a SQL Server developer must know to work with 2014. I will be posting my top 5 favorite features of SQL Server 2014 as a series over the next few days. Please stay tuned.

Cheers!!

-Junaith Haja