profile for TheDaveJay at Stack Overflow, Q&A for professional and enthusiast programmers

Monday, 1 July 2013

SqlMigrations aka Continuous Integration & Deployments for MS SQL - Part II

Let's be practical...

This article is a continuation of Part I.

In this part, I will be introducing you to something called SqlMigrations. You will need to install SqlPackage.exe, which comes as part of SSDT (SQL Server Data Tools).

SqlMigrations is a NuGet package that turns a command-line project into a SQL-migrations-style project.

We will work through the following practical steps, explaining why along the way:

1) Setting up a SQL project
2) Setting up a SqlMigrations project
3) Adding SQL migration upgrade and downgrade scripts

Setting up a SQL project

We are going to look at setting up a SQL project and add it to a solution in Visual Studio.

Step 1 - Add a new Sql Project: 

In your solution (if you don't have one, create one), right-click on the solution and add a new project.
Under the "Other Languages" project type, click on SQL Server and select the SQL Server Database Project. Give it the name "SqlMigrationsDemoDatabase":

Step 2 - Importing an existing database

So, I already have a database sitting in "production" (it's actually SQL Express on my local machine), and I want to base my database project on it. That way I know that what I have checked into source control is exactly the same as what's in production. If I make changes, I know they will be compatible with what's in production, and that my migration scripts that have gone through testing will work live.

Visual Studio makes importing an existing database's schema and security really easy.

First, right-click on our newly created project and choose Import -> Database as shown below:

This pops open a wizard.  

Under the "Source database connection" section, create a new connection to the database you want to import:

And then click the start button:

When the wizard has finished, click the "Finish" button to close it down.

You will notice that the wizard has imported your database and has created a folder structure to hold your tables, procs, views, etc. It's worth exploring how the project is set up, but that's a little out of scope for what we are doing here.

So now we have our database project. What we want next is the ability to deploy the database via a command-line application. You can deploy by right-clicking the project and selecting "Deploy", but that functionality isn't available to us from a CI/CD point of view. We also want a physical script file that we can keep and run against all servers in the future.

We also want the database to be created on deployment if it doesn't exist. Say you want to develop in your own local environment and not affect other developers. To do this, we need something that can run the generated scripts and keep track of which ones have already been run.
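To make that bookkeeping concrete, here is a small, language-agnostic sketch in Python using sqlite3. SqlMigrations itself targets MS SQL and its real table and file names may differ; everything here (the `_migrations` table, the script names) is my own naming for illustration:

```python
import sqlite3

def run_pending(conn, scripts):
    """Run each (name, sql) script at most once, recording it in a tracking table."""
    conn.execute("CREATE TABLE IF NOT EXISTS _migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM _migrations")}
    for name, sql in scripts:
        if name in applied:
            continue  # this script has already been run on this server
        conn.executescript(sql)
        conn.execute("INSERT INTO _migrations (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
scripts = [
    ("001_Baseline.sql", "CREATE TABLE Person (Id INTEGER, Name TEXT);"),
    ("002_AddItem.sql",  "CREATE TABLE Item (Id INTEGER, Value TEXT);"),
]
run_pending(conn, scripts)
run_pending(conn, scripts)  # idempotent: nothing is applied twice
```

Because every applied script is recorded, a fresh database receives every script from the baseline up, while an existing database only receives the scripts it has not yet seen.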

Setting up a SQL Migrations Project

Carrying on from the last practical step, we want to be able to generate update scripts and store them to be run later. This is where the SqlMigrations package comes to the rescue (note: it's a beta release). It allows you to create incremental scripts for updating your database, store them in a command-line project, and run those scripts later on.

Step 1 - Creating a SqlMigrations project:

Create a new command line project in your solution and call it "SqlMigrations":

When you have done that, install the SqlMigrations nuget package to that project. To do so, follow the steps below:

1) Open the Package manager console by clicking: Tools -> Library Package Manager -> Package Manager Console:

2) In the manager console, ensure the SqlMigrations Project is selected:

3) Install the package using the following command: Install-Package SqlMigrations

You will notice the following things are added to your SqlMigrations project:

  1. Scripts folder: this is the folder the tool will look for scripts in. The scripts are "Embedded Resource" items in the project; this was done to prevent the accidental tampering that can happen to loose script files.
  2. App.config: this stores the SQL connection string for your "CIServer" and the path to the dacpac file generated by your SQL project. It also contains settings like the MDF and LDF file path locations for newly created databases (used if they differ on each server).
  3. Deploy.ps1: I added this file for people who use Octopus Deploy. You will notice that a build/version number must be passed in order to roll back databases. If you don't supply one, the default is 0.
  4. Example code: the code that is required for running the app.
Rebuild the whole solution to make sure there are no errors!

Setting the CIServer

At this point, there must be no database on your CI SQL server. I'll explain why in Part III when we get to Continuous Integration and Deployment and how it works within a team of developers.

1) Open up the App.config and set your CIServer connection string:

2) Since NuGet doesn't work with SQL projects, we need to set the relative path from the SqlMigrations project to the dacpac file (in the bin folder) of the database project. If you can't find the dacpac file, be sure to build the "SqlMigrationsDemoDatabase" project first:

3) Copy the code from the "ExampleCode.cs" file, paste it into your Program.cs file, and resolve any namespace issues:

We are now ready to add our baseline script.

Adding a baseline script.

Now that we have installed the NuGet package and set up the App.config, we need to create a baseline script. The baseline script is used to create the database from scratch. For people with existing databases, it serves as the "production baseline": the script that matches the existing production database you will be building from.

To do so, follow these steps:

1) Go to the package manager console, and ensure the SqlMigrations project is selected.
2) Type the following command: "AddSqlMigration"
3) You should see the following:

Don't worry about the last error message. The migration script creator tries to create an undo script, but at this point there is no existing database to generate an undo script against.

You will also notice that a new file has been added to your scripts folder:

If you open that file, you will see the SQL script that was generated to create your database and then run all the schema scripts (create tables, views, procs, etc.) that exist in your SQL project. Since this is a special file (your baseline), we must rename it to "_Baseline.sql":

Your SqlMigrations project will know to run this file if the database doesn't exist on the deployment server.


From this point on, you will have source-controlled a database that matches the production server. In Part III we will look at CI and CD using the SqlMigrations project, but for now you can manually run SqlMigrations.exe in your solution to auto-deploy your new database.

To manually deploy your script to your CI server, you can run the SqlMigrations project in Visual Studio.
You will see a screen like this when the console app has finished running:

If you go look at your CI server, you will see that your database has been created!

You are now ready to go ahead and create incremental update scripts.

Adding SQL Migration upgrade and downgrade scripts

Adding incremental upgrade scripts is even easier than the previous section. To prove this, we will add a new table to our SQL project and create the migration script for it:

1) Go to the database project and add a new table called "Item". Add two columns to that table, called Id and Value. You can set the PK on Id and make it an identity column if you like - up to you.
2) Save your changes and rebuild the project.
3) Go to the package manager console and use the "AddSqlMigration" command again.
4) Have a look at the two new files added to your SqlMigrations project:

Open the first file, the one without the "_Undo" suffix. Notice how it contains the incremental change to create the new table. Fantastic!

Now have a look at the "_Undo.sql" file and notice how it contains the "DROP TABLE" script. This will be used by the rollback strategy in CI and CD to roll back to previous builds!
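To see how the upgrade/undo pairing supports rollback, here is a small sketch in Python with sqlite3. This is my own illustration of the idea, not the package's actual implementation: each build's upgrade script has a matching undo script, and rolling back to an earlier build means running the undo scripts in reverse order:

```python
import sqlite3

# Hypothetical build history: (build number, upgrade script, undo script).
migrations = [
    (1, "CREATE TABLE Person (Id INTEGER);", "DROP TABLE Person;"),
    (2, "CREATE TABLE Item (Id INTEGER, Value TEXT);", "DROP TABLE Item;"),
]

def rollback_to(conn, target_build):
    """Undo every build above the target, newest first."""
    for build, _upgrade, undo in reversed(migrations):
        if build > target_build:
            conn.executescript(undo)

conn = sqlite3.connect(":memory:")
for _build, upgrade, _undo in migrations:
    conn.executescript(upgrade)  # deploy builds 1 and 2

rollback_to(conn, 1)             # roll back build 2: Item is dropped
```

This is also why Deploy.ps1 asks for a build/version number: it is the target build to roll back to.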

That's it for now. This was a very practical guide on how to conduct SQL migrations as a developer.

In the next part we will be looking at how we can use CI and CD tools to handle the deployments, and how it should work between team members developing against the same database.


SqlMigrations aka Continuous Integration & Deployments for MS SQL - Part I

Introduction and Background

Trying to handle SQL database changes manually is nothing but challenging and problematic. I have never witnessed a manual deployment of a SQL database be a success. The main reason is "Oh, I forgot to include something" or "I didn't run the scripts in the right order"!

MS-SQL databases have always been difficult when it comes to Continuous Integration and Continuous Deployments.

There are really rich tools, such as the Red Gate SQL suite, that try to make your life easier, but they come at a cost, and you also find that you can only use Red Gate source control, etc.

This doesn’t help me because I have already got a project in source control with all its history etc. 
I also don’t want to be confined to the deployment manager they have, as I am already using Octopus Deploy (really awesome stuff) to manage my deployments.

There is also the free alternative that comes with the SSDT tools, SqlPackage.exe, but as with Red Gate's tooling, the delta script for updating the database is created on the fly when comparing the two databases!

This to me doesn’t work because of the following reasons:

  • The change script cannot be tested and verified to work on all servers. There is the risk that the script generated between dev and the CI server won't be the same as the script generated between the CI and the QA/Staging/Production servers. I want one script to run them all. SqlPackage.exe does have the ability to "script" the change to a file, but how do we then manage that script?
  • Working in environments disconnected from the source control server - we have a "stronghold room" that no CI tool such as TeamCity can connect to, so only USB sticks can be used to do the deployment!

One of the best things I have seen when it comes to handling upgrade migration scripts is Entity Framework Code First Migrations, but I am not keen to use Code First for large enterprise applications.
I already have my database in TFS using a SQL project, and I don't want to change all my code to hook up EF Code First.

I thought to myself how grand it would be to mimic the power of small incremental scripts, as in EF Migrations, while still harnessing the power of SqlPackage.exe and SqlCmd.exe.

The only reason I need SqlCmd.exe is because of the scripts SqlPackage.exe generates - they can only be run by SqlCmd.exe, which I personally think is a good thing, but that debate is not part of this article.

Another really difficult part, which we will explore, is database rollback. This is a huge challenge that most Ops guys care about. As a developer, you should have the ability to roll back a database to its previous version in case a deployment goes pear-shaped.


Before we start, I want to put together a list of objectives I want to achieve before looking at my proposed solution:
  • Must be easy to deploy and roll back upgrade scripts
  • Must be easy to generate update and rollback scripts
  • Must be deployable from a command-line-driven approach, for tools like Octopus Deploy

In Part II, I will take you through a practical, step-by-step example of how to use the SqlMigrations tool I created (available on NuGet). It is still in beta, but I will be investing more time to make it great.

Wednesday, 19 December 2012

No Pain - No Gain

No Pain - No Gain (or so they say)

I have recently been learning that the saying "No Pain - No Gain" really depends on the software you are developing and the culture of the company you may have recently joined. I believe this phrase can be broken into three rather more accurate sayings, which also depict the journey a 10-year-old start-up company could expect (based on my experience so far) when trying to become a well-established company that can stand with its head high and proud.

Step 1 in the journey : Pain – No Gain

For the past 6 months, I have witnessed really painful development practices, long-winded and very complex deployments, a framework architecture swollen with reinventions of already existing FREE off-the-shelf software, and, to top it off, a bastardised agile methodology. "Pain - No Gain" sums up the company's future.
If we were to leave the company's software and culture in its current state, it would collapse on itself. It's becoming more and more difficult to meet the customers' requirements because the underlying framework is over-engineered, was designed by junior developers and self-acclaimed architects at the time, and was never intended to cope with the modern requirements of today.
This part of the journey lasts the longest. It usually requires people of strong will and determination to break out of this phase. Skilled developers and architects who join are usually gone within 2 weeks because it takes too long to get over the "change inertia". This is also the result of years of turning a blind eye to the issues, thinking they will go away.

Step 2 in the journey : Pain –  Gain (Feel the burn!)

Once we managed to get the right resources in place to start the change, things became more painful. One of the first things we tried to correct, which is critical to the business process, was implementing Scrum in the development team.

As most people who have implemented agile methodologies in their work environment will know, agile has this amazing ability to expose the true issues in a company - and this was no different for us. This is not a bad thing, and should not be feared. The more we know about what's going wrong, the more we are able to fix it - evolving "Pain - No Gain" into "Pain - Gain".

This is where pain starts turning into gain. The gain we are achieving may not affect the customers yet, but it's definitely going to help the future. This step can last a year or two due to the following factors:

  • Spreading the cost of redevelopment and migration in a low-risk fashion
  • Changing a company's culture is hard work - there is some Stockholm syndrome involved for certain individuals. This may sound weird, but the guys who worked with the original framework have come to love it - even though it's killing them
  • Implementing agile methodologies takes time, and usually only starts coming right after at least 6 sprints. Unconscious habits only form after doing something for long periods of time

Step 3: No Pain –  Gain (Now what?)

This is the final step that all companies should strive for in the journey. This is the part where all the right practices are being followed, the software is amazing for the developers to work with, the long-winded deployments are done at the click of a button, and the users get an amazing user experience.
Unfortunately, this step is hard to come by in the real world - but it is achievable. Sometimes it happens unnoticed because we are always busy. When you get to the "Now what?" point, let me know. I would be interested to see if you end up back at "Pain - No Gain".

Thursday, 12 April 2012

Custom logging handler for NLog

Just a quick one - below is a custom NLog helper which offers a centralized location for logging in a Windows service. It's compatible with Elmah's logging stored procedure, and it allows you to easily log to multiple NLog targets using a flagged enum.

Class code (you must add the NLog DLL as a reference)

    using System;
    using System.Configuration;
    using System.Security.Principal;
    using NLog;

    public class LogHelper
    {
        public static LogHelper CurrentHelper = new LogHelper();

        [Flags]
        public enum LogType
        {
            Error = 0x01,
            CriticalError = 0x02,
            Warning = 0x04,
            Info = 0x08,
            Debug = 0x10,
            Trace = 0x20
        }

        public void Log(string message, LogType logType, Exception ex = null, Type currentType = null)
        {
            Logger logger = GetLoggerFromType(currentType);

            foreach (LogType flag in Enum.GetValues(typeof(LogType)))
            {
                // Only dispatch for the flags actually set on the flagged enum.
                if ((flag & logType) != flag)
                    continue;

                LogLevel level = LogLevel.Off;
                switch (flag)
                {
                    case LogType.Warning:       level = LogLevel.Warn;  break;
                    case LogType.CriticalError: level = LogLevel.Fatal; break;
                    case LogType.Error:         level = LogLevel.Error; break;
                    case LogType.Info:          level = LogLevel.Info;  break;
                    case LogType.Trace:         level = LogLevel.Trace; break;
                    case LogType.Debug:         level = LogLevel.Debug; break;
                }

                DoLog(logger, message, level, ex, currentType);
            }
        }

        private static void DoLog(Logger logger, string message, LogLevel level, Exception ex = null, Type type = null)
        {
            LogEventInfo info = new LogEventInfo(level,
                                                 (type == null) ? typeof(LogManager).FullName : type.FullName,
                                                 null, message, null, ex);

            info.Properties["ApplicationName"] = ConfigurationManager.AppSettings["ApplicationName"];
            info.Properties["Host"] = Environment.MachineName;
            info.Properties["Type"] = (ex == null) ? level.ToString() : ex.GetType().FullName;
            info.Properties["CurrentUser"] = WindowsIdentity.GetCurrent().Name;
            info.Properties["Source"] = (ex == null) ? level.ToString() : ex.Source;

            string error = (ex == null) ? message : ex.ToString();

            // Elmah-style error XML consumed by the ELMAH_LogError proc.
            info.Properties["Error"] = string.Format(
                @"<error host=""{0}"" type=""{1}"" message=""{2}"" source=""{3}"" detail=""{4}"" ><serverVariables>
                    <item name=""SERVICE_NAME""><value string=""{5}""/></item>
                    <item name=""LOGON_USER""><value string=""{6}""/></item>
                  </serverVariables></error>",
                Environment.MachineName,
                info.Properties["Type"],
                message,
                info.Properties["Source"],
                error,
                info.Properties["ApplicationName"],
                info.Properties["CurrentUser"]);

            logger.Log(info);
        }

        private static Logger GetLoggerFromType(Type currentType)
        {
            return (currentType == null)
                       ? LogManager.GetCurrentClassLogger()
                       : LogManager.GetLogger(currentType.FullName);
        }
    }

NLog config (with Elmah proc)

Add this file to your solution, calling it NLog.config. This is the file NLog will look for:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <!-- See the NLog configuration file wiki
       for information on customizing logging rules and outputs. -->
  <targets>
    <!-- add your targets here -->
    <target xsi:type="File" name="f" fileName="${basedir}/logs/${shortdate}.log"
            layout="${longdate} ${uppercase:${level}} ${message}" />

    <target name="elmah" xsi:type="Database" keepConnection="true" useTransactions="true"
            dbProvider="sqlserver" connectionString="Data Source=im-dev01\;Initial Catalog=Elmah;Integrated Security=False;User ID=Elmah;Password=1234;">
      <commandText>
        DECLARE @ID uniqueidentifier; SET @ID = NEWID();
        DECLARE @time datetime; SET @time = GETUTCDATE();
        EXEC ELMAH_LogError @ID, '${event-context:item=ApplicationName}', '${event-context:item=Host}',
             '${event-context:item=Type}', '${event-context:item=Source}', '${message}',
             '${event-context:item=CurrentUser}', '${event-context:item=Error}', 1, @time
      </commandText>
    </target>

    <target xsi:type="Mail" name="mailman" smtpServer="Mail.MailService.local" from=""
            html="true" to=""
            subject="Service Error"
            body="&lt;h1&gt;Date and Time&lt;/h1&gt;${longdate}    &lt;h1&gt;Message&lt;/h1&gt;${message} &lt;h1&gt;Exception Details&lt;/h1&gt;
                  &lt;h2&gt;Error Details&lt;/h2&gt; ${exception:format=ToString} &lt;h2&gt;Stack Trace Details&lt;/h2&gt; ${exception:format=StackTrace}" />

    <target name="TraceLog" xsi:type="File" fileName="Logs/Traces.log" createDirs="true"/>
    <target name="ErrorLog" xsi:type="File" fileName="Logs/Errors.log" createDirs="true" layout="${message}\r\n${exception:format=ToString}"/>
    <target name="WarningLog" xsi:type="File" fileName="Logs/Warnings.log" createDirs="true"/>
    <target name="InfoLog" xsi:type="File" fileName="Logs/Info.log" createDirs="true"/>
  </targets>

  <rules>
    <logger name="*" level="Fatal" writeTo="mailman" />
    <logger name="*" level="Error" writeTo="elmah" />
    <logger name="*" level="Info" writeTo="elmah" />
    <logger name="*" level="Trace" writeTo="elmah" />
    <logger name="*" level="Warn" writeTo="elmah" />
    <logger name="*" level="Debug" writeTo="elmah" />
  </rules>
</nlog>
Example of using the code:

LogHelper.CurrentHelper.Log("Error has occured", LogHelper.LogType.Debug | LogHelper.LogType.CriticalError, exceptionObject, this.GetType());       

This will log to Elmah and send a mail at the same time.
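The routing above works because LogType is a flagged enum: one call carries several destinations OR-ed together, and the helper loops over the defined flags to see which are set. The same mechanism in a short Python sketch (enum.Flag standing in for the C# flags enum; the names are mine):

```python
from enum import Flag, auto

class LogType(Flag):
    ERROR = auto()
    CRITICAL_ERROR = auto()
    WARNING = auto()
    INFO = auto()
    DEBUG = auto()
    TRACE = auto()

def log(message, log_type):
    """Return one (flag name, message) event per flag set on log_type."""
    return [(flag.name, message) for flag in LogType if flag in log_type]

# One call, two destinations - mirrors Debug | CriticalError above.
events = log("Error has occured", LogType.DEBUG | LogType.CRITICAL_ERROR)
```

Each set flag produces its own dispatch, which is exactly how one call can both write to Elmah and trigger a mail.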

Network tracing for an app without code!

There have been times when I needed to dump the incoming and outgoing packets at a network level.

The simplest way, and quite an efficient one, is to add a System.Diagnostics tracing section to your application's .config file.

Below is an example:

  <system.diagnostics>
    <sources>
      <source name="System.Net" tracemode="includehex" maxdatasize="1024">
        <listeners><add name="System.Net"/></listeners>
      </source>
      <source name="System.Net.Sockets">
        <listeners><add name="System.Net"/></listeners>
      </source>
      <source name="System.Net.Cache">
        <listeners><add name="System.Net"/></listeners>
      </source>
      <source name="System.Net.HttpListener">
        <listeners><add name="System.Net"/></listeners>
      </source>
    </sources>
    <switches>
      <add name="System.Net" value="Verbose"/>
      <add name="System.Net.Sockets" value="Verbose"/>
      <add name="System.Net.Cache" value="Verbose"/>
      <add name="System.Net.HttpListener" value="Verbose"/>
    </switches>
    <sharedListeners>
      <add name="System.Net" type="System.Diagnostics.TextWriterTraceListener" initializeData="network.log"/>
    </sharedListeners>
    <trace autoflush="true"/>
  </system.diagnostics>

Below is a brief description of what the different sources will trace:

System.Net.Sockets - Some public methods of the Socket, TcpListener, TcpClient, and Dns classes.

System.Net - Some public methods of the HttpWebRequest, HttpWebResponse, FtpWebRequest, and FtpWebResponse classes, and SSL debug information (invalid certificates, missing issuers list, and client certificate errors).

System.Net.HttpListener - Some public methods of the HttpListener, HttpListenerRequest, and HttpListenerResponse classes.

System.Net.Cache - Some private and internal methods in System.Net.Cache.

Self-installing C# Windows service - the safe and easy way.


This post is on how to create a self-installing Windows service using C# and the managed installation features that come with .NET.

The source code for this example can be found here: GitHub

Step1 - Creating the new project

First of all, we need to create a new project called SelfInstallingService. I am using Visual Studio 2010 (.NET 4) for this project. This will be a standard console app. There are many tutorials on how to create a console app, so if you are unsure, I suggest you Google it :)

Step2 - Adding the references to the project.

 You will need to add the following references to your project:

- System.ServiceProcess
- System.Configuration.Install

Step3 - Creating the service

  1. Open the Program class and extend the class by inheriting from the ServiceBase class.
  2. Add a public static string to hold the current service name
  3. Create a new constructor in the Program class, and put the code you want to run there, i.e. code that would normally go into the Main method

Your code will look as follows:


     4. Add the following methods to the class and resolve the namespaces for the new classes being used:

        protected override void OnStart(string[] args)
        {
            // start any threads or http listeners etc.
        }

        protected override void OnStop()
        {
            // stop any threads here and wait for them to be stopped.
        }

        protected override void Dispose(bool disposing)
        {
            // clean up your resources if you have to
            base.Dispose(disposing);
        }

        private static bool IsServiceInstalled()
        {
            return ServiceController.GetServices().Any(s => s.ServiceName == InstallServiceName);
        }

        private static void InstallService()
        {
            if (IsServiceInstalled())
                return; // already installed - nothing to do

            ManagedInstallerClass.InstallHelper(new string[] { Assembly.GetExecutingAssembly().Location });
        }

        private static void UninstallService()
        {
            ManagedInstallerClass.InstallHelper(new string[] { "/u", Assembly.GetExecutingAssembly().Location });
        }

5) Update the Main method in the class to have the following code:

        static void Main(string[] args)
        {
            bool debugMode = false;
            for (int ii = 0; ii < args.Length; ii++)
            {
                switch (args[ii].ToUpper())
                {
                    case "/NAME":
                        if (args.Length > ii + 1)
                            InstallServiceName = args[++ii];
                        break;
                    case "/I":
                        InstallService();   // install, then exit
                        return;
                    case "/U":
                        UninstallService(); // uninstall, then exit
                        return;
                    case "/D":
                        debugMode = true;
                        break;
                }
            }
            if (debugMode)
            {
                // run the service body as a plain console app
                Program service = new Program();
                service.OnStart(null);
                Console.WriteLine("Service Started...");
                Console.WriteLine("<press any key to exit...>");
                Console.ReadKey();
                service.OnStop();
            }
            else
            {
                System.ServiceProcess.ServiceBase.Run(new Program());
            }
        }

You can see that the following command-line arguments can be passed:
- /NAME - sets the name of the service
- /I - installs the service
- /U - uninstalls the service
- /D - runs the service in debug mode, i.e. as a console application

Step4 - Creating the installer

1) Add a new class called CustomServiceInstaller and inherit from the Installer class.
2) Add the class attribute "RunInstaller", which takes a boolean value. Set this value to true.
3) Resolve any namespace issues in the class.

Your code will look as follows:

    [RunInstaller(true)]
    public class CustomServiceInstaller : Installer

4) In the same class, create a constructor that creates the new process info and service:

        private ServiceProcessInstaller process;
        private ServiceInstaller service;

        public CustomServiceInstaller()
        {
            process = new ServiceProcessInstaller();
            process.Account = ServiceAccount.LocalSystem;

            service = new ServiceInstaller();
            service.ServiceName = Program.InstallServiceName;

            // Both installers must be registered for InstallHelper to find them.
            Installers.Add(process);
            Installers.Add(service);
        }


Step5 - You're done!

Your service is now complete. To run your program in debug mode, you can add /D to the command-line arguments in the project's debug settings, or you can set "debugMode" to true.

Below is an example of the command lines you can use to install and uninstall the service:

Install a service with the default name:

SelfInstallingService.exe /I

Uninstall service with the default name:

SelfInstallingService.exe /U

Install a service with custom service name:

SelfInstallingService.exe /NAME SelfInstallingServiceCustom /I

Uninstall service with custom name:

SelfInstallingService.exe /NAME SelfInstallingServiceCustom /U

Run the service as a console app

SelfInstallingService.exe /D

Friday, 22 July 2011

Hash matching in .NET and SQL

I was faced with a problem where passwords were being stored in SQL as a hash, and each time a new password was needed, I had to write code to generate a new one -- FAIL!

Anyway, I decided to sort this out.

MS SQL offers the HashBytes function to create a hashed byte array from a string, as follows:

SELECT HashBytes('MD5', 'teststringtohash')

That was easy! Now, since the passwords are being stored as a string, all you have to do to convert it is the following:

SELECT CONVERT(NVARCHAR(32),HashBytes('MD5', 'teststringtohash'),2)

Which returns '5B076E2B8572A2A80645BFD1D4046D23'.

Now, to generate the same hash in C# so we can compare it to the database value, all you have to do is use the HashAlgorithm class, compute the hash, and convert it to a hex string. Since we are using an MD5 hash in SQL, the HashAlgorithm will also have to be MD5. See the code below:

1) Create the hash algorithm.

HashAlgorithm alg = MD5.Create();
2) Compute the hash, and use BitConverter.ToString to get the hex string. The hex string contains '-' between each byte, so remove them to match SQL!
byte[] hashedData = alg.ComputeHash(Encoding.UTF8.GetBytes("teststringtohash"));
string password = BitConverter.ToString(hashedData);
password = password.Replace("-", "");

All you have to do from here is make sure that password == sqlpassword :)
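If you want to sanity-check the shape of that round trip outside .NET, the same digest-to-hex conversion is a couple of lines in Python (hashlib here plays the role of the C# MD5 class; the input bytes match the varchar literal SQL hashed):

```python
import hashlib

# MD5 digest of the same bytes, rendered as upper-case hex with no separators.
password = hashlib.md5(b"teststringtohash").hexdigest().upper()

# A 16-byte MD5 digest always yields 32 hex characters - hence NVARCHAR(32) in the SQL CONVERT.
print(password)
```

One caveat worth adding: MD5 is fine for demonstrating the matching mechanics, but it has long been broken for password storage - prefer a salted, deliberately slow hash in new code.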

Friday, 25 March 2011

Dependency Injection / Inversion of Control (IoC) in .NET C#


DI is one of those things that is assumed to be difficult, and without understanding it, it can be one of those things you will never try again if you get it wrong. I myself was one of those people.

I have been working with a colleague who insists on using it, and being open-minded to the suggestion again, I asked him to show me what it's about - in a correct implementation. He also gave me a fantastic book to read: Dependency Injection in .NET by Mark Seemann.

I am now a fan of DI, and can see the use for it. This article explains DI, and how to do it practically, along with the reasoning behind it.

What is Dependency Injection

DI by definition is "a set of software design principles and patterns that enable us to develop loosely coupled code" 

Within OOP, this means that collaborating classes should rely on the infrastructure to supply the necessary services. This leads to code that is more maintainable and easier to test.

What is Loose Coupling and how does it fit with DI?

Loose coupling follows the design principle "program to an interface, not an implementation". Following this principle makes your code more extensible and therefore leads to higher maintainability.
DI is a technique that enables loose coupling in your code.

DI and Loose Coupling in the real world

The best "real world" analogy I have come across when it comes to explaining Loose Coupling is the  plug and socket.

Anyone who has been to a hospital has most likely seen something like this:

TV wired directly to the wall

As you can see, the TV is wired directly to the wall. Normally it would make no practical sense to have your electrical device wired directly to the wall outlet. If your device broke, you would need a skilled electrician to come and disconnect the device from the wall and replace it for you. This would come at a practical cost.

What would make more sense is to have a plug and socket, where any old handyman can come in, unplug it, and swap it out - meaning no electrician required:
Simple Plug and socket

This analogy applies to our programming. The socket in this case is an interface defining what shape the plug needs to be. Not only does this make things more maintainable, it means we are no longer constrained to one type of device. We could easily swap in a new HD 3D TV, or even a projector of some sort.

To give a real life example as to how to implement this within your application, think about caching.


You are developing a web application that sits on only one server. You know that within a year the site will grow onto a web farm, and some of the caching will need to be distributed evenly. For now you don't have the time to develop and test distributed caching, so you are just going to stick to the "Session" cache.

You want to be able to swap this caching mechanism out in the next release for a distributed caching system, such as SQL Server, AppFabric, NCache, etc.


Below is an example of how we would structure the application so it has a loosely coupled caching mechanism:

1) Create an interface that has the Add and Get from cache functionality:

    public interface ICacheProvider
    {
        void AddToCache<TValue>(string key, TValue value);
        TValue GetFromCache<TValue>(string key);
    }

As you can see, we have a generic "AddToCache" which takes a key and a value. We also have "GetFromCache" which takes a key and returns a generic type.

2) Create an implementation of the ICacheProvider. In this case it will be something that adds to the Session Cache, so we will call it SessionCacheProvider:

    public class SessionCacheProvider : ICacheProvider
    {
        // Outside of a Page, the session is reached via
        // HttpContext.Current (in the System.Web namespace).
        public void AddToCache<TValue>(string key, TValue value)
        {
            HttpContext.Current.Session[key] = value;
        }

        public TValue GetFromCache<TValue>(string key)
        {
            return (TValue)HttpContext.Current.Session[key];
        }
    }


3) In your code behind, take a dependency on the ICacheProvider:

    public class SomeCodeBehindFile
    {
        private readonly ICacheProvider cacheProvider;

        public SomeCodeBehindFile(ICacheProvider cacheProvider)
        {
            this.cacheProvider = cacheProvider;
        }

        public void AddItemToCache(string itemClicked)
        {
            this.cacheProvider.AddToCache<string>("SomeItemKey", itemClicked);
        }

        public string RetrieveItemFromCache()
        {
            return this.cacheProvider.GetFromCache<string>("SomeItemKey");
        }
    }


As you can see, the SomeCodeBehindFile class is completely unaware of where the cache lives or how it works. All it knows is that it can add data to some cache, and get data back from it.

Whatever creates an instance of this class can pass in any cache provider it wants. In this case, we would pass in an instance of the session cache provider, i.e.

    SomeCodeBehindFile scbf = new SomeCodeBehindFile(new SessionCacheProvider());

This method of setting the implementing class is called Constructor Injection. There are other techniques for supplying the implementing class, which we will cover later. For now, just know that we can easily switch the implementation to an AppFabricCacheProvider without making any code changes in the SomeCodeBehindFile class - this makes it highly maintainable!
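To make the swap concrete, here is a hypothetical in-memory provider - the name DictionaryCacheProvider and its Dictionary backing store are my own, purely for illustration (for example, for use in a unit test). The ICacheProvider interface from step 1 is repeated so the sample compiles on its own:

```csharp
using System.Collections.Generic;

// The interface from step 1, repeated so this sample is self-contained.
public interface ICacheProvider
{
    void AddToCache<TValue>(string key, TValue value);
    TValue GetFromCache<TValue>(string key);
}

// Hypothetical in-memory provider - illustrative only, not part of the
// production code. It can stand in wherever ICacheProvider is expected.
public class DictionaryCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> cache = new Dictionary<string, object>();

    public void AddToCache<TValue>(string key, TValue value)
    {
        this.cache[key] = value;
    }

    public TValue GetFromCache<TValue>(string key)
    {
        object value;
        // Return the type's default (null for reference types) on a miss.
        return this.cache.TryGetValue(key, out value) ? (TValue)value : default(TValue);
    }
}
```

Wiring it up is a one-line change at the call site - new SomeCodeBehindFile(new DictionaryCacheProvider()) - while SomeCodeBehindFile itself is untouched.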

The mis-conceptions of DI

There are 4 common misconceptions about DI, which I myself fell prey to. Below are the actual facts about DI:
  • DI IS NOT only relevant for late binding. 
Late binding refers to the ability to replace parts of an application at runtime, and is only one aspect of DI.
  • DI IS NOT only relevant for unit testing - although it certainly promotes it, if you choose to do it (which you should).
  • DI IS NOT an abstract factory
This is one of the biggest myths I had to unlearn. DI itself is not a dependency factory. 
It doesn't resolve dependency objects for you, and it is not a Service Locator - DI is in fact the exact opposite.
DI is a way of structuring code so that developers never imperatively ask for dependencies, but instead force consumers to supply them.
  • DI DOES NOT require a DI container.
The DI container is an optional library that can make life easier when assigning components at application wire-up. It is by no means required. Later on we will look at the differences between using a container and doing it manually.
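To make that distinction concrete, here is a rough sketch using the SessionCacheProvider from earlier. The container calls are illustrative pseudocode - real container APIs (Unity, Castle Windsor, etc.) differ in detail:

```
// Manual wire-up ("Pure DI") - done once, at application start-up:
SomeCodeBehindFile page = new SomeCodeBehindFile(new SessionCacheProvider());

// Container wire-up - register the mapping once, resolve on demand
// (illustrative pseudocode, not a specific container's API):
container.Register<ICacheProvider, SessionCacheProvider>();
SomeCodeBehindFile page = container.Resolve<SomeCodeBehindFile>();
```

Either way, SomeCodeBehindFile only ever sees the ICacheProvider interface; the container merely automates the wire-up step.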


Thursday, 10 March 2011

Sql Connection Pooling in .Net

Sql Connection pooling is one of the hidden heroes that not many developers I know take the time to look into.

Granted, ADO.NET covers up so much of what's really going on and takes care of the nitty-gritty details, so as a .Net developer you shouldn't have to worry too much -> WRONG! When it comes to optimizing performance, you NEED to know what's really happening!

Sql Connection Pool Basics: 

Fact: Opening a database connection is a resource-intensive and time-consuming operation.

A connection pool is a cache of sql connections that is maintained so the connections can be reused when future requests to connect to the database arrive. By reusing active database connections, the performance of .Net applications, be it Web or Windows, is increased!

Connection Pools in .Net are managed and maintained by the Connection Pool Manager. When a new connection request comes in, the Connection Pool Manager checks whether the current connection pool contains any unused connections. If one is available it is returned; otherwise, if all connections currently in the pool are busy and the maximum pool size has not been reached, a new connection is created and added to the pool. Once the pool reaches its maximum size, all new requests are queued until a connection in the pool becomes available or the connection attempt times out.

Connection pooling behavior can be controlled via connection string parameters. Below are the most common parameters:

  • Connect Timeout - controls the wait period in seconds when a new connection is requested (as mentioned above); if this timeout expires, an exception is thrown. The default connection timeout is 15 seconds.
  • Max Pool Size - specifies the maximum size of your connection pool. By default it is 100 connections. 
  • Min Pool Size - specifies the initial number of connections that will be added to the pool upon its creation. By default, this value is 0.
  • Pooling - controls whether or not to use connection pooling. The default value is true.
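Putting these together, a connection string tuning the pool might look like this (the server, database, and specific values are placeholders for illustration):

```
Server=myServer;Database=myDatabase;Integrated Security=True;
Connect Timeout=30;Min Pool Size=5;Max Pool Size=200;Pooling=True;
```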

Most Common Issues and Resolutions

The following exception is most probably the most common issue:
"Exception: System.InvalidOperationException
Message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Source: System.Data
   at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction)
   at System.Data.SqlClient.SqlConnection.Open()"

This exception usually happens due to connection leaks - a condition where your application does not close its database connections correctly and consistently.

When you "leak" connections, they remain open until the garbage collector (GC) eventually finalizes them for you. Unlike classic ADO, ADO.NET requires you to close your database connections explicitly as soon as you're done with them. Since we cannot rely on the GC to do the work for us - garbage collection can take a very long time to happen - we have to do this in code. There are 3 ways to achieve this:

Using the Close() method:

    SqlConnection conn = new SqlConnection(myConnectionString);
    conn.Open();
    //execute query
    conn.Close();

Using a "Try...Finally" clause:

    SqlConnection conn = new SqlConnection(myConnectionString);
    try
    {
        conn.Open();
        //execute query
    }
    finally
    {
        conn.Close();
    }

and using the "using()" statement:

    using (SqlConnection conn = new SqlConnection(myConnectionString))
    {
        conn.Open();
        //execute query
    }

If you use SqlDataReader, OleDbDataReader, etc., close them too. Even though closing the connection itself seems to do the trick, put in the extra effort and close your data reader objects explicitly when you use them!
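As a sketch of that extra effort, nested using blocks close the reader, the command, and the connection deterministically - even when an exception is thrown. The query, table, and connection string below are placeholders for illustration:

```csharp
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        // Placeholder connection string - substitute your own.
        string myConnectionString = "Server=.;Database=myDatabase;Integrated Security=True;";

        using (SqlConnection conn = new SqlConnection(myConnectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            } // reader closed here, even if an exception is thrown
        } // connection closed here and returned to the pool
    }
}
```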

For more information, you can read: