What’s Database Drift?

The first time I heard someone mention drift at Redgate, it made sense to me. After all, in the context of a database schema, I can see the code “drifting” away from what you expect. Someone noticed this was a problem when implementing continuous delivery, and DLM Dashboard was born.

I grew up on the water, and I learned that if you don’t anchor these things down, they move.


In a database, our code can be the same way. Actually, all code is like this, which is why most professional software developers learn to use a Version Control System (VCS). Code changes, and you want to have some way to anchor down the code you need to work with.

We don’t want to prevent changes to a database with some heavyweight process. I’ve worked in those places, and it’s an impediment to getting business done. On the other hand, we can’t have uncontrolled changes. I’ve been in those environments, and apart from the instability for the business, this creates bad relationships between technical people.

That’s one reason we built DLM Dashboard at Redgate Software. We recognized that tracking and being aware of what’s changed is important. It allows you to respond, and respond quickly if needed, but doesn’t prevent changes.

More importantly, you can track down those items that might have drifted and feed the changes back to development, even if they appear to be spread out.


In Practice

How does this work? Well, download DLM Dashboard and get started. It’s free, and each installation can monitor up to 50 databases.

Once that’s installed, you can add the various databases that make up your pipeline for monitoring. For example, I’ve got a SimpleTalk pipeline with four databases in my environments, as shown here:


These databases serve the following purposes:

  • Integration – Gets an integrated build of the database from CI, with all developers’ changes at the head of the development branch.
  • Testing – A particular version of Integration is deployed for additional testing.
  • Acceptance – A pre-production environment, where the database upgrade is practiced.
  • Production – The live, final environment for database changes.

I could have other databases in my pipeline (DR, training, beta, etc.), or even multiple databases at each stage. However, this is a fairly simple pipeline.

Now, let’s suppose I realize we have an issue in production. I need to change a stored procedure that’s got a bug. Someone forgot a WHERE clause in this procedure, and I need to make an emergency fix.


Let me add the missing code and recompile the procedure.
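The fix might look something like this sketch; the procedure, table, and column names here are hypothetical stand-ins for the real objects:

```sql
-- Hypothetical hotfix: restore the missing WHERE clause
ALTER PROCEDURE dbo.GetOrderDetails
    @OrderID INT
AS
BEGIN
    SELECT OrderID, OrderDate, Qty
    FROM dbo.OrderDetail
    WHERE OrderID = @OrderID; -- the clause someone forgot
END;
```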


This is a new version of my procedure on production (note the RED outline, thanks SQL Prompt). I have drift. My schema is not in the same state as it was. Production now has a red note, with the drift image.


This new schema isn’t recognized by DLM Dashboard. The version of each object, of all the code, is stored and tracked. The previous version was 50.17, which is the version on Acceptance. If I click “Review”, I can see the details. At the top, I see the old schema version, as well as a summary of what changed and by whom. I can name this schema if I want, and add comments about the changes.


Scrolling down I can see the actual code that changed. Using the style that SQL Compare and other tools use, I see the additions to the code highlighted, showing what was in the previous version as well.


At this point, I can copy this code, put it back into development, and run it through the same cycle as all my normal development changes, including the testing that might ensure my “hotfix” is actually the change that was needed.

Depending on my development process, this might be deployed as a new change, or it might be ignored when the next deployment takes place. Either way, my process should be able to handle this appropriately. If not, I need to better manage my development.

I think DLM Dashboard has a number of uses, but certainly the capture of changes to production, ensuring you’re aware of what’s changed, is a valuable one.


I hope you’ll download it today, since it’s free, as in beer, and it’s worth a test in your environment to see how it can help you.


Rename a Primary Key–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Having system-named objects is one of those things that people may debate, but as you move to more automated and scripted processes, they can cause you issues. In the case of a Primary Key (PK), if you do something like this:

      CREATE TABLE dbo.OrderDetail (
          OrderID INT IDENTITY(1, 1) PRIMARY KEY,
          OrderDate DATETIME
      );

What you get is something like this:


When you compare that with the same table in another database, what’s the likelihood that you’ll have the PK named PK__OrderDet__D3B9D30C7D677BB4? Probably pretty low.

This means that if you are looking to deploy changes, and perhaps compare the deployment from one database to the next, the tools will think you have different indexes. Most comparison tools will then want to change the index on your target server, perhaps by dropping and recreating it, or the replacement might be something that performs much worse.
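One way to avoid this problem in new code is to name the constraint explicitly when the table is created, so every database gets the same name. A sketch:

```sql
CREATE TABLE dbo.OrderDetail (
    OrderID INT IDENTITY(1, 1),
    OrderDate DATETIME,
    -- An explicit name means no random suffix to differ between databases
    CONSTRAINT OrderDetail_PK PRIMARY KEY (OrderID)
);
```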

What we want to do is get this named the same on all databases. In this case, the easiest thing to do is rename the constraint on all systems. This is easy to do with sp_rename, which is better than dropping and rebuilding the index.

I can issue an easy query to do this:

EXEC sp_rename 'PK__OrderDet__D3B9D30C7D677BB4', 'OrderDetail_PK';

When I do this, I see the object is renamed.


This table has over a million rows, and while that’s not large, it does take time to rebuild the index. With a rename, the change is to the metadata and takes a split second.
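To find other system-named key constraints that might be worth renaming, a query against the catalog views can help. This is a sketch; sys.key_constraints flags system-generated names with is_system_named:

```sql
-- List PK/unique constraints whose names SQL Server generated
SELECT t.name AS table_name,
       kc.name AS constraint_name
FROM sys.key_constraints AS kc
JOIN sys.tables AS t
    ON kc.parent_object_id = t.object_id
WHERE kc.is_system_named = 1;
```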


These quick, easy administrator items are great to blog about. As an exercise, if this table had tens of millions of rows, how much longer would the index rebuild take?

This is a great topic for you to write about (or learn about) and show how you can better administer your SQL Server databases.

ReadyRoll and Octopus Deploy Webinar

I’ll be hosting a webinar next week, Thursday, July 28, with Daniel Nolan (t) (founder of ReadyRoll and fellow Redgate employee) and Damian Brady (b | t) of Octopus Deploy.

We’ll be talking deployment, with demos to show you how to work with ReadyRoll for your deployment and then how to build a package to be automatically deployed through Octopus Deploy. Both are great products that I enjoy working with.

Register today if you’re interested in seeing how migration based development and deployment works with ReadyRoll and Octopus Deploy.

Growth or Control – #sqlpass

I’m disturbed.

I read an announcement from the PASS organization this week that bothers me. I am less upset with the content than the manner in which it was presented, though I am not pleased with either.

Making SQL Saturday Sustainable was released this week, with a few changes to the SQL Saturday franchise. You can read the piece, but there are two fundamental changes being made for 2017 and beyond:

  • [PASS] will be reducing this to amount to $250 and targeting those cities/locations that really need the financial support to grow, rather than well-established events.
  • [PASS] is implementing a new 600-mile radius and concurrent event timing restriction to reduce competition between individual SQLSaturday events in Canada/USA.

There are comments on these notes at the bottom of the announcement.

Heavy Handed and Ham Fisted

I’m not sure why, but it seems that no matter who has been elected to the PASS Board of Directors, they believe they must make decisions without communicating, collaborating, or consulting the community. Decisions made year after year, on various topics (Summit location, pre-con speaker criteria, code of conduct, org name change, etc.), are announced without any sort of discussion or communication with the community they serve.

We are a community of diverse and disparate individuals. We won’t agree on many topics as a group, but we can discuss and debate them. We can have open and transparent disagreement about how to handle situations. Ultimately the board must decide (after a vote), but I’d hope and expect, really, I demand, that they solicit views before making decisions. It’s ridiculous in this age of instant communication, of no-cost publishing, of email votes, to not propose changes, gather feedback, and then make a decision.

Doing otherwise is as disrespectful as it is heavy handed.

This is not specific to this decision, but rather the way the PASS BoD has operated across a decade. I will say that I don’t think the Board makes all bad decisions. But they have made some that could have been avoided with a little more collaboration with those affected, us, the voters.

The Decisions

What about these specific decisions? Well, I have a few thoughts.

I do think these decisions are intended to slow the growth of SQL Saturdays. There is something to be said for this, in that sustaining fewer events for the long term is easier to manage and may not require replacing Karla Landrum as an evangelist. This benefits vendors, including my employer, but it’s not great for attendees. There will be fewer chances to learn, and smaller events in places that will never grow to 200+ people will likely die. Or move to a Code Camp/different model, abandoning SQL Saturday. That makes me sad. This appears to be less about growth, and mostly about controlling fewer events.

First, on the financial aspect. Let me say I know there are events that raise well over $20k (I have no idea what the record is). I also know there are events that have only raised $2-3k. The low end might even be lower. It doesn’t matter what you think about event cost; some events don’t need $500, and there is a limit to what PASS can support. I think not committing to funds for every event is good. However, what I’d like is flexibility.

Don’t set a hard limit, and note that events could get up to $500, but PASS reserves the right to not provide any financial support.

Now, the 600mi radius for events. Well, it’s not just 600mi, but also 600mi + a weekend either way. Let’s see 600mi from a few cities. Rob Volk tweeted this image:


This is 600mi from Nashville and Salt Lake City. That means that most of the US couldn’t have an event in a month with these two, at least not if there were a holiday weekend. Certainly some large cities are still available (major TX cities, the NY area, etc.), but most of CA, the West, and the Midwest are all cut off.

Know what cities are within 600mi of each other? Vancouver and Portland. These two cities have held events book-ending the PASS Summit to allow speakers from overseas (or US speakers/attendees) to go to one or both events around the Summit. That’s now cut off.

There are probably others, but I will say this: the hardest part of getting an event going is the venue. There are more than a few organizers and events that have worked out deals with schools or other venues. Baton Rouge and Oklahoma City? One might need to move. Those markets have had events in the same month, within 600mi of each other.

I have always thought the 400mi limitation was ridiculous. Events can happen within 400mi if they can manage to find speakers and raise funds. Rather than arbitrarily limit events by distance, why not limit by viability, or by fundraising?

I love that we’ve grown to over 100 events in a year. I think it’s great, but I also think the competition and effort put into making large events is crazy. I don’t like 10+ tracks. I don’t like 30+ speakers. I don’t like $20k spends that include fancy dinners.

The aim here is to train people, teach them something, inspire them, and expose them to options for SQL Server, including vendors. I get that vendors struggle to justify going to so many events. That’s fine. Make choices. I suspect vendors will support larger events and smaller ones will struggle.

I’ll put down my thoughts: smaller events, just be smaller. Don’t buy dinner. Don’t make shirts. Don’t try to provide a great meal for attendees.

Teach them. That’s what the event is about. Stop there and you’ll be fine. Everything else is nice to have, but in no way necessary.

And PASS, please, stop believing you need to earn your position and make hard decisions as a small group. Work with the community, with proposals, not decrees.

Deploying Dynamic Data Masking Changes

I wrote a short piece over on the Redgate blog that covers how SQL Compare can catch and build a script to move Dynamic Data Masking (DDM) code from one database to another.

As the surface area of SQL Server programming changes, moving these items from one instance or database to another becomes important. I know that Microsoft doesn’t always make this easy or smooth, so we need tools to help.

DDM changes are fairly simple, but I can see these being implemented in lots of code, as they’re an easy way to mask or obscure some data in an app without code changes. However, we need to be sure these changes get deployed from development to production.

Or production to development, if you’re the kind of person that works that way. Don’t worry, SQL Compare can help there, too.
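As an illustration of how small these changes are, masking a column is a single ALTER statement. This is a sketch with assumed table and column names:

```sql
-- Mask an email column for users without UNMASK permission
ALTER TABLE dbo.Customer
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');
```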

Adding the ReadyRoll Extension in VSTS

As I’m doing a little development with ReadyRoll, I’m documenting some of the things I learn about the product.

A few weeks ago I needed to do a demo with ReadyRoll for Redgate Software. I built a new project, and wanted to perform a build in Visual Studio Team Services (VSTS). However, I realized that I needed to install the extensions for my project to work. This is pretty easy, but worth a short post.

If I create a new VSTS project, I get a blank, empty project page.

If I navigate to the Release page, of course, I have nothing there.


And when I look to add something to a release definition, I have a fairly small list of tasks from which I can choose. These are in alphabetical order, and there are no “R” tasks.


I do need a build set up, but once that’s done, I still have the same list of three items. I want to add the ReadyRoll extension. To do that, I’ll click the shopping bag in the upper right of the screen. Once I do that, you’ll see I can browse or manage extensions.


I’ll choose Browse, which opens the marketplace. There are extensions for many, many developer tasks, and I’d encourage you to look through the list.


In my case, I’m looking for ReadyRoll, so I’ll enter Redgate to search.


There are three results, the first of which is ReadyRoll.


I can click on the box, and I’ll get more details on the extension.


The extension is free, but I do need ReadyRoll on my desktop to build a ReadyRoll project in Visual Studio. As the page notes, you can download a 28-day trial to play with the product if you like.

If I click “Install”, I’ll get a dialog asking me to confirm the installation to this account. Note there is a drop-down if you have access to multiple accounts. The permissions needed are also listed.


Once this is installed, I get a confirmation:


And now when I go to add a task, I see ReadyRoll listed.


I click “Add” and then can configure the task.


That’s how easy it is to add ReadyRoll, or really any extension to your VSTS account.

Building locally from VSTS

One of the things that you need with a Continuous Integration server is the ability to build your software on some computer system and verify things work. With Visual Studio Team Services (VSTS), hosted online, this seems to be a challenge.

The VSTS team has thought of this and includes the ability to target your builds at a hosted system in the VS cloud, where you don’t need anything installed. However, the hosted build agents and servers have a limited amount of software that’s supported. SQL Server isn’t on that list.

However, there is another option. If you go to the control panel for your account, and click the Agent Pools tab, you’ll see something like this.


Notice the “Download agent” link. That’s what you want. As you can see, I’ve been testing, and I have agents set up and registered on 4 machines. Here I’m going to add a fifth.

Once I download the file, I’ll extract it to see a list of files and folders.


What I first want to do is configure the agent, so I’ll run ConfigureAgent.cmd from a command prompt. Note, I need to do this from an administrator-level command prompt.


In my case there are some settings, but I’m overwriting them since I rebuilt my machine. Once I hit Enter, I then get the chance to authenticate.


After this, the old agent settings appear. However, since I’ve rebuilt and renamed this machine, I’ll change them. I answer a few more questions about configuring the agent properties. At the end, I’ll also authenticate to the Azure cloud once again.


Now that things are configured, I can run the agent. I could set this up as a service, but I prefer to know whether the agent is (or isn’t) running and see the output. I have set this up as a service before, and it works fine.

All I have left to do is run RunAgent.cmd, and I have an agent running locally that takes instructions from VSTS.


If I go back to my control panel, I see a new agent.


I can also trigger a build. I happen to have one setup for a project that points to local instances. Here’s the build definition, which uses the VSTS Extension from Redgate to build a database.


I can click “Queue Build” and a build will start.


I see the build running in the console:


And online in VSTS if I want. The Build Agent sends logs back to the VSTS service as it’s working.


This is a basic way to get a VSTS build working on your local machine with an agent. There is a lot more to configure if you need to, and if you need multiple agents, you can certainly pay for them with a different VSTS plan.

MeasureUp 2016

I’m heading down to Austin next weekend for MeasureUp 2016. This is a Microsoft stack conference that looks to help people build custom business software. There are over 20 sessions, most of them devoted to .NET topics. I think I’m most excited to be in some nice company and have the chance to sit in on some interesting talks.

I’ve got the one database talk, Bringing DevOps to the Database. I’ll be looking at some of the ways in which we can change database development to be more agile, to fit in more with the way that most application developers build code.

This builds on work I’ve done in the past, as well as a lot of work that Redgate has done across the last 4-5 years.

If you’re in the Austin area, and you develop software, maybe you want to come down on Saturday for the event. It’s low cost, and the chance to meet with other software developers (and get out of the heat).

Register today for MeasureUp 2016 and come shake my hand.

Getting Table Change Scripts–#SQLNewBlogger

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

One of the really basic things I think everyone should understand is how to get scripts from Management Studio (SSMS) and save them. In fact, I’ve written that everyone should use this button and never execute their GUI changes directly. Capture the script, save that, and automate things.

However, that’s not what this post is about. This post is about how you get a script to look at changes, or better understand how SSMS might implement your changes.

Editing a Table

Let’s say that you want to redesign a table, so you edit it in the SSMS Table Designer. Here, you can see I have a small table with a few fields.


I want to rename the field with incorrect casing as well as insert an OrderDate column in the middle. I have made those changes below.


Now, I’m not sure how these changes will be made in SSMS, and I certainly want to be careful in production. We want a script we can examine and approve.

Certainly, I could use something like SQL Compare to generate a script between two databases. That would include transactions, error handling, and more. That’s my preferred method. However, since not everyone has SQL Compare (a mistake!), let’s just use SSMS.

Instead of saving, I’ll click this button.


Or I’ll go to this menu item.


Once I do that, after a warning, I get a script dialog.


I can now save the script and then open it back up in SSMS. I can see all the changes that the scripting engine thinks we should make.


This allows me to learn about one way to make these changes, as well as see things that might concern me, such as poorly named constraints and indexes.
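For changes like inserting a column in the middle of a table, the generated script typically uses a table-rebuild pattern. This sketch shows the shape of it, with assumed column names; the real script also includes error handling and recreates any constraints and indexes:

```sql
BEGIN TRANSACTION;

-- Build a new table with the desired column order and names
CREATE TABLE dbo.Tmp_OrderDetail (
    OrderId INT IDENTITY(1, 1) NOT NULL,
    OrderDate DATETIME NULL,
    Qty INT NULL
);

-- Copy the existing rows, preserving identity values
SET IDENTITY_INSERT dbo.Tmp_OrderDetail ON;
IF EXISTS (SELECT * FROM dbo.OrderDetail)
    INSERT INTO dbo.Tmp_OrderDetail (OrderId, OrderDate, Qty)
    SELECT OrderID, OrderDate, Qty
    FROM dbo.OrderDetail WITH (HOLDLOCK TABLOCKX);
SET IDENTITY_INSERT dbo.Tmp_OrderDetail OFF;

-- Swap the new table in
DROP TABLE dbo.OrderDetail;
EXECUTE sp_rename N'dbo.Tmp_OrderDetail', N'OrderDetail', 'OBJECT';

COMMIT;
```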


This is a great productivity and learning technique, but also a core thing I’d hope most DBAs knew. You could certainly write about how you use this, or how this might have been helpful in a situation. Showcase your knowledge on this topic with the #SQLNewBlogger hashtag.

Getting Started with Database DevOps, DLM, and New Tools

Next Tuesday is the next installment of our DLM series from Redgate. I’m off the hook, with Grant and Arneh showing off Git, TeamCity, and Octopus Deploy as the main tools. Register today and see how you can get started.

We’ve been running this series for a year, with a focus on showing different tools being used to deploy database changes. Of course, we do showcase some Redgate tools from the Toolbelt, but the idea is to integrate with various other common development tools.

If you’re looking for an easy way to get started, consider downloading and trying DLM Dashboard, a free tool to audit and monitor your database schemas.

If you’ve missed any, you can see these combinations of Version Control, CI, and Release tools:

                 Version Control     Continuous Integration/Build    Release Management/Deployment
  Webinar link   Git                 TFS Build in VSTS               Microsoft Release Management
  Webinar link   TFSVC               TeamCity                        Octopus Deploy
  Webinar link   TFSVC               Jenkins                         Octopus Deploy
  Webinar link   Subversion (SVN)    TeamCity                        Octopus Deploy
  Webinar link   TFSVC               TFS Build in VSTS               Microsoft Release Management

Feel free to watch any of these, or come to the next webinar and ask questions live. There are more resources in our DLM library.

DLM really can help smooth your development and deployment process, reducing mistakes, and allowing you to deliver enhancements to customers faster.