Custom Schemas

One of the things I seem to see more and more is an expanded use of schemas in development efforts. This isn't something I had done much in the past, as almost half of my career was with SQL Server 2000 or earlier, where we didn't really have schemas and we were loath to tie too many objects to owners. However, schemas have been a first-class citizen in every SQL Server version since 2005.

I've seen a number of software packages use schemas, but more as a way of separating out development efforts than as any well-thought-out architecture. However, there have been a few interesting uses of schemas as a way of duplicating tables for various purposes. ETL and auditing are two of the more common uses, though there have been others.

I ran into an approach recently that I found interesting. A group of developers had added a new schema to separate their custom objects from all other objects in the database. They did this because the database was created and maintained by a third-party application. They wanted to create new objects, but keep them separate from the vendor's objects, so they used a new schema.
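If you've not seen this done, it's simple to set up. Here's a minimal sketch of the idea, with hypothetical object and role names:

  -- Our objects live in their own schema, away from the vendor's dbo objects.
  CREATE SCHEMA custom AUTHORIZATION dbo;
  GO
  CREATE VIEW custom.OrderSummary
  AS
  SELECT OrderID, OrderDate, Total
    FROM dbo.Orders; -- the vendor's table, left untouched
  GO
  -- Security can then be applied at the schema level rather than per object.
  GRANT SELECT ON SCHEMA::custom TO ReportReaders;

A nice side effect is that when the vendor upgrades their application, everything in the custom schema is easy to identify as yours.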

I thought this was a good idea and wondered who else might be doing something similar. This Friday, I want to ask the question of the rest of you.

Do you have a custom or reporting schema as a way to separate objects from those that aren’t under your development control?

Let us know if you've created some interesting use of schemas. Or are there reasons you don't use schemas? Have you never considered the advantages of using schemas to group objects in their own section of the database?

I think schemas can be a powerful way of grouping objects, applying separate security, and organizing your database in a way that makes it easier for developers and report writers to manage the complex objects that may clutter up your database.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Using Automated Tests to Raise Code Quality


Agile development practices can speed the development of applications and increase the rate at which you can deploy features for your customers. But unless you include a high level of test coverage in your code, these practices can also increase the number of bugs that are released. Databases can be particularly challenging for developers. This session examines database refactorings that introduce bugs, which are detected by automated tests. This approach allows developers to rapidly fix their code before a customer is affected.

Level: 200


These are the demos shown in this talk.

  • Adding test data inline
  • Adding test data in a procedure
  • Adding test data from a separate set of tables
  • Exclusions to SQL Cop or other tests with extended properties
  • Using FakeFunction (see the sketch after this list)
  • Using SpyProcedure (also sketched below)
  • The boundary issues with multiple requirements for a function
  • 0-1-Some testing
  • Catching dependencies
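If you can't attend, here's a flavor of the FakeFunction and SpyProcedure demos. This is a minimal tSQLt sketch, not the actual demo code; the objects under test (dbo.GetTaxRate, dbo.LogAudit, dbo.GetOrderTotal) are hypothetical:

  EXEC tSQLt.NewTestClass 'testOrders';
  GO
  -- The fake must match the real function's parameters and return type.
  CREATE FUNCTION testOrders.Fake_GetTaxRate (@State char(2))
  RETURNS decimal(4,2)
  AS
  BEGIN
    RETURN 0.10; -- a predictable rate for the test
  END;
  GO
  CREATE PROCEDURE testOrders.[test GetOrderTotal logs exactly one audit row]
  AS
  BEGIN
    -- Swap the real function for the fake...
    EXEC tSQLt.FakeFunction @FunctionName = 'dbo.GetTaxRate',
         @FakeFunctionName = 'testOrders.Fake_GetTaxRate';
    -- ...and record calls to the audit procedure instead of running it.
    EXEC tSQLt.SpyProcedure @ProcedureName = 'dbo.LogAudit';

    EXEC dbo.GetOrderTotal @OrderID = 1;

    -- SpyProcedure logs each call to a _SpyProcedureLog table.
    DECLARE @calls int = (SELECT COUNT(*) FROM dbo.LogAudit_SpyProcedureLog);
    EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @calls;
  END;
  GO
  EXEC tSQLt.Run 'testOrders';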


Here are the downloads for the talk.

Learn to Use Filegroups

This editorial was originally published on May 12, 2011. It is being re-run as Steve is away at SQL Relay today.

In SQL Server, filegroups are a management technique that I don't see many people using. It's amazing how many people ask questions about filegroups on the discussion forums, often unsure of how they fit into a well-architected SQL Server. I have tended to use filegroups mostly as a space management technique, when I need to add more disks to my server, but they can be used in many more places.

We continue to grow our data sizes all the time. While many databases are still measured in single digits of gigabytes (or smaller), it is fairly common to find database servers with over a terabyte of disk space. Our disks grow larger and larger, but data seems to grow faster than disks, requiring larger storage subsystems all the time.

While our storage grows larger, the tolerance for delays shrinks and demands for better performance increase. That means that data professionals need to be more cognizant of not only how their code is written, but also how they design storage. Tiering storage is one idea that I think has merit, but one that requires some planning.

In SQL Server, we can't split a table across filegroups. Or can we? We can partition a table (Enterprise Edition and higher), which allows us to manage performance and storage appropriately. There is also the recommended practice of keeping only system objects in the primary filegroup and using separate filegroups for user data. That allows you to bring a database partially online (again, Enterprise Edition only) while you restore other filegroups.
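As a quick, hedged sketch (the database name, file path, and sizes are hypothetical), separating user data onto its own filegroup looks like this:

  -- Add a filegroup and a file, keeping PRIMARY for system objects.
  ALTER DATABASE Sales ADD FILEGROUP UserData;
  ALTER DATABASE Sales ADD FILE
    ( NAME = UserData1,
      FILENAME = 'D:\SQLData\Sales_UserData1.ndf',
      SIZE = 10GB )
  TO FILEGROUP UserData;

  -- New user tables go on the new filegroup.
  CREATE TABLE dbo.Orders
  ( OrderID   int NOT NULL,
    OrderDate datetime2 NOT NULL
  ) ON UserData;

  -- Enterprise Edition piecemeal restore: bring the primary filegroup
  -- online first, then restore the remaining filegroups later.
  RESTORE DATABASE Sales FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backups\Sales.bak'
    WITH PARTIAL, RECOVERY;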

This isn’t the first thing I would recommend you learn about SQL Server, but as you advance your knowledge, you should better understand when and how filegroups can help you. You will use them at some point and being comfortable with a filegroup restore is one of the skills that separates the accidental DBA from the data professional.

Steve Jones

Querying Yesterday

One of the new features coming in SQL Server 2016 is the temporal table. It’s an interesting concept, one that many businesses have wanted for years. If you’re not sure what this is, we’ve got a collection of resources on Learning about Temporal Tables (and other features). Check them out and we’ll keep adding content as we find it.

Temporal tables give us some amazing capabilities, but as with anything in computers, there is a cost for the capability. In this case, we can look back at the view of our data as of a particular point in time. In many ways, this means we don't need to bolt on, or query into, auditing data.
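If you're curious what this looks like, here's a minimal sketch with a hypothetical table, based on the syntax in the current CTP:

  -- A system-versioned (temporal) table; SQL Server maintains the history.
  CREATE TABLE dbo.Customer
  ( CustomerID int NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
  )
  WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

  -- Ask for the data as it looked at a point in time; no audit tables to join.
  SELECT CustomerID, Name
    FROM dbo.Customer
    FOR SYSTEM_TIME AS OF '2015-09-01T00:00:00'
   WHERE CustomerID = 42;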

However, there are other costs. As with any auditing system, we potentially have substantial amounts of data to manage somehow. Certainly we need to choose which tables to track. Even if we don't have to build a process, we will have to deal with the cost of storage and provisioning, as well as determining retention periods. We also need to depend on the system times of our various instances being in sync.

I think this is a needed, and very useful, feature. I'm sure there will be bugs to patch, as well as enhancements to be built. We'll find those over time, but I think this is one of those features we'll come to see as essential in a decade, wondering how we ever built systems without it.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.1MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Work of the Ancients

I was reading a post from someone recently who noted that they didn't worry too much about the architecture of the system, since it wouldn't likely last very long. The poster commented that many systems are replaced within a few years.

In my experience, that's not usually the case. In fact, while I don't expect many applications I've worked on to last for 25 years, I suspect many of them will exist for ten years or more, especially if they are receiving regular development resources. With that in mind, I wanted to see how your databases are faring these days. I suspect a database might last longer than a particular application, as it seems most organizations are loath to ever let data go.

What’s the age of your oldest, regularly used database?

I'm looking for an age in years. If the answer is less than one, I'm not sure I'd call that old at all. I am sure many of your systems are older, and might have changed, but let us know the year when the system went into production.

I can tell you the SQLServerCentral systems are old in some ways, not so old in others. We’ve grown from one database to three over the years. The oldest database is circa 2003. Some of the data from that one was migrated to other databases around 2007. We’ve got data in the system since 2001, but we’ve certainly changed structures and storage over the years.

I'd guess that most of you working in companies older than ten years will have a database that's at least that old. However, let us know this week, and if you have any interesting notes, feel free to share them.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Data Masking for Convenience

I was at Microsoft in Redmond recently and heard an interesting comment from a SQL Server developer. I was debating the data masking feature with a friend, and we were torn on the value of this for various situations we’d each encountered in the past. There are some restrictions, and it doesn’t seem that data masking is really offering a lot of security.

The Microsoft developer, however, noted that this isn't really a high-security feature. It's a developer feature. The design of data masking is to prevent the same code from being rewritten over and over by application developers. The use case is really to help systems that display partially obscured data, like those that print off part of an account number, ID number, credit card number, etc.

If you read up on the restrictions, this makes sense. Viewed as a way to make development more convenient, the feature works. I hadn't thought about that use case, but the more I consider it, the more I'm sure that data masking removes a bunch of code that developers might otherwise re-implement themselves, perhaps with highly variable levels of quality. It also removes the chance that application developers will accidentally pull sensitive data to a client and (poorly) implement mask replacement there.
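As a hedged sketch of what this looks like (the table and role names here are hypothetical):

  -- Dynamic Data Masking: readers without UNMASK see XXXX-XXXX-XXXX-1234.
  CREATE TABLE dbo.Accounts
  ( AccountID  int NOT NULL,
    CardNumber varchar(19)
      MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)') NULL
  );
  -- The masking happens in the engine, so there's no client-side
  -- replacement code to write (or to get wrong).
  GRANT UNMASK TO AppAdmins; -- only for those who need the raw values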

I think this feature is being mis-marketed a bit, really to increase sales to executives and management. I'm sure there isn't anything we can do about that, but I'd love to see technical documents and information about this for developers and DBAs. Give us a more realistic use case and better guidance. I think if we got that for many features, there might be more positive responses and greater interest from technical professionals in the changes to the SQL Server platform.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

Trying Spoon

I ran into Kenji Obata of Spoon earlier this year and we ended up chatting about their technology. If you've never heard of them, they use container technology to allow applications to run on a Windows host, separated from each other. The concept is similar to Docker, but Spoon is more client focused. They've since expanded the offering as Turbo, which I think some of you might be interested in.

I'll talk about Spoon for the containers, but Turbo is the same technology and idea.

As an example, you can run Spoon containers on your host machine that allow you to have IE7, IE8, and IE9 running at the same time, all connecting to the same website so you can see how each renders it. In fact, their Browser Sandbox has all sorts of choices, all of which can be completely contained and running together.

There’s a lot more you can do, but I wanted to get you started with a quick post on how you can use this with SQL Server.

A SQL Server SSMS Container

Browse over to the Spoon site and you'll see a text box asking what you might want to run. Type in SQL Server, and you'll get some choices.


I’ve got SQL Server 2014 on my laptop, but not SQL Server 2012. Imagine I want to see how SSMS 2012 might compare to SSMS 2014.


I run the SSMS 2012 image. The site wants me to log in, and you can quickly create an account. They do charge for the service, but you can try it out for free. An account gives you the ability to save your images and restart them later with the saved settings and configuration. You do need to download a small runtime if you haven't done that in the past.


I'll see a small pane pop up on the website as the image downloads. This doesn't require me to acknowledge anything: no UAC prompts, no admin privileges.


In a few minutes, SSMS pops up. I can connect to my local 2014 instance and run SSMS 2012 alongside the SSMS 2016 CTP I have installed.


Many of you will say, "Steve, you can run two instances of SSMS now." That's true, but I'd have to install them both on my machine. This doesn't require an install. It's inside a container that hasn't affected my machine, other than to put a container file on it. I can see these from the command line.


My SSMS container is running. If I close it, then the container will stop running.


If I then restart it from the command line (I could also use the website), my local container file will restart.

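From memory, the CLI verbs mirror Docker's; something like the following listed and restarted my container, though the exact commands may differ, so treat this as a sketch:

  spoon ps          # list the containers on this machine
  spoon start <id>  # restart a stopped container by its id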

I can achieve some separation here from my local machine. With a subscription, I can save my image file, altering it to include plugins (like SQL Prompt), licenses, and settings for my CMS or other servers. When I restart it, I have a separate sandbox running SSMS.

Amazing Possibilities

I use Spoon/Turbo right now to run multiple instances of Chrome. I have four Gmail accounts and don't want to sign in and out of each one, so I have each container configured with a different account, and I can work with all of my accounts at the same time.

I could see this providing quick access to a private SQL Server Express database in a container. I could have a container that connects to my system with privileged accounts, providing some separation from my host OS in case of a malware attack.

There's a container with Chocolatey in there as well, so you can easily install whatever you like inside a container and save it off for later use.

I'm sure there are other possibilities, and I plan to keep experimenting to see what else I can do with Spoon/Turbo containers that will give me flexibility across machines, require fewer resources than VMs, and provide separation from host machine dependencies.

T-SQL Tuesday #70 – The Enterprise

It's hard to believe this is the 70th edition of T-SQL Tuesday. I haven't missed many, and I've enjoyed them all. I hope more of you are participating in this monthly blog party started by Adam Machanic (b / t). Whether you write today or at some time in the future, T-SQL Tuesday topics are good ways to showcase your knowledge.

This month’s invitation comes from Jen McCown, of Midnight DBA. Jen asks us to write about managing an Enterprise of SQL Servers. That’s a good topic, since many of us struggle to manage multiple instances. Whether you have 5 or 500, there are some good ideas that you might implement in any environment, and I’m looking forward to reading what people have to say.

Decoupled Consistency

A long time ago I was thrust into the role of managing hundreds of SQL Server instances at a large company. I'd managed dozens of machines before, but this was a whole new level of scale. What's more, the majority of machines were set up completely independently, with no common standards other than a mandate to try to keep versions close to each other. That was a challenge in and of itself.

The only common tool in use was Patrol, which was mainly used to monitor performance counters. We didn't have the SQL Server-specific modules, so we could really only get general performance counters, and even those were difficult to access when thousands of hosts were being monitored.

I brought an idea with me from a previous job that had served me well with a handful of disparate instances. We'd set up each instance consistently, covering installation, configuration, and monitoring, but we'd decouple each instance from the others. Our goal was for each machine to be capable of operating independently of all the others.

We had found that central servers go down, that we had no good way of tracking the status of machines when that happened, and, most importantly, that there are always exceptions.

With this in mind, we:

  • built a procedure to install SQL Server, including a number of post-installation scripts to standardize settings. This allowed our server build people to easily handle SQL Server installation as part of their job. These days I'd use something like FineBuild to make this easy.
  • set up a DBA database on each instance that monitored jobs and other important status for that instance. If the instance was up and SQL Agent running, we'd know its status.
  • performed basic monitoring of some key performance counters for a quick trend of performance over the last week, similar to what SQL Monitor shows. Getting a quick snapshot was quicker than accessing central monitoring, especially in a crisis.
  • assembled a report for each instance every day, calling out exceptions at the top and leaving expected data below. We needed documentation for our ISO certification, and our auditors loved this.
  • used a central server to assemble the reports from all instances and compile them for the DBAs to review. All exceptions were at the top, and we used a left join to compare the list of instances with current reports; any instance missing a report bubbled to the top as an exception (see the sketch after this list).
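That last check is a simple pattern. Here's a hedged sketch of the kind of left join we used, with hypothetical table names:

  -- Instances with no report today bubble to the top as exceptions.
  SELECT i.InstanceName,
         'Missing daily report' AS ExceptionNote
    FROM dbo.Instances AS i
    LEFT JOIN dbo.DailyReports AS r
      ON  r.InstanceName = i.InstanceName
      AND r.ReportDate = CAST(GETDATE() AS date)
   WHERE r.InstanceName IS NULL;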

These were all good, simple ideas that allowed a team of 3 DBAs to manage 400-500 instances of SQL Server. We were flexible enough to handle the needs of mission critical Finance instances as well as often changing development machines. Our data was simple and always available for us to give to clients when they had questions.

Most importantly, we built a system that allowed us to deal with exceptions, but not review the mundane stuff. Our time was precious, and that’s the case in any enterprise situation. You need to focus on the 10-20% of systems that really need regular attention while ensuring the other 80-90% of them are still being maintained.

How Many Times Will You Change a Password?

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as part of #SQLNewBloggers.

If you create a login and the user can’t log in, how many times will you change the password?

It turns out I’ll do it 5 times.

I was setting up a new installation of DLM Dashboard on a test machine. The setup asks for an account to run under. I dislike using my own account (even for tests), so I flipped over to SSMS and added a new login, entered a password, unchecked "require change", and made it a sysadmin. I clicked OK and returned to Chrome.

I entered the password and hit “Add”, only to get the “login failed” message for the user. Surely I mistyped something, so I typed the password again, with the same result.

Maybe I mistyped it in SSMS. Go back, change it in SSMS by adding a character (let's call this the first change), and then hit Enter in Chrome.


Maybe I mistyped it. Go back to SSMS, change the password again (now two changes), this time making it simpler. Uncheck the "policy check" and try again.


Hmmm. I'm confused. Let me type a password in Notepad. I'll copy and paste that into SSMS (now three changes) and into Chrome.

Still a Failure.

At this point I’m confused. Why can’t a new user log in? I’m wracking my brain.

Maybe I have a sticky keyboard key? I'll change the password again, this time to five of the same character (now four changes). I go slowly, typing the same five characters into Chrome.


What’s the cause? I’m starting to wonder if perhaps logins aren’t allowed on a protocol, and it hits me. SQL Authentication.

I go to the instance properties and realize I never allowed SQL Authentication when I installed SQL Server. After all, this is a test machine.
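In hindsight, one query would have saved me four password changes:

  -- 1 = Windows Authentication only, 0 = mixed mode (SQL logins allowed)
  SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;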

Change that and restart SQL Server. Change the password again (now five changes) to a decent password that won't be guessed if someone gets to this machine.

DLM Dashboard setup proceeds.


We all make mistakes. We do things wrong. Write about how you learn and figure things out. This is a good story and lesson for me.


I should know better.

The Case for Upgrading

I heard from a member of the community that’s still on SQL Server 2005 recently. Yes, 2005. That’s an old, unsupported version, but they’ve got some Access/VBA apps that work well and aren’t worth investing a lot of development effort in. The person noted that they could only upgrade to 2008 R2 without rewriting code, and they didn’t want to move from one unsupported version to another.

I suspect a few of you are in this situation, though it seems many people I know that still run 2005 (or 2000) also have newer versions of SQL Server running. As there is a need, or opportunity, to replace and upgrade instances, they do so.

In this case, it seems there's not a good case for upgrading. A change to newer code would be expensive, and licensing costs have also risen dramatically. If the existing hardware were an older, processor-licensed system, perhaps dual-core with 8GB of RAM from, let's say, 2008-ish hardware, that's a quad-core machine at most. While you might be able to run on fewer cores with today's processors, you'd still be looking at a single quad-core CPU as the minimum for licensing and hardware.

There’s also the decision of upgrading to 2014 with new code. That could be a significant effort, with resources spent here instead of on new development. Is that worth it? I think it’s hard to decide without knowing more.

Is it worth it to upgrade to 2008 R2? I’m not sure. While you get some enhancements, you’re still moving to a version that will go out of mainstream support in a year or two. I’d say this isn’t worth the cost.

Ultimately I think Microsoft is shooting itself in the foot a bit with the minimum core requirements and larger licensing costs, without giving customers flexibility. I think Microsoft would be better served by letting customers license the scale they need.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.