Virtual Failback

I saw a very interesting tweet from Scott Stauffer asking this: “… looking for quick’n’easy way to make workload virtual w/ fail-fast method to physical if performance doesn’t meet expectation.” This was part of a conversation Scott was having about moving from his current physical environment to a virtual one. That’s something I think more and more people are being asked to do, whether they do it inside their own data center or move to some type of hosted or cloud environment. There is increasing pressure from management to consider cloud-type environments to reduce the capital expenditures for new systems and move to an operating-cost model.

I don’t have a fundamental problem with cloud environments, though I think it is important to carefully consider the pros and cons, and I can certainly appreciate Scott’s concern. No matter how well we architect things or prepare for the movement of a physical environment to a virtual one, there could be problems. Having a fallback plan becomes important, and even more so if we discover problems only after some time has passed.

While there are utilities that can move a physical machine to a virtual environment, there aren’t any (at least any I know of) that reverse the process. Honestly, though, I think virtualization has so many advantages that if I really had performance issues and needed to return to a physical host, I’d continue to virtualize my instance, but I’d run only one VM on that physical host, with access to almost all the resources on the hardware. Today’s hypervisors have so little overhead that I wouldn’t hesitate to run a single virtual machine on a host.

Ultimately, moving to a virtual environment is very much like moving to new hardware. There are definitely different configuration options you may need to set, but you can contract for some help with configuring your system. In the worst case, just use a single VM on a host, get hardware abstraction, and manage the machine like any other. Just don’t forget to have the hypervisor and your guest start up automatically after a reboot.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.0 MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Work of the Ancients

I was reading a post from someone recently where they noted that they didn’t worry too much about the architecture of the system, since it wouldn’t likely last very long. The poster commented that many systems are replaced within a few years.

In my experience, that’s not usually the case. In fact, while I don’t expect many applications I’ve worked on to last for 25 years, I suspect many of them will exist for ten years or more, especially if they are receiving regular development resources. With that in mind, I wanted to see how your databases are faring these days. I suspect a database might outlast a particular application, as most organizations seem loath to ever let data go.

What’s the age of your oldest, regularly used database?

I’m looking for an age in years. If the answer is less than one, I’m not sure I’d call that old at all. I am sure many of your systems are older, and might have changed, but let us know the year when the system went into production.

I can tell you the SQLServerCentral systems are old in some ways, not so old in others. We’ve grown from one database to three over the years. The oldest database is circa 2003. Some of its data was migrated to other databases around 2007. We’ve had data in the system since 2001, but we’ve certainly changed structures and storage over the years.

I’d guess that most of you working in companies older than ten years will have a database that’s at least that old. However, let us know this week, and if you have any interesting notes, feel free to share them.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4 MB) podcast or subscribe to the feed at iTunes and LibSyn.

Adjusting Model

In July the T-SQL Tuesday theme was defaults. There were lots of posts about various items, but I found one by Kevin Kline interesting. Kevin noted that the settings for model are not necessarily what many of us would want in production.

Is that really the case? This week I wanted to take a poll and see how many of you actually change model so that your new databases get defaults different from what Microsoft ships.

Do you change the model database?

Certainly some people have creation scripts with various settings for their databases. Some prefer to customize the setup each time, based on the purpose of the database. Still others have scripts that alter the database to change settings after someone has installed the instance.
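If you do adjust model, the script can be short. Here’s a minimal sketch; the settings and sizes are illustrative choices of mine, not recommendations:

    -- New databases inherit these settings from model.
    ALTER DATABASE model SET RECOVERY SIMPLE;
    ALTER DATABASE model SET AUTO_CLOSE OFF;

    -- Fixed-size growth instead of the old percentage default.
    ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
    ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);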

There are any number of ways to ensure you have good settings for your production (or other) databases. Let us know this week if you have a step to change the model database when a new instance is installed.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.7 MB) podcast or subscribe to the feed at iTunes and LibSyn.

Gauges and Alarms

I was looking through the news this week and found a piece by Michael J. Swart that I really enjoyed. It talks about different types of metrics that he has used when monitoring and alerting on his database servers. Michael talks about the difference between alarms and gauges, which are usually based on the time scale in which a DBA or administrator needs to react to a situation. Alarms are created when something grows in the short term and causes an issue. Gauges tend to track longer-term items that eventually cause a problem.

I’ve had a similar view in the past. However, I look at these types of metrics from the standpoint of how I tackle my job as a DBA. I also try to think about these issues as a developer, ensuring that telemetry helps track issues that occur over time and that acute issues cause a graceful degradation (or exit), not an unexpected one.

When I’ve needed to manage systems, I ensure that I have alarms set up. I need these for the situations where something goes wrong in a short period of time. However, I never want alarms to stem from a lack of visibility into the system over time. I want to try to eliminate as many alarms as possible with better long-term monitoring that looks for indicators that the system might have issues. I’ve found that many (not all) of the alarms I receive could have been prevented if I’d been paying close attention.

In other words, I’m a proactive DBA, looking at gauges (to use Michael’s terms), not a reactive one counting on alarms.

I can’t predict everything, but I find that looking back at data is helpful. If I find a trend that could have alerted me, I’ll evolve my monitoring system to track those potential issues in the future. I’ve written my own custom metrics in the past, which track data, either PerfMon-type counters or business data, and then let me know when the growth, decline, or change is unexpected. In this way, I often find I can proactively prevent issues. If I can use my data to “predict” an issue my boss doesn’t see coming, I’ve also found my budget for resources may increase a bit, allowing me to manage systems even better in the future.
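As a concrete illustration, here’s a minimal sketch of such a custom metric. The dbo.MetricHistory table and the 50% threshold are hypothetical choices of mine; sys.dm_os_performance_counters is the DMV behind many PerfMon counters:

    -- Capture today's reading of a counter into a history table (hypothetical schema).
    INSERT INTO dbo.MetricHistory (capture_date, counter_name, cntr_value)
    SELECT CAST(GETDATE() AS date), RTRIM(counter_name), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy'
      AND object_name LIKE N'%Buffer Manager%';

    -- Flag today's reading if it strays more than 50% from the trailing 30-day average.
    SELECT h.counter_name, h.cntr_value AS today_value, t.avg_value
    FROM dbo.MetricHistory AS h
    CROSS APPLY (
        SELECT AVG(m.cntr_value) AS avg_value
        FROM dbo.MetricHistory AS m
        WHERE m.counter_name = h.counter_name
          AND m.capture_date >= DATEADD(DAY, -30, h.capture_date)
          AND m.capture_date <  h.capture_date
    ) AS t
    WHERE h.capture_date = CAST(GETDATE() AS date)
      AND ABS(h.cntr_value - t.avg_value) > 0.5 * t.avg_value;

Run the first statement from a daily Agent job and the second becomes the gauge you check, or alert on.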

Steve Jones

My SQL Server Service Account Philosophy

Recently someone sent me a question about service accounts. They weren’t sure how to go about setting up accounts for the various instances and services in their environment. Specifically, they asked me about using domain accounts, and about having separate accounts for each service.

Note that I’ve managed SQL Server this way for years in environments of up to hundreds of instances. I haven’t managed thousands, so there might be issues with this philosophy at scale.

Here’s how I view service accounts. In a short list, I try to manage things like this:

  • Domain accounts for the SQL database engine and SQL Agent
  • Separate accounts for all instances and all Agent services
  • Long, complex, one-time passwords that aren’t stored

This has worked well for me, providing separation of services so that password changes or security issues on one instance don’t affect other instances.

It’s also been scalable in that I rarely set up SQL Server instances. In most organizations I’ve worked in, we add a few instances a week at most. The overhead to create two new accounts per instance (database engine and Agent) is minimal.

Note that I would also have a separate domain account for SSAS or other items I install.
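If you want to verify what an existing instance is actually using, SQL Server 2008 R2 SP1 and later expose this through a DMV (it covers the engine, Agent, and full-text, though not SSAS):

    -- Show the account each SQL Server service runs under.
    SELECT servicename, service_account, status_desc, startup_type_desc
    FROM sys.dm_server_services;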

With today’s rapid provisioning of machines through virtualized environments, I realize this isn’t necessarily a good hard-and-fast rule. If I expect an instance to be a production-level instance and live for some period of time in the organization, I’d follow this philosophy.

However, if I am bringing development and test instances online that may not be kept around permanently, I think local service accounts are fine. These will probably handle your needs and are worth scripting into your VM/instance creation process.

I’ll add a few more thoughts on this across other posts, but there’s my idea in a nutshell.

Does It Count?

This editorial was originally published on Jan 28, 2011. It is being re-run as Steve is out of town.

I wrote a blog recently about downtime and SLAs, and the need to have not only downtime SLAs, but also a data loss SLA. Thanks to Paul Randal (blog | @PaulRandal) for clueing me in to the need for having both. In the comments someone mentioned that scheduled downtime ought not be counted against your uptime measurements. That’s a debate I’ve had in the past with employers, and I thought it would be a good Friday poll:

Should scheduled maintenance be counted against your downtime SLA?

There’s an argument that says since the outage is scheduled, and people are informed, it shouldn’t really count against your work in keeping servers running. The flip side is that when the database instance is down, it’s not usable, and that limits its value to the business. The arithmetic matters here, too: a 99.9% uptime SLA allows roughly 8.8 hours of downtime per year, so a single two-hour maintenance window each month adds up to 24 hours a year, nearly three times that allowance, if it counts.

I know of companies that calculate the utilization of their vehicles, and maintenance counts against usage time. While maintenance is necessary, the idea is that mechanics should be looking to minimize it, and perhaps ensure that extra checks are done while the vehicle is being worked on to help prevent future issues. A similar argument could be made for database servers.

Do you feel that scheduled maintenance is downtime? Is it calculated that way at your employer? Let us know this Friday.

Steve Jones

Using the DAC with SSMS

I’m writing this post as a way to help motivate the #SQLNewBloggers out there. Read the bottom for a few notes on structuring a post.

While troubleshooting another issue, I needed to connect to SQL Server with the DAC. I couldn’t remember the syntax, so I looked it up quickly and ran into this link: Diagnostic Connection for Database Administrators. I added the ADMIN: prefix to my connection in SSMS for a query window and clicked Connect.


This took entirely too long and then I got this:


That error sent me down a rabbit trail of debugging, but it worked out and I learned a few things. The main one is to be sure that I read the documentation and the errors correctly.

As I tried a few things, including SQLCMD, I realized I had issues. I searched and found that the DAC listens on a specific port. I checked the error log to get the number.
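If you’d rather search the log than scroll it, xp_readerrorlog can do that. It’s undocumented, so treat the parameters (log number, log type, search string) as a convenience rather than a contract:

    -- Search the current error log (0) of the engine (1) for the DAC listener message.
    EXEC master.dbo.xp_readerrorlog 0, 1, N'Dedicated admin connection';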


With that, I added it to my connection dialog, sure that this would work, but of course, it didn’t.


Eventually I stumbled on a post about these errors and realized I should have checked SQL Browser earlier. I had thought about it, but discarded the idea because I could connect in other ways. Mistake. Check networking first, and the Browser is part of networking. Sure enough, it was stopped.


The service’s startup type was set to DISABLED, so I had to change that before I could start it. If you haven’t done that before, poke around the service properties until you find the startup type setting. With that changed, I started the service.


Then I could connect. Here’s the SQLCMD version.
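For reference, the command looks something like this. The instance name here is mine, so substitute your own; -A requests the DAC, and the ADMIN: prefix is the equivalent form:

    sqlcmd -S JOLLYGREENGIANT\SQL2012 -A -E
    sqlcmd -S ADMIN:JOLLYGREENGIANT\SQL2012 -E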


With this working, the first command dialog at the top of this piece worked from an SSMS query window.


This one took longer. What I thought was a quick lookup turned into a troubleshooting exercise lasting about 20 minutes as I searched, read, and experimented. At least I made a bunch of screenshots as I went, so I had lots of material and didn’t duplicate anything.

The actual writing was only about 10 minutes, most of that looking through screenshots and trying to organize my thoughts.


T-SQL Tuesday #67 – Extended Events for DBCC

It’s the second Tuesday of the month and time for another T-SQL Tuesday. This time it’s #67, from Jes Borland, on Extended Events. You can read the invite, and whether you can participate today or not, write your blog post and leave a comment for Jes on her blog.

If you want to see all the topics from the past, I have them on a post here.

If you want to host, contact the founder of T-SQL Tuesday, Adam Machanic. He chooses the hosts and is always looking for people that want to participate. You just need a blog and a good idea.

Who Ran DBCC?

I ran across a question recently from someone who said DBCC messages had appeared in the error log, but no one had run DBCC. That’s not possible: DBCC doesn’t run without someone, whether a human or a service, executing the DBCC command.

In the past I might have recommended a trace to track this, but now I’d choose an Extended Events (XE) session. I’ll show how to set up a simple session, though be aware I’m not an XE guru. I’m sure there are better ways to structure this, but it worked well for me.

You can quickly start building a specific XE session by right-clicking the Sessions folder under Management > Extended Events in SSMS. I tend to pick the wizard for getting started, though be sure to save the script later.

[Screenshot: launching the New Session Wizard from SSMS]

From here we get the opening screen for the wizard, which I won’t show. The next screen asks for a session name. The name doesn’t matter much, but once you start using XE you’ll accumulate quite a few sessions, so it’s handy to pick something simple and easy to understand later.

[Screenshot: New Session Wizard - Set Session Properties]

I can choose to start the session here when I’m done, but don’t worry if you’re not sure. You’ll get a chance again later.

The next step is to decide whether to use a template or build a session from scratch. I’m not sure whether any of the templates would work well here, as I haven’t dug into them. In my case, I decided to build a generic session from scratch.

[Screenshot: New Session Wizard - Choose Template]

Now I need to pick the events. There are a lot of them, but I scrolled down and chose sql_statement_completed as my event. This will let me see the text of a call, which is what I need.

[Screenshot: New Session Wizard - Select Events To Capture]

A couple of notes here. First, you might want to enlarge this screen so that you can read what’s in the lower panes. There’s a lot of data here, and the default layout isn’t well sized for it.

Second, don’t forget to add the event with the arrow button. Until you have an event (or several) in the right-hand pane, you can’t click "Next".

The next step is to pick the data fields you need. Again, there are a lot, and I limited this to what I needed. I grabbed the client host and app, as well as the database. Then I grabbed server_principal_name and sql_text as well. This should let me determine who’s running the command.

[Screenshot: New Session Wizard - Capture Global Fields]

I’ve got lots of data here, but I only care about executions of dbcc checkdb. That means I need to filter, which is what the next screen offers.


Here I need to click in the top line to get a place to add a filter. Then I select sql_text from the drop-down, choose like as the operator, and enter "dbcc checkdb" as the value. This should grab events whose statement text includes a dbcc checkdb call.

[Screenshot: New Session Wizard - Specify Session Data Storage]

I have to choose where to store this data. Since this is mostly a check for me, I can just use the ring buffer and keep the data in memory. I lowered this to 10 events, but you can certainly pick what works for you. If you care about auditing, write to a file instead. For a small session like this, there’s not much data to capture.

I get a summary screen, which lets me review settings. I haven’t shown it here, but once I pass that, I get the "Success" screen. This is where I can start the session, and actually watch live data if I want.

[Screenshot: New Session Wizard - Create Event Session]
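If you save the script from the wizard, it should come out looking roughly like this. This is my reconstruction rather than the wizard’s exact output, so treat the session name and options as illustrative:

    CREATE EVENT SESSION [DBCC execution] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed (
        ACTION (
            sqlserver.client_app_name,
            sqlserver.client_hostname,
            sqlserver.database_name,
            sqlserver.server_principal_name,
            sqlserver.sql_text
        )
        -- Only keep statements whose text includes dbcc checkdb.
        WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%dbcc checkdb%'))
    )
    -- Keep just the last few events in memory.
    ADD TARGET package0.ring_buffer (SET max_events_limit = 10)
    WITH (STARTUP_STATE = OFF);
    GO
    ALTER EVENT SESSION [DBCC execution] ON SERVER STATE = START;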

For this, I’ll watch the live data.

[Screenshot: the session’s Live Data viewer in SSMS]

I decided to test this first and see if it captures anything else. I ran a few queries, and then this:

[Screenshot: test queries against AdventureWorks2012]

Still a blank live data screen. Then I ran checkdb. I saw the results, and then ran it a few more times.

[Screenshot: Live Data showing captured dbcc checkdb events]

As you can see, the session captured my checkdb runs against the EncryptionPrimer database.

This lets me do some quick auditing of what’s happening with my DBCCs. A smart DBA might even use this to ensure that DBCC is being run against some databases, perhaps on a spare machine.
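If you’d rather query the captured events than watch the Live Data viewer, you can shred the ring buffer’s XML. A sketch; the session name must match whatever you called yours:

    -- Pull captured events out of the ring buffer target and shred the fields.
    SELECT
        ev.value('(@timestamp)[1]', 'datetime2') AS event_time,
        ev.value('(action[@name="server_principal_name"]/value)[1]', 'nvarchar(128)') AS login_name,
        ev.value('(action[@name="client_hostname"]/value)[1]', 'nvarchar(128)') AS host_name,
        ev.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
    FROM (
        SELECT CAST(t.target_data AS xml) AS target_data
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = N'DBCC execution'
          AND t.target_name = N'ring_buffer'
    ) AS rb
    CROSS APPLY rb.target_data.nodes('/RingBufferTarget/event') AS x(ev);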

DB_Owner Querying for Database Options

Another simple post from me that hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

What can a user in the db_owner database role do? I assumed anything inside the database (create or alter objects, assign permissions, back up the database, and so on). However, someone recently asked whether such a user could read database properties. I assumed they could, but I needed to check.

I decided to start by creating a new login. I have lots on my test instance, but I went with building a new one. I used the GUI to add [DBOwnerTest] because it’s quicker. I assigned a password, set a default database, and mapped this user to db_owner in a database.
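The equivalent T-SQL is only a few lines. This is a sketch; the password is a throwaway illustration, and Sandbox is the database I use later in this post:

    -- Create the login at the server level (password is illustrative only).
    USE master;
    CREATE LOGIN DBOwnerTest
        WITH PASSWORD = N'Thr0waway!Pa55word',
             DEFAULT_DATABASE = Sandbox;
    GO
    -- Map the login into the database and add it to db_owner.
    USE Sandbox;
    CREATE USER DBOwnerTest FOR LOGIN DBOwnerTest;
    ALTER ROLE db_owner ADD MEMBER DBOwnerTest;
    GO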


I then opened a query window and changed the connection:


Now I could easily run a query for properties and see the results:


Note the connection at the bottom of the image above.
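For reference, a query along these lines shows the kind of properties involved; the columns are my choice, not necessarily the exact query from the screenshot:

    -- Metadata about the database, visible to db_owner, from sys.databases.
    SELECT name, state_desc, recovery_model_desc, page_verify_option_desc
    FROM sys.databases
    WHERE name = N'Sandbox';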

Here’s where it gets a little interesting. I disconnected Object Explorer and connected back as DBOwnerTest. I see this:


Looks normal. According to the BOL documentation for permissions, db_owner gets View Any Database as well as control over their own database. I can see the properties of the Sandbox database (where I’m db_owner).


But I can’t see properties of other databases.


I also see my own login, but not the couple dozen others I have, other than sa. That’s curious, and perhaps not good. However, this isn’t the place to delve into that.


I have the ability to query through databasepropertyex(), which I’ve documented in another post.
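A quick illustration of that function; these property names are standard ones, though the other post may use different examples:

    -- DATABASEPROPERTYEX works for a db_owner against their own database.
    SELECT DATABASEPROPERTYEX(N'Sandbox', 'Recovery')  AS RecoveryModel,
           DATABASEPROPERTYEX(N'Sandbox', 'Status')    AS DatabaseStatus,
           DATABASEPROPERTYEX(N'Sandbox', 'Collation') AS DatabaseCollation;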

Certainly db_owner conveys lots of rights to the user, including the ability to see some of what lies outside the database container, such as the options and properties of their own database and the existence of other databases.


This was based on a question I saw posted at SQLServerCentral. It took me about 5 minutes to set up a test login and query for information. I had to perform a few searches and try some queries. I spent a few minutes researching databasepropertyex(), which became another post.

All told, this was about a 15 minute post.

You can do this. Join the #SQLNewBlogger group and start documenting your career. You can see all my posts that fall into this area by looking through the SQLNewBlogger tag here.


Naming Confusion

When I first worked in a corporation that had many servers, I had to learn the encoding of our systems. We had SUIS01, SUIS02, NAIS01, NAOP01, etc. Each name designated a location and department, but gave no information about which applications were running. That was a separate thing to learn, which thankfully didn’t change often, as procuring and setting up a new system was a relatively rare event.

However, I also ran into a few systems named “Dorothy” and “Wiz”. The other administrators griped constantly about the unprofessionalism of those names. I’m not sure SUIS01 is better or worse than Dorothy, but that’s a separate debate. In both cases, we had to memorize which applications were actually running on which systems. Since all of them were managed by IS, regardless of the department that owned them, we were responsible for understanding the purpose of each server.

I ran into a situation recently where a company had a production server called SalesSQL. The development server was SalesSQL_Dev, which is fine until someone opens SSMS, sees “SalesSQL” in the connection dialog, assumes they’ve connected to the right server, and starts executing code. Granted, even a rookie should notice the “_Dev” (or a similar “Dev_SalesSQL” name), but mistakes happen.

I sometimes wonder whether naming systems too similarly across environments is a good idea. I especially get concerned when shortcuts are taken with database names like “Sales” and “Salesqa”. It’s easy to confuse those when you’re stressed or in a hurry and glancing at a small drop-down box.

Ultimately I’m not sure there’s a naming method that prevents confusion and ensures people are working in the correct environment. I have tended to use suffixes in my career to denote development and test environments, since I can more easily differentiate a dev environment, but I wonder if I shouldn’t be using prefixes instead. Having to retype the beginning of a name (or delete the “dev”) in the connection dialog forces a little extra work to reach a production system, and that little extra effort might be just the thing to reduce the chance of performing development work on a live system.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5 MB) podcast or subscribe to the feed at iTunes and LibSyn.