Virtual Failback

I saw a very interesting tweet from Scott Stauffer asking this: “… looking for quick’n’easy way to make workload virtual w/ fail-fast method to physical if performance doesn’t meet expectation.” This was part of a conversation Scott was having as he looked to move from his current physical environment to a virtual one. That’s something I think more and more people are being asked to do, whether they do it inside their own data center or move to some type of hosted or cloud environment. There is increasing pressure from management to consider using cloud-type environments to reduce the capital expenditures for new systems and move to an operating cost model.

I don’t have a fundamental problem with cloud environments, though I think it is important to carefully consider the pros and cons, and I can certainly appreciate Scott’s concern. No matter how well we architect things or prepare for the movement of a physical environment to a virtual one, there could be problems. Having a fallback plan becomes important, and even more so if we discover problems only after some time has passed.

While there are utilities that can move a physical machine to a virtual environment, there aren’t any (at least none I know of) that reverse the process. Honestly, though, I think virtualization has so many advantages that if I really had performance issues and needed to return to a physical host, I’d continue to virtualize my instance, but I’d run only one VM on that physical host, with access to almost all the resources on the hardware. Today’s hypervisors have so little overhead, I wouldn’t hesitate to run one virtual machine on a host.

Ultimately, moving to a virtual environment is very much like moving to new hardware. There are definitely different configuration options you may need to set, but you can contract for some help with configuring your system. In the worst case, just use a single VM on a host, get hardware abstraction, and manage the machine like any other. Just don’t forget to have the hypervisor and your guest start up automatically after a reboot.
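On Hyper-V, for instance, the guest side of that last step is a single line of PowerShell per VM. This is a minimal sketch, assuming the Hyper-V module is available; the VM name is a hypothetical placeholder:

# Start this guest automatically when the host boots, after a 60 second delay
Set-VM -Name "MySQLGuest" -AutomaticStartAction Start -AutomaticStartDelay 60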

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

VMs are not VMs

I was at VMware recently. One of the main things that all of the SQL Server professionals there tried to emphasize is that SQL Server workloads are not like other workloads. The impact on the various host resources and the stress on the storage systems are fundamentally different for a database server. The loads tend to be higher, though not always, and the tolerance for delays tends to be lower than for many other types of applications.

This becomes an issue if you work in an organization that doesn’t understand the challenges of database systems. It’s entirely possible that your virtualization administrators or your storage administrators don’t recognize that the SQL Server might need more resources. Or they don’t believe the impact is greater for the organization. To be fair, that might be true, but someone other than the DBA or system administrator should decide if the database is more important than the file server and should be treated differently from an infrastructure perspective.

No matter what level of resources your database server needs, it’s not going to run like other systems. Typically this means that the density of VMs has to change when a database server is involved. As an example, I know of a shop that typically has a 10:1 guest:host ratio for most of their server systems. However for SQL Servers it’s 4:1 or lower. The same is true for storage. Aggregate bandwidth doesn’t always reflect the ability of a storage system to keep up with database requests. It becomes important that both you and your storage administrators learn to speak the same language and understand what requirements exist for SQL Server VMs.

Virtualization really starts to highlight the advantages of a DevOps environment. DBAs and developers should work closely with the virtualization and storage administrators to learn what each other’s requirements are and how each can help the other perform their particular job at a higher level. Infrastructure staff can help prepare standard environments and ensure production looks like staging. Developers and DBAs can help a vSphere admin learn a little PowerCLI and programming. That might get them to be more cognizant of the particular requirements of your SQL Server and more willing to work with you.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.


How Virtualized?

I went to a talk recently where I saw this statistic: “50% of all workloads were virtualized in 2009. That number is 72% today.”

That’s a really big number, at least in my mind. That implies the vast majority of all servers (file, print, database, email, etc.) are virtualized. Inside of companies that have their own data centers and machines, they must be heavily virtualized. I’m sure that all those instances in the “cloud” also count, but still, 72%? That’s big.

However I’m sure that’s skewed towards those machines that don’t require a lot of resources, like file and print servers, DNS hosts, etc. This week, I thought I’d see what the percentage is inside of your organization.

What percentage of your SQL Servers are virtualized?

Give us numbers of physical vs. virtual if you can. I’d combine all instances, from development to test to production, not worrying about size or workload. If you have a single guest on a host, using almost all the resources, that’s still a virtual server.

My suspicion is that the percentage of SQL Servers is much lower than that of other workloads, but I’m curious. With the low overhead of modern hypervisors, and their free (or low) cost, it makes sense to virtualize servers, if for no other reason than to remove any weird hardware dependencies for DR purposes. However I’m sure there are large workloads that require more resources than current hypervisors can expose, at least for some database instances, and those need to remain on physical machines. My guess, though, is that more often than not it’s human concerns or a lack of confidence that prevents virtualization.

Let us know this week how your organization is doing in the trend towards virtual servers.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

VMware and DBAs

Disclosure: I’m a VMware fan and run it on my laptop. I definitely prefer it to other hypervisors. This workshop and all travel expenses were also paid for entirely by VMware.

I was fortunate and grateful to be invited out to VMware recently for a multi-day workshop on SQL Server on VMware. Michael Corey wrote about the workshop here. It was an interesting way for VMware to reach out to the SQL community, both to talk to us about what they’re doing and to get feedback from us on how SQL Server performs and interacts with the VMware platform. They will do more in the future, so if you’re active in the community or a heavy user, read Michael’s post and see if you can get yourself nominated.

The week was full of speakers from both VMware and Tintri, a storage vendor doing interesting things. We had quite a few executives talking about their plans to make SQL Server run better, and we (as a group) provided lots of feedback. Certainly I think we helped to ensure executives understand that a database server is fundamentally different from a file server, mail server, or other workload.

Of course, it wasn’t all work.


I can’t talk about lots of what we discussed, as much is under NDA, but I will say that you should keep an eye on how VMware is going to improve their interaction with SQL Server. Hopefully we’ll see more products, information, and help at some of the SQL Saturdays and other SQL Server specific events.

We also got a night out at AT&T Park in San Francisco, where the various attendees and speakers could interact in a casual atmosphere. It was an exciting game, with two of the best pitchers in baseball throwing that night (Kershaw and Bumgarner).


I’d never been there, so it was a treat for me. I walked around and enjoyed the baseball game, the first one I’ve seen live in a couple years.


Overall it was a very interesting time, with lots of points raised by everyone that I hadn’t considered or thought of. The more I hear about what the ESX platform does and handles, and about how people have set it up, the more impressed I am. There are truly some powerful instances being run in virtualized environments, and I’m not sure there are many workloads that couldn’t be run successfully on VMware.

I’m sure that there are hardware and budget restrictions for many people, but the issues aren’t VMware or virtualization; they’re the setup. In the labs we worked on, quite a few of the systems pressed a SQL Server instance hard, and the hypervisor and storage system kept up nicely.


Not to disparage Hyper-V, but I haven’t had the experience there. I suspect both hypervisors could be tuned to a high level. Both also have some holes and places where the system might not run as smoothly as you’d like, but at least I think VMware is more aware of what we, as SQL Server professionals, see as problems.


I have to admit that I have mostly thought of storage as a utility. It needs to work and respond fast, but beyond that, I don’t care about it. It’s like a water faucet. I turn it on and it works.

Tintri was the guest storage vendor that provided appliances for us to use. I didn’t think much of them on Tuesday morning, but by the time their founder had talked with us, I was very intrigued. I still don’t really care about storage other than needing it to work, but I was impressed by how this particular product works.

Most storage is presented as a LUN to a host, which may or may not share that among the guests running on the box. Tintri changes that with what they call VM-aware storage, with VM-level QoS. Essentially, the storage box is aware of each guest connected as a VM. It manages the storage response and bandwidth on a per-VM basis, with separate FIFO queues for each drive in the VM. Each VM is treated separately, and the storage capabilities (min and max IOPS) can be managed separately.

It seems as though VMs get better response from the appliance this way, with less dependency on heavy management by storage admins. They also use a lot of flash memory (SSDs) with disks to ensure fast responses. They do 100% write to flash for speed and tune the systems to aim for 99% reads from flash. Because the device is aware of VMs, it can move blocks around from flash to disk to ensure that the heavily used data is quickly available.

If you have heavy needs, check them out. If you’re looking for a SAN, I’d look at them as well. Not sure what the pricing is, but I bet it’s competitive with other SAN devices.

We also got a lecture about low-level storage technologies from a VMware exec. The talk looked at the advances taking place in storage, which seem to be finally leaping forward. With the price of SSDs crashing, and the research into 3D flash, it seems that more and more of us might move to flash-based SAN storage quicker than we expect. Fascinating stuff, and I suspect that database systems will start to see better performance from hardware upgrades over the next 6-7 years.

A Break

It was nice to get away and learn something without any pressure to work on much else. This was a short workshop, close to home, and in an area that had me concentrating and paying attention in ways that sometimes don’t happen at SQL Server events. I enjoyed it, and I learned quite a few things.


I’m not sure how much I’ll use this stuff moving forward, but I am thinking that I’d like to play around with an ESX server at home and use that to experiment with SQL Server features and especially setup. I’m looking forward to trying to get a system ready that can build a new SQL Server instance in minutes.

Virtual Lab – New Domain User

This is part of my series on building a virtual lab for use with SQL Server and Windows. You can see the entire series here: Building a Virtual Lab with Hyper-V.

After the domain was up, I needed to add users. Specifically, I didn’t want to use administrator for all actions, since that bothers me. It just seems like a poor practice. I also needed service accounts. The accounts I needed:

  • sjones – my main account
  • Broncos SQL – service account for the Broncos SQL Server
  • Nuggets SQL – service account for the Nuggets SQL Server
  • Rockies SQL – service account for the Rockies SQL Server
  • Joe – my test SQL account, without sa rights.

I’ll probably need more, but these are good for now.

Domain Users

I used a variation of the script in this post at the command line. I didn’t need all the fields, so this is what I used:

New-ADUser -SamAccountName "BroncosSQL" -Name "Broncos SQL" -Enabled $true -ChangePasswordAtLogon $false -PasswordNeverExpires $true -AccountPassword (ConvertTo-SecureString "MyPassword" -AsPlainText -Force)

Note: That wasn’t the password I used. I used a complex 12-character password with upper/lower case letters, numbers, etc.

I repeated this for all the users.
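Since the three service accounts follow the same pattern, a small loop avoids retyping the command. This is a sketch using the same options as above; the name list and password are placeholders:

# Hypothetical name list matching the accounts above; use your own strong passwords
$accounts = @{ "BroncosSQL" = "Broncos SQL"; "NuggetsSQL" = "Nuggets SQL"; "RockiesSQL" = "Rockies SQL" }
foreach ($sam in $accounts.Keys) {
    New-ADUser -SamAccountName $sam -Name $accounts[$sam] -Enabled $true `
        -ChangePasswordAtLogon $false -PasswordNeverExpires $true `
        -AccountPassword (ConvertTo-SecureString "MyPassword" -AsPlainText -Force)
}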

Domain Groups

For the most part, I don’t need, or want, to assign extra rights for these accounts. The SQL Server setup will assign local rights, and I’ll modify if needed. However I do need to grant domain admin rights to my main account to log on and run the domain at times.

I went back to basics, with TechNet documentation. I need the Add-ADGroupMember cmdlet to add someone. However, I also need to know which groups exist. I searched, and Spiceworks shows up again. I ran this:

Get-ADGroup -filter * -properties GroupCategory | ft name,groupcategory

and got this list:


I want to add sjones to the Domain Admins group. Using the Add-ADGroupMember cmdlet, I ran this:

Add-ADGroupMember "Domain Admins" sjones

And it worked. I could easily log on and administer other machines with this account.
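If you want to double-check the membership from PowerShell before logging on anywhere, a quick query works (a sketch, assuming the ActiveDirectory module is loaded):

# List the members of Domain Admins to verify the new addition
Get-ADGroupMember "Domain Admins" | ft name, samaccountname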

Virtual Lab – The Domain

This is part of my series on building a virtual lab for use with SQL Server and Windows. You can see the entire series here: Building a Virtual Lab with Hyper-V.

The big thing about setting up a domain is that it enables you to connect multiple machines together and experiment with things like PowerShell Remoting, AlwaysOn, etc.

There are a couple things you need to do here. The first is to install a domain controller on one of the VMs, and then you need to join the remaining computers to the domain. This isn’t that hard, and I’ll show you two ways to do this: the GUI and PoSh.

Create a Domain Controller

I followed instructions to build a domain from TheSQLPro, since that was the first, and simplest, set of instructions I found. I connected to my Server Core installation named DenverDC and ran this:

Install-WindowsFeature -Name AD-Domain-Services


Install-ADDSForest -DomainName "SSCLAB.LOCAL" -DomainMode Win2012 -DomainNetbiosName "SSCLAB" -ForestMode Win2012

I entered both of these from PowerShell and restarted the VM. I then ran a quick check, and as you can see, I have a domain set up on this machine.
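If you’d rather verify from PowerShell than from a screenshot, something like this shows the basics of the new domain (a minimal check, assuming the ActiveDirectory module is loaded):

# Show the domain name, functional level, and which DC holds the PDC emulator role
Get-ADDomain | ft Name, DomainMode, PDCEmulator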


Joining the Domain from the GUI

The first step is to be sure that you have connectivity between your machine and the DC. I had to ensure I could ping back and forth, both by IP and computer name. I also made sure to set my DNS to the domain controller. In my case, this was the DenverDC at

Once I was fairly sure I had networking down, I went to the control panel on one of the machines. I went to the computer properties and clicked the "Change Settings" link.


From there, I had the basic properties. As you can see below, I was in a workgroup. The first thing to do is click the "change" button.


Once that’s done, you have the workgroup/domain set of radio buttons. I clicked the domain item and entered the name of my domain.


You then get a box where you need to enter credentials. I believe these are the DC-level credentials. For this lab, I have the domain and local administrators all using the same user/password (Administrator/mypassword), and I entered that.


If networking is working, it should take a minute and then you’ll get this:


As soon as you click OK, you’ll get told this requires a reboot. It does, so restart.


Once you restart, if you go back, you should see that you are in the domain in the computer properties.


Joining from PowerShell

I found this cmdlet that worked for me.

Add-Computer -DomainName "SSCLab.Local"

Once I typed this in, I got a dialog box asking me to enter the administrator credentials. I did that and it worked. I had to reboot with a Restart-Computer.
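You can also collapse the credential prompt and the reboot into one step, using parameters Add-Computer already supports:

# Prompt for domain credentials, join the domain, and restart in one line
Add-Computer -DomainName "SSCLab.Local" -Credential (Get-Credential) -Restart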


And we’re working



I did get an error on one of my VMs. It was error 0x21C4 on a Server Core installation. When I looked that up, it turned out to be a duplicate SID error. I had sysprep’d the machines, but perhaps I broke something. In any case, I re-ran sysprep, reset the network config, renamed the computer, and then joined the domain as noted above.

Virtual Lab – Adapter Setup

This is part of a series where I set up a virtual lab for testing and misc. work. The other parts in the series are here: Building a Virtual Lab with Hyper-V.

Once I had the machine up and running, I knew I needed to get the networking set up. One of the things I’ll do is run some clustering tests, and for that, I need static IP addresses. I’m an older, IPv4 guy, so that’s what I’ll use here.

I decided to put all my machines in the 192.168.1.x space. I’ll use these addresses:

  • DenverDC –
  • Broncos –
  • Nuggets –
  • Rockies –
  • Avalanche –

I’ll deal with the client machine when I get there. For now this is what I need to worry about.

The machines are set up and passwords changed. I now need to start them and get networking configured. I googled and found this TechNet article on using PowerShell to configure a NIC. There’s also the Configure a Core Server guide. I know you can use sconfig to do this easily, but I wanted to see how hard it is in PoSh. In the Standard edition, it’s easy to use the GUI as well.

First I needed to know what adapters I have. I ran:

Get-NetAdapter

This told me my main adapter was “Ethernet 2”. So I ran this:

$netadapter = Get-NetAdapter -Name "Ethernet 2"

The first step is to remove DHCP. You’d do this by changing a radio button on the adapter settings. In this case, we do it with PowerShell.

$netadapter | Set-NetIPInterface -DHCP Disabled

Next we want to set up our IP address. In my case, I’m going to use the 10.10.10 address space.

$netadapter | New-NetIPAddress -AddressFamily IPv4 -IPAddress -PrefixLength 24 -Type Unicast -DefaultGateway

Once that is done, we can then look at DNS. In this case, I’m going to point it to my gateway, which doesn’t really resolve to anything (yet).

Set-DnsClientServerAddress -InterfaceAlias "Ethernet 2" -ServerAddresses

I repeat this for all my servers, getting them all set up with their proper IP addresses. Once I’m done, I have 5 servers running with the IPs above.
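Putting those steps together, this is the consolidated sketch I’d run on each server. The alias and addresses below are hypothetical placeholders; substitute each machine’s values:

# Per-server network configuration; the values here are examples only
$alias = "Ethernet 2"
Set-NetIPInterface -InterfaceAlias $alias -Dhcp Disabled
New-NetIPAddress -InterfaceAlias $alias -AddressFamily IPv4 -IPAddress "192.168.1.11" -PrefixLength 24 -DefaultGateway "192.168.1.1"
Set-DnsClientServerAddress -InterfaceAlias $alias -ServerAddresses "192.168.1.10"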

However none of them can ping each other. That seems strange, but it’s not unexpected. The mindset of increasing security by default is likely to blame. I don’t know what exploits can come through ping (DoS, I guess), but I know more and more companies avoid allowing ping responses.

Turn off the firewall

I decided that I needed to turn off the firewall to check. Since I have 2 Standard installations and 4 Core installations, I went to the Standard ones first and used the GUI to kill the firewall for my networks. It was at this point that I realized that by default my connections saw the network as public, not private.

I turned off the public connection firewall, and pings worked from one of the Core servers. Then I turned that back on and disabled the private firewall. Pings failed.

Now I knew what to do. First, on the Standard server, I used the Local Security Policy app in Windows to change the network location. Once that was done, I set the network to private, disabled that firewall, and verified pings worked. Knowing this worked, I was ready to change the other servers.

I found a script on MSDN Blogs that showed me how to do this in PoSh. It’s a strange script, and it doesn’t give any results, but it seemed to work.

# Get a COM reference to the Network List Manager (the GUID is its well-known CLSID)
$networkListManager = [Activator]::CreateInstance([Type]::GetTypeFromCLSID([Guid]"{DCB00C01-570F-4A9B-8D69-199FDBA5723B}"))
# Enumerate the connected networks
$connections = $networkListManager.GetNetworkConnections()
# Set network location to Private (category 1) for all networks
$connections | % {$_.GetNetwork().SetCategory(1)}

Once I ran this, I needed to turn off the firewall. I found this link and ran this command:

netsh advfirewall set private state off
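On Windows Server 2012 you can do the same thing natively in PowerShell, if you’d rather skip netsh. A sketch, assuming the built-in NetSecurity module:

# Disable the firewall for the Private profile only
Set-NetFirewallProfile -Profile Private -Enabled False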


That worked, and you can see the pings succeeded.


The top image above is from the machine I was working on. The bottom one shows the ping failing from my SQL machine to the DC, and then working once I’d disabled the firewall for the private network.

Update: I originally wanted to work in the 10.x.x.x space, but I kept confusing myself, so I moved all the machines to the 192.168.1.x network.

Rinse, repeat for all machines. Eventually I have every machine pinging every other machine and able to connect.

Networking working.

VMs for Development

Here’s the scenario: you have gotten a few consulting jobs and have a couple clients. This could be your full-time employment, or a side job that you perform away from another employer. You want to watch your budget, ensure you can work efficiently, and handle whatever requirements your clients may send your way. This week’s question is:

How do you set up your virtual environment at home?

I think the idea of using Virtual Machines (VMs) is a given these days, but do you have one VM with all your tools in it? Do you use separate VMs for each client? What about licensing? Those can be complex questions for many people, especially if your employer does not provide you with multiple license keys.

I hope that you are using VMs, as having multiple physical computers isn’t practical these days, especially as the cost of power rises. A relatively small, inexpensive desktop computer can run 5, or even 10, VMs to simulate a variety of environments. I’ve seen some creative uses of hypervisors and other software to simulate clusters, SANs, and even multiple domains on one host.
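If you’re building that kind of lab on Hyper-V, spinning up a guest takes only a couple of lines. A minimal sketch; the VM name, path, sizes, and switch name are hypothetical:

# Create a small lab VM with a new 60GB dynamic disk attached to a hypothetical virtual switch
New-VM -Name "LabSQL01" -MemoryStartupBytes 2GB -NewVHDPath "D:\VMs\LabSQL01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "LabSwitch"
Start-VM -Name "LabSQL01"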

Let us know this week if you have some good tricks that can help someone get started with virtual machines, building a lab, choosing hardware, or easily configuring networking for their own learning efforts. Tell us what software you use, and if you don’t mind sharing cost data, I’m sure others would appreciate the information.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.9MB) podcast or subscribe to the feed at iTunes and Mevio.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at

Virtual Benchmarks

The consolidated server setup for the TPC-VMS

I noticed this week that the Transaction Processing Performance Council (TPC) is working on a new benchmark designed to measure workloads across virtual machines: TPC-VMS. This benchmark builds on the existing benchmarks out there (TPC-C, TPC-E, TPC-H and TPC-DS) with the idea that companies want some idea of how various hardware and software might compare in virtual environments. In this benchmark, three systems are consolidated onto one host running some type of hypervisor. If you are interested, you can read about the current benchmark, v1.1 (1.9MB PDF).

Companies choose virtualization for efficiency reasons, sacrificing some stable, known level of performance from their systems. In many cases that trade-off isn’t a problem, as we have many, many systems that are using only a fraction of their power. However as data professionals, we are often very concerned about the possible complications from virtualization. It’s our phones that ring, and each of us that gets the blame, when systems are not as responsive as users would like.

This seems to be an ambitious undertaking from the TPC, and I suspect that more than a few hardware and software vendors will be nervous about submitting their wares for evaluation. The very nature of virtualization would seem to imply that as the load increases, the performance of any particular VM might vary from test to test. I will be curious to see how they present these results and how we can interpret them.

I don’t know if the benchmarks will have any relation to the real world. As it stands today, the TPC results don’t seem to relate to actual systems in the real world, though they do confer some bragging rights for platforms. I’ve enjoyed seeing SQL Server in the various rankings, if for no other reason than to show it can perform at the same level as other RDBMSes.

Windows Server 2012 and Hyper-V

Hyper-V looks like a great candidate for almost any SQL Server with the enhancements in Windows Server 2012.

I recently went to a Microsoft event in Denver on Windows Server 2012 and Hyper-V improvements. A bunch of the information was presented by Harold Wong (b | t), and there are a number of demos and notes from the talks on his blog.

I haven’t looked much at the Windows server OSes in years, and not much at Hyper-V. I have preferred VMware for my demo/research environments, especially as I move between Windows and OSX regularly. However I’ve thought Hyper-V was rapidly improving and on the right track. I was surprised to find the new limits in Hyper-V under Windows Server 2012 to be quite high for both the host OS and the guests. You can have up to:

  • 64 virtual processors per guest
  • 1TB of RAM per guest
  • 64TB virtual disks (VHDX format)
  • 4 virtual Fibre Channel adapters
  • much more

With support for 320 logical processors and 4TB of RAM on the host, it seems as though Hyper-V is on par with VMware ESXi 5. There’s a lot more to look at than software cost, but at this time, it appears all new virtualization projects using Windows ought to consider Hyper-V.
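Those guest maximums map directly to the Hyper-V cmdlets. A sketch of sizing a large SQL Server guest up to the new limits (the VM name is hypothetical):

# Give a hypothetical guest the Server 2012 maximums: 64 vCPUs and 1TB of RAM
Set-VMProcessor -VMName "BigSQL" -Count 64
Set-VMMemory -VMName "BigSQL" -StartupBytes 1TB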

There were interesting demos on replicas, live migration, improvements in file transfers and more. They were designed to make things look good, and there’s a good marketing presentation on the capabilities. I’m sure the actual implementation isn’t as easy or smooth as in the talks, but it did make me think there’s no reason virtualization shouldn’t be considered for SQL Servers, especially as you move to newer hardware.

Steve Jones

The Voice of the DBA Podcasts

We publish three versions of the podcast each day for you to enjoy.