#2 Skill – Performing a Restore

This series of blog posts is related to my presentation, The Top Ten Skills You Need, which is scheduled for a few deliveries in 2011.

You Will Restore a Database

At some point you’ll need to restore a database. It might be a database on your local instance of SQL Server, restored to correct a problem caused by a query or a patch, but sooner or later you’ll need to restore data.

This is the counterpart to the first skill, backing up a database. A backup saves the data (and objects), and a restore brings that data back.

Restores are fairly easy, but there are a couple of things you need to learn right away:


By default the RESTORE command brings a database online by going through the recovery (redo and undo) processes. For a full database restore, this means you cannot restore additional logs afterward. You might not need to this time, but at some point you will.

So always use WITH NORECOVERY.

You need this for database mirroring, log shipping, and other scenarios, so always include it in your restore commands. To bring the database online when you are sure you are done restoring (even if this is only one restore), use the RESTORE command WITH RECOVERY, as in:
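A minimal sketch of the pattern (the database name and backup paths here are placeholders):

```sql
-- Restore the full backup, leaving the database in the RESTORING state
RESTORE DATABASE AdventureWorks
FROM DISK = N'C:\Backups\AdventureWorks_Full.bak'
WITH NORECOVERY;

-- Restore any log backups, still WITH NORECOVERY
RESTORE LOG AdventureWorks
FROM DISK = N'C:\Backups\AdventureWorks_Log1.trn'
WITH NORECOVERY;

-- When you are sure you are done restoring, bring the database online
RESTORE DATABASE AdventureWorks
WITH RECOVERY;
```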


Learn to move files

I find that many restores take place for practice, or on servers other than the original ones. In that case, the paths might not exist. Often the production servers, or the main servers you use, will have more drives than the test servers. In that case, having a file stored on the z: drive for a server doesn’t match up with a development server containing only a C: drive.

The WITH MOVE option is used to move the existing logical files in your restore to a new location. Here is the sample command from Books Online.
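A sketch of the syntax (the logical file names and paths here are placeholders; run RESTORE FILELISTONLY first to see the actual logical names stored in your backup):

```sql
-- List the logical file names stored in the backup
RESTORE FILELISTONLY
FROM DISK = N'C:\Backups\AdventureWorks_Full.bak';

-- Restore, relocating the data and log files to paths that exist on this server
RESTORE DATABASE AdventureWorks
FROM DISK = N'C:\Backups\AdventureWorks_Full.bak'
WITH MOVE N'AdventureWorks_Data' TO N'C:\SQLData\AdventureWorks.mdf',
     MOVE N'AdventureWorks_Log'  TO N'C:\SQLData\AdventureWorks_log.ldf',
     NORECOVERY;
```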

Script Restores

It’s easy to make mistakes with the GUI in SSMS. Learn to script restores and run the scripts. Even if you use SSMS to set up the restore, don’t click OK. Instead, use the Script button at the top of the dialog to generate the restore command, then review and run it yourself.


64 Bit

Today we have a reprint of an editorial from Nov 25, 2005, as Steve is on vacation.

No news yet on SQL Server, but apparently the next version of Exchange will not have 32-bit support. The reason appears to be the improvements in disk I/O because of larger caches, so Microsoft is warning users to validate, test, and prepare for Windows 2003 64-bit.

I guess it makes sense, but if the improvements from caching are really that good, why not push harder on SQL Server? I’d think it would benefit greatly as well. The demos I’ve seen definitely show improvements in performance, but the 64-bit versions aren’t really being pushed.

Maybe the reason is that so many SQL Servers are developer and personal editions on desktops and no one wants deployment issues moving between versions. I know there shouldn’t be any, but how many developers have told you there shouldn’t be any problems? After they’ve deployed something?

I think SQL Server will see more and more 64-bit installations, but they’ll be the minority of installations. Not too many people are playing with it, at least not that I’ve heard. I’d love to get some 64-bit servers for SQLServerCentral.com and upgrade to 2005 64-bit just to see how it runs. However, the investment just for kicks isn’t worth it, as we’re not stressing our SQL 2K installation at this time.

However, if anyone has any pull with Dell, HP, or even Unisys, we’re looking for a site sponsor to supply a bit of hardware. I’ve got all my servers from one vendor, but I’d definitely be looking to switch for the right offer.

Shrinking Databases

Every time you shrink a database, an angel gets its wings torn off. Or Paul Randal feels a disturbance in The Force. In other words, don’t do it.

But people continually shrink databases anyway, despite the constant advice and guidance from Microsoft, MVPs, and others. The fact that shrink is easily available in maintenance plans and via a DBCC command doesn’t help. I understand it, since it’s natural not to waste space, and so many other tools, like Access and Outlook, have files that only use the space needed. Why not SQL Server?
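For reference, the command in question is a one-liner, which is part of why it gets run so casually (the database name and free-space percentage here are placeholders):

```sql
-- Shrink all files in the database, leaving 10 percent free space.
-- This moves pages toward the front of each file and heavily fragments indexes.
DBCC SHRINKDATABASE (N'SalesDB', 10);
```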

It’s Friday, and I thought I might see if there is a solution that makes sense. For this poll, I want to ask you about shrink:

Should shrink be removed or fixed?

By removed, I don’t mean completely take it out of SQL Server, but make it harder. Maybe require a trace flag, maybe something else that might reduce the regularity with which it’s run. I’d certainly recommend it be completely removed from maintenance plans, both the wizard and the designer.

Is there a better solution, however? Should perhaps shrink be fixed to be a more intelligent operation that doesn’t cause lots of fragmentation? Even if it’s slower, or maybe an operation that requires more resources to complete, would it be better to actually “fix” shrink?

Put your answers in the discussion and let us know what you think.

Steve Jones

Certified Private Clouds

Cloud computing is becoming more and more popular, or at least, more and more talked about in the tech press. Like many of you, I am not sure that I want to run my database servers from the cloud, but that might change. I ran across a note that said Microsoft is looking to certify private cloud stacks with a number of vendors, similar to the Fast Track program that they implemented for data warehousing.

For some time now I’ve felt that to really get large scale adoption of the Azure cloud framework, Microsoft really needed to have a way for corporate IT departments to build and deploy to private, internal “cloud services.” Clouds they could build, expand, and most importantly, secure, according to their own requirements, on their own premises. The idea of trusting a central provider seems foreign to many companies, especially when governments could subpoena your information or the cloud provider could shut down your business. To what extent you are more protected on premises is unclear, but many companies do not want to trust a third party to safeguard their rights.

The idea of a service that you can deploy to, connect to, and use to abstract your applications away from specific machines is one that makes sense. We constantly build abstraction layers to simplify the interconnections of our systems. The “cloud”, whether public or private, is another extension of this. I know I’d like to be able to manage one large SQL Server cloud, rather than individual machines.

Actually, it seems like a lot of the work I’ve done over the years has been to try to make all my instances appear as one single group that I can manage together. Kind of my own type of “cloud.”

Steve Jones


We are always improving our computer hardware. It used to be that our CPUs increased their clock speed regularly; then we started to add additional cores. Along with large increases in memory and disk technology, there has been a quest to lower the response time of our systems. However, many of our computers don’t actually seem to be getting quicker.

Are our database systems responding quicker? We have better search and join algorithms, hardware that transfers data quicker, and more knowledge about how to index our tables. I know all that information might not always be used in many systems, but we can strive to use the DMVs and DMFs to learn more about our databases and tune them better.

But do they respond quicker? In many cases they don’t, and part of this is the continual growth of data. It seems we accumulate more and more data, at faster rates each year. There is also the ever-increasing level of overhead we seem to add to each piece of software we use. The bloat of software, often with frameworks designed to make the developer’s job easier, results in systems that aren’t any more responsive than they were in the past.

I do wish software responded quicker, but I also love the richness of so many applications and web sites that I see. I think overall we’ve improved the experience of working with software by adding new features and capabilities over the years, especially in SQL Server.

Steve Jones

The Need for New Technology

Do we need specialized technology for the large scale analysis of data? I read a report (registration required) that talks about the need for specialized technologies to handle the needs and requirements of complex data analysis. As is often the case, this report is written by someone who isn’t very technical, and is likely using information put out by the companies building these specialized technologies. However, that doesn’t change the fact that the premise is valid.

Just recently the Parallel Data Warehouse edition of SQL Server was released, providing a specialized and extremely powerful platform for large scale data warehouses. The release of PowerPivot, as well as all the BI enhancements added in SQL Server 2008 R2, means that you have a diverse platform that isn’t just built as a “transaction stack” processor.

As technical people, should we care about articles like this? Absolutely we should, since these articles often influence the executives who might end up signing a purchase order for some other technology. If management is going to spend more of the technology budget in your area, don’t you want to try to influence them to spend the money wisely? That means you ought to understand what is being written and be able to talk intelligently about the choices.

SQL Server isn’t necessarily the best solution for all environments, and it’s not necessarily even the platform on which you want to build your BI system. However SQL Server has an amazing array of capabilities, and it is a product you should consider.

Steve Jones

Help with Performance

One of the things that so many people working with SQL Server struggle to do is identify what is happening on their servers. Despite all the DMVs, DMFs, and other tools available, I regularly see questions posted about the trouble people have with digging into SQL Server and finding out what is wrong. There is so much information available that often people are not sure where to start looking.
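Even without third-party tools, a simple query against the DMVs gives you a starting point by showing what is executing right now. A sketch (the session_id filter and the column choices are my own):

```sql
-- What is running right now, with wait information and the statement text
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;  -- skip most system sessions
```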

In the past few weeks I noticed two great posts that should help you to better understand what is happening on your SQL Server. One is about the release of “Who Is Active v 10” from Adam Machanic, which is a great tool. The other is a series of queries from Glenn Berry that give you a lot of diagnostic information about your instance. Both of these are tools that you should add to your toolbox and learn to use.

In addition to having tools to dig into what might be going on right now in your instance, monitoring your server instances is also critical for being able to determine what kinds of problems you have. After all, if you don’t know what’s normal, how can you tell what’s not normal? Many of the consultants I know have to spend hours at new clients getting a feel for the environment because they’re not familiar with it. You have no such excuse for your own servers.

The sign of a good DBA is having monitoring in addition to knowing how to troubleshoot performance issues. A great DBA will also use this information to look ahead and anticipate problems.

Steve Jones

If you don’t want to build your own system, be sure to take a look at SQL Monitor, a new tool from my employer, Red Gate Software. It can help you keep track of multiple servers and provide you the history and reporting to understand what is “normal” for your instances.

Security by Obscurity

This is an editorial reprint from Aug 23, 2005.

I wrote a while back about security through chaos, and that piece provoked some interesting responses. While I’m not sure I’d recommend it for every company, in some places it makes sense. I saw this InfoWorld article on Security by Obscurity and it reminded me of what I’d written.

The article talks about some basic things you can do that don’t seem like much, but the suggestions obscure things and ensure that not much on your system is where it would be expected. One simple thing they talk about is not installing to the default locations. That doesn’t sound like it would help much, as there are always ways to read the registry or use environment variables to find installations.

However, it does work. How many pieces of software, including some SQL Server Service Packs, expect things to be installed on C:? How often have you been bitten by a “bug” in some software because you’d renamed or moved something?

Computer software depends on patterns in many cases to work, and we all use patterns to shorten development time. We reuse code, we cut and paste way too much, and we often forget to make simple checks for things being moved around.

The same goes for virus and worm writers. The people who develop the technology might not be fooled, but many of the script kiddies who use kits or modify some piece of code aren’t as savvy and don’t necessarily make these checks. I know that the administrator account has a particular SID that you can scan for, but I’d be willing to bet that most people would write a worm looking for “administrator”. Just think how much less of a problem SQL Slammer would have been if most people had moved SQL Server to some non-default port.

Simple obfuscating changes aren’t the answer to security issues, but they provide another layer of protection.

Little Devices

It seems that I get more and more done with my smartphone all the time. I find myself responding to lots of emails, having text or Twitter conversations, taking pictures, reading, even making notes on my iPhone. I can’t imagine ever going back to a non-smartphone device. Whether it’s an iPhone, Android, or WinMo 7 device, I have found that they make me quite productive throughout the day.

I was just starting to get away from plain phones when I last worked for a large corporation. These days, I see many of my friends sporting smart phones, and getting work done on them. I even have one Windows administrator friend who carries an iPad around work, RDP’ing into servers to handle small tasks.

So I wonder how many database people can get things done. Since it’s Friday, I thought that it would make an interesting poll:

How much of your work could you get done from a mobile device?

I’m wondering if there is a chance that you could get a good portion of your administration, monitoring, or even daily work done from some device that isn’t a full-size laptop. It could be a smartphone (Android, iOS, WinMo) or even a tablet (iPad/Galaxy) that allows you to actually knock things off your to-do list when you are away from your desk or don’t have a laptop.

If you’ve never used a mobile device for doing work, please don’t complain about them. They are great devices and they’ve made me very productive when I can’t sit at a desk or carry a laptop. I think many others agree, and I’m interested in seeing how you use mobile devices.

Steve Jones

If you’re a DBA, you might want to check out SQL Monitor, from my employer, Red Gate Software. Maybe your boss will let you get a copy and an iPad to manage your servers.

Google Scale

I heard someone talking the other day about how Google needs to optimize their code. They really pay attention to low level operation of their software, actually having people spend time writing compilers and ensuring they generate efficient code. This is on top of the idea that they want to ensure data structures efficiently use space, there is no unnecessary network traffic, etc. The reasoning is that at “Google-scale” (Gs) if something takes 10% longer, that’s 10% more electricity they have to supply, 10% more machines, and 10% of the Google IT infrastructure budget is a big, big number.

I’ve never worked at Gs. In fact, I’ve never worked at 0.001Gs. Most of my work in software development has been to support dozens, or maybe hundreds, of users. SQLServerCentral, which was a joint programming project, was my largest piece of software, and that was built on other pieces of software, with little contribution from me and lots of Andy Warren’s keyboard sweat in the early years. Despite working on small projects, I have still always learned to consider the little things.

When you grow up with compilers that take minutes, as in 10 minutes to compile a few hundred lines of code, you pay more attention to your algorithms. When you grow up with KB of memory, you use it wisely. When you work across a 300 baud modem, you don’t mess around with lots of data transfer. Even though many of those constraints no longer apply, I still try to be careful and watch my data types, watch my round trips, and stay aware that there are actually bits being whipped across the ether, or in and out of transistors in a not-so-little package on my computer system.

The argument about “just buying more hardware” has some validity, but it’s not an excuse for neglecting your craft. As my friend Jeff Moden often quotes, it doesn’t take any more time to do it right the first time. That’s true, but you need to know how to do it right, or at least well. For all of us, that should mean some regular effort to improve our skills and become better at our craft, just like Google looks to constantly optimize its own systems.

Steve Jones