Multiple Backup Files

I’ve been writing a little PowerShell lately that will back up databases, move files, and restore them. I’m often testing things, and having scripts to quickly and easily move files is very handy. After one of my posts recently, a reader asked if I’d considered multiple files and how to handle them in scripts. I confessed I hadn’t, mostly because I haven’t had to deal with them.

In my career, I’ve tended to work with small to medium sized databases. I’ve had young children for a large portion of my SQL Server career and no desire to babysit multi-hour (or multi-day) restores when things break. I know some people have been through those situations, and good for them. I just know I’ve had enough issues with low-GB sized databases, and I haven’t been interested in supporting TB sized systems.

However, that likely won’t be the case in the future. More and more organizations, even small ones, are collecting and storing data that reaches into the terabytes. The rapid advances in sensors, development tools, and cheap storage mean that many people are dealing with hundreds of GB in at least one of their databases. That means that for a reasonable RTO, quick backups, and good performance, multiple backup files are becoming a necessity.
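If you haven’t tried the feature, striping is just a matter of listing more than one file in the backup command. Here’s a minimal sketch; the database name and paths are examples only, and remember that you need every stripe in order to restore:

-- Striped backup: SQL Server writes to all listed files in parallel.
-- Database name and paths are illustrative only.
BACKUP DATABASE Sales
 TO DISK = 'D:\Backups\Sales_stripe1.bak',
    DISK = 'E:\Backups\Sales_stripe2.bak',
    DISK = 'F:\Backups\Sales_stripe3.bak'
 WITH INIT, CHECKSUM, STATS = 10;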

Is that really the case? Data volumes are exploding, but how many of you are using this feature? I wanted to see how many people have implemented, or at least considered, striped backups. The poll this week is:

Do you have any databases that benefit from backup to multiple files?

I’ve consulted with clients that accidentally produced striped backups and then lost one of the files. That’s never a good situation, and it’s bad news to have to give as a consultant. However, I’m sure many of you have consciously implemented striped backups because they can perform better than single file backups and make for quicker restores. Others of you may suspect (or have tested) striped backups will help your systems but haven’t gotten around to setting things up.

Let us know if this is a common feature you use, or if it’s still something esoteric that you have no need for. And if you have put striped backups in place, have you tested a striped restore? I certainly hope so. Let us know either way.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

Powershell – Copy the Latest Backup

I got an email recently where someone asked me how they could refresh a dev environment with Powershell. I guess I’d written something about this in 2009, though that would have been for testing, as Red Gate had already banned me from development on SQLServerCentral by that time.

I dug around and came up with a few partial scripts and cleaned them up for these posts. This post will look at getting the backup and a later one will examine the restore.

Finding the Latest Backup

I’ll assume that you make backups on a known path somewhere. My philosophy is that I want the machines to stand alone as much as possible. That means that I don’t want the source machine (the one making the backup) to be working on refreshing the backup elsewhere. I want a pull system.

For a high level overview, this process looks like this:

  • Search the backup path for files matching a pattern.
  • Find the most recent one, based on date.
  • Copy that most recent file to another location.

For the sake of consistency and ease, I want to copy the file to the same destination name every time. That will simplify my restore process, which I could then easily do in T-SQL.

Let’s examine how to do this. I’ve got a folder with a few backups in it.


For my PoSh, I’ll start by setting a variable to the path.

$backuppath = "D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup" 

Once I have this, I can now look for the files in this path. To do that, I’ll use Get-ChildItem.

get-childitem -path $backuppath

This will return a list of the files. That’s what I want, but I want to limit the files to a pattern. In this case, I’m looking for .bak files from the EncryptionPrimer database. All of these files start with the database name, so a simple wildcard filter will handle it.

There’s nothing special about what I do here; it’s covered in plenty of places. I don’t have this running in an environment currently, as someone else manages that process, but here’s the process I’ve followed in the past:

  • Find the latest backup (whatever the date) in the source folder.
  • Copy this with a set name to the destination folder, overwriting previous backups with the same name.
  • Restore the known name to the development database, moving files as needed.

I’ll go through each of these steps in my PoSh script.

Find the Latest Backup

This is fairly easy. I’ll use the Get-ChildItem cmdlet, with a technique I found in a StackOverflow post. I’ll use a variable for the path I need, and then check the path.

$backuppath = "D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup"

get-childitem -path $backuppath

That works well, but since I’m building a process for a specific backup type, I’ll add a filter.

$backuppath = "D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup"

get-childitem -path $backuppath -Filter "EncryptionPrimer*.bak"


To find the latest backup, we’ll pipe the output through the Where-object filter, removing folders. Then we use sort-object to order things by creation date and select-object to get just the one file.

$backuppath = "D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup"

get-childitem -path $backuppath -Filter "EncryptionPrimer*.bak" |

    where-object { -not $_.PSIsContainer } |

    sort-object -Property CreationTime |

    select-object -last 1


The last part of the script is the Copy-Item command, which again receives the piped output. We give it a standard destination name and path (another variable).

$backuppath = "D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup"

$destpath = "d:\SQLServer\Backup"

get-childitem -path $backuppath -Filter "EncryptionPrimer*.bak" |

    where-object { -not $_.PSIsContainer } |

    sort-object -Property CreationTime |

    select-object -last 1 | copy-item -Destination (join-path $destpath "EncryptionPrimer.BAK")

Once this is done, we can restore things. I learned how to do this in PoSh from a blog post.

However, since I have a standard backup file name, I’d probably do this in T-SQL and set up a job that I can just run anytime. It’s simpler and easier, and since most of the time I’d want to do this from SSMS, a job works well.
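As a rough idea of what that job step might run, here’s a sketch of restoring from the standard file name. The dev database name, logical names, and target paths are assumptions you’d replace with your own (RESTORE FILELISTONLY against the file will show the real logical names):

-- Refresh the dev copy from the standard file name.
-- Logical names and target paths below are examples only; check them with
-- RESTORE FILELISTONLY FROM DISK = 'D:\SQLServer\Backup\EncryptionPrimer.BAK'
RESTORE DATABASE EncryptionPrimer_Dev
 FROM DISK = 'D:\SQLServer\Backup\EncryptionPrimer.BAK'
 WITH MOVE 'EncryptionPrimer' TO 'D:\SQLServer\Data\EncryptionPrimer_Dev.mdf',
      MOVE 'EncryptionPrimer_log' TO 'D:\SQLServer\Logs\EncryptionPrimer_Dev.ldf',
      REPLACE, RECOVERY;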

Here’s the PoSh script.

Import-Module "SQLPS" -DisableNameChecking

$sqlsvr = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList "JollyGreenGiant\SQL2012"
$BackupFile = "D:\SQLServer\Backup\EncryptionPrimer.BAK"

#echo ""
#echo "Databases"
#echo "---------"
#foreach ( $db in $sqlsvr.Databases) { write-host $db.Name }

echo " "
echo "Begin Restore"
echo "============="

$Restore = New-Object "Microsoft.SqlServer.Management.Smo.Restore"
$Restore.NoRecovery = $false
$Restore.ReplaceDatabase = $true
$Restore.Action = "Database"
$Restore.PercentCompleteNotification = 10

# Point the restore at the backup file
$BackupDevice = New-Object ("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($BackupFile, "File")
$Restore.Devices.Add($BackupDevice)

# Read the header and file list so we know the database name and logical files
$RestoreDetails = $Restore.ReadBackupHeader($sqlsvr)
$logicalFileNameList = $Restore.ReadFileList($sqlsvr)
$Restore.Database = $RestoreDetails.Rows[0]["DatabaseName"]

# Relocate each file to the physical path recorded in the backup
foreach ($row in $logicalFileNameList.Rows) {
    $RestoreDBFile = New-Object ("Microsoft.SqlServer.Management.Smo.RelocateFile")
    $RestoreDBFile.LogicalFileName = $row["LogicalName"]
    $RestoreDBFile.PhysicalFileName = $row["PhysicalName"]
    $Restore.RelocateFiles.Add($RestoreDBFile)
}

# Run the restore
$Restore.SqlRestore($sqlsvr)

write-host ("Completed the Database Restore operation for database " + $RestoreDetails.Rows[0]["DatabaseName"] + " on server " + $sqlsvr.Name)


That’s it. I ran this a few times, and it worked well. A handy script to get the last backup and have it ready in a dev/test environment.

Backup Your Certificate for TDE

If you’ve enabled TDE, you need to be sure you have a copy of the certificate that protects the Database Encryption Key (DEK). If you follow my instructions, then you have one.

If you didn’t make a backup, or you have just discovered a TDE database, make one now, and secure the password you use with your DR materials (off site).

How do you make a backup? That’s easy. Use the BACKUP CERTIFICATE command. Here’s the command I use in demos:

USE master;
GO
BACKUP CERTIFICATE TDEPRimer_CertSecurity
 TO FILE = 'tdeprimer_cert'
 WITH PRIVATE KEY
 ( FILE = 'tdeprimer_cert.pvk',
   ENCRYPTION BY PASSWORD = 'AStr0ngB@ckUpP@ssw0rd4TDEcERT%');
GO


The certificate for TDE is in master, so you must make sure you’re in master for the backup. The TO FILE option lets you choose the file path. By default, this will be in the DATA folder for your instance, but you can choose other locations. You can give the file an extension if you like. This file is the public portion of the certificate.

There is a private key portion of the certificate, which is backed up with the WITH PRIVATE KEY portion of the command. This is where you specify the password that protects your certificate backup.

You will need this password on restore, so keep track of it.

Restoring a TDE Database on a New Instance

You’ve enabled Transparent Data Encryption (TDE) on one of your databases, and the server has failed. How do you get this database working on a new instance? This short post will show you how this works.

Files Needed

There are two files you need in order to restore the database.

  1. A full backup of the TDE database
  2. A backup of the server certificate that protects the Database Encryption Key (DEK).

You might have multiple files for the backup, and potentially other backup files (diff, log), but the process for those will be the same as any other restore once you complete this process.

If you have multiple full backup files (striped backup), just include them in the restore command as you normally would.
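As a quick illustration of the syntax (the file names are placeholders, and the certificate steps below still come first), a striped restore just lists every file in the set:

-- Every file in the stripe set must be listed, or the restore fails.
RESTORE DATABASE EncryptionPrimer
 FROM DISK = 'D:\SQLServer\Backup\EncryptionPrimer_1.bak',
      DISK = 'D:\SQLServer\Backup\EncryptionPrimer_2.bak'
 WITH RECOVERY;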

Prepare the New Instance

To prepare the new instance for the restore, you need to ensure that you have a database master key (DMK) in the master database. You can check by querying the sys.symmetric_keys catalog view in master; the DMK shows up with the name ##MS_DatabaseMasterKey##.

SELECT * FROM sys.symmetric_keys


If you have a key, that’s fine. If you don’t, you can create one like this:
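-- The password here is only a placeholder; use your own strong password
-- and store it with your DR materials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'APl@ceholderDMKP@ssw0rd!';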


Now you need to restore the certificate from your source instance. You do this with the CREATE CERTIFICATE command, and the FROM FILE option. You’ll typically find the backup from your BACKUP CERTIFICATE command in the DATA folder for SQL Server if you didn’t specify a complete path.

You do have a backup, right?

For me, I’ll run the create certificate command:

CREATE CERTIFICATE TDEPRimer_CertSecurity
 FROM FILE = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\tdeprimer_cert'
 WITH PRIVATE KEY
 ( FILE = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\tdeprimer_cert.pvk',
   DECRYPTION BY PASSWORD = 'AStr0ngB@ckUpP@ssw0rd4TDEcERT%');
GO


This works, and the certificate now exists on the new instance.
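If you’d rather confirm that from a query window, the certificate should now show up in sys.certificates in master (the name here matches my demo certificate; yours will differ):

-- Check that the restored certificate exists in master
SELECT name, pvt_key_encryption_type_desc, expiry_date
FROM sys.certificates
WHERE name = 'TDEPRimer_CertSecurity';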


Now I’m ready to restore the TDE database.


I select my backup file, and everything proceeds as a normal restore. How do I know the certificate worked?

Because the restore dialog shows “Ready” at the top.


If I didn’t have the certificate on the instance, the dialog would show an error instead of the “Ready” message. Double clicking that error brings up the detail, which complains that the server certificate protecting the database cannot be found.
If the certificate is not on the instance, then the server cannot decrypt the DEK and restore the database.
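If you prefer T-SQL to the restore dialog, once the certificate is in place this is just an ordinary restore; something like the following, using the path from my example (add MOVE clauses if the data and log locations differ on the new instance):

-- An ordinary restore works once the certificate is on the instance.
-- Add MOVE clauses if the data/log paths differ on the new server.
RESTORE DATABASE EncryptionPrimer
 FROM DISK = 'D:\SQLServer\Backup\EncryptionPrimer.BAK'
 WITH RECOVERY;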

Hope this helps, and if you use TDE, make sure you can do this.

The DBA Team #1–Code and Slides

Our first DBA Team event, in Richmond, VA, just before SQL Saturday #187, went well. Overall I think our experiment was a success, and we’re already talking about where and when we might do this again.

In the meantime, we didn’t make a separate site for this series of events, being an experiment and all. I’m adding this post as a placeholder for the various slide decks and code.

Core Monitoring for SQL Server (Steve Jones)

Good DBAs ensure that they are always aware of the state of their instances. All systems should have monitoring in place, not just so you know when things go wrong, but so you understand what a normal workload looks like and can plan for the future. This session will cover the basics of monitoring a SQL Server system and the various metrics you should be tracking.

Getting Started with SQL Server Backup (Grant Fritchey)

Backups are fundamental to protecting the investment your business has in its data and they’re the foundation of disaster recovery planning. We’ll go over best practices for database backups, to ensure you’re establishing that foundation correctly within your systems. This introductory level session covers full, log, and differential backups, as well as restores and restores to a point in time. Come along to be sure you’ve got the right protection in place for your systems.

Understanding Database Corruption (Grant Fritchey)

A DBA’s primary purpose is to ensure that the information in their charge is accessible by the correct people within their organization. Despite everything you do to make sure you’ve got your servers configured, monitored, and tuned, with well-tested backups in place, you can still lose data through corruption. But what is corruption in a database? This session lays out exactly where database corruption can come from, how to find out exactly where the corruption is within a database, and  the methods you have available to recover from database corruption.

Indexing for SQL Server (Steve Jones)

Indexes are important for improving the performance of your queries, but they add overhead to your server and require maintenance. This session examines how indexes work and the basic maintenance that you should perform to ensure your system is running at its peak level.

The Backup Process

This used to be my backup process. Five copies, one for each day of the week.

I was reading Scott Hanselman’s post on The Computer Backup Rule of Three, and I completely agree with him. If things are important, three copies are the minimum you should keep. I have had backups before, but I’ve found that when something breaks, like a tape drive or hard drive, I might not get around to replacing it right away. Usually this is procrastination, or as it’s more commonly known, laziness. During that window of opportunity, I’ve had Mr. Murphy strike. My inherent paranoia about keeping that second extra copy has saved me, both personally (at home) and professionally (at work).

All too often I run into DBAs who simply trust that their backups and processes are reliable. After setting up a backup process, usually to disk, and testing it with a restore on another system, they assume it will work in the future. It should, but there is always the chance something will fail at some point. A hardware failure or software reconfiguration could break your process, and there’s also the simple chance that your I/O system might introduce corruption into your backup files.

If you aren’t regularly testing your processes, you can’t be sure they are still working as they have in the past. The ideal environment would perform a restore of every backup file taken, every day, but that isn’t always possible. However, restoring a random backup once every month or two will help ensure your backup and restore processes have the best chance of succeeding when you actually need them in a disaster.

This isn’t hard to script and automate, but it is something you need to do periodically. However, even if you automate restores of all your databases, make sure you still practice your manual skills regularly. Automation might not work in a disaster situation, but those quarterly practice sessions restoring a database to a point in time with a tail log might just keep you calm when a crowd in your cube is asking if the database will be back up soon.
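If you want a starting point for automating the check, here’s a rough sketch of verifying a backup and restoring it under a throwaway name. The database, paths, and logical file names are all assumptions to replace with your own:

-- Verify the backup file is readable, then restore it to a scratch database.
-- Paths and logical file names are examples only.
RESTORE VERIFYONLY FROM DISK = 'D:\SQLServer\Backup\Sales.bak';

RESTORE DATABASE Sales_RestoreTest
 FROM DISK = 'D:\SQLServer\Backup\Sales.bak'
 WITH MOVE 'Sales' TO 'D:\SQLServer\Data\Sales_RestoreTest.mdf',
      MOVE 'Sales_log' TO 'D:\SQLServer\Logs\Sales_RestoreTest.ldf',
      RECOVERY;

-- Check the restored copy for corruption, then drop it.
DBCC CHECKDB ('Sales_RestoreTest') WITH NO_INFOMSGS;
DROP DATABASE Sales_RestoreTest;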

Steve Jones

The Voice of the DBA Podcasts

We publish three versions of the podcast each day for you to enjoy.

Home Backup for the DBA

I’ve been giving a Prepare for When Disaster Strikes talk at SQL in the City this year, and it’s designed to get you to think about problems before they happen and take steps to mitigate issues. It’s important at work, but it’s also important at home.

If you read the story of the Amazon/Apple hack, you might worry about the security of your information, and you should. However, I was more concerned about the data loss, specifically the pictures, when I read that account. I take a lot of pictures, all of them digital these days, along with various writings and videos, and I worry about preserving those for my own memories and for my kids. I don’t trust Facebook/Google/etc. to store them, so I needed my own solution.

I installed Windows Home Server (WHS) a few years back on an old Dell Dimension E520. It was a good workstation, and plenty powerful to run the previous version of WHS (prior to WHS 2011) on its AMD CPU. It worked well, but across two years I had three boot drive failures. Each time I needed to not only replace the drive, but reinstall WHS and then copy my files off and back onto the data drives. A PIA.

I’d looked at Drobo and other NAS type solutions, but none were very automated, and they were rather expensive. I had a desktop, a laptop, my wife’s laptop, the kids’ iMac, and sometimes another machine to back up. After my third boot drive failure, and the server sitting idle for a couple of months, I decided to virtualize my WHS.

I had planned on using Win7 for a host, but the older desktop didn’t seem to want to install it with my new RAID card in it. Rather than futz with it, since it wasn’t critical and isn’t connected to the Internet on my home network, I installed WinXP and then VirtualBox. Inside of VirtualBox, I installed the older WHS software, putting my data on separate virtual drives that are protected by RAID. I used two separate arrays, which should give me some protection if one of them fails. The host WHS drive is also on a RAID array, which should give me some protection from drive failures.

And if my boot drive fails, I replace it, install WinXP and VirtualBox, and then run my VM without messing with the data.

I like WHS as a central place for us all to share pictures, video, and music, and for backups. I’ve recovered a few files from the system that I had accidentally deleted, and I know all our machines are protected right now.

It’s not perfect, and I really need an offsite solution for a second backup since I’m a DBA. I get a little paranoid about restoring things, and I know one copy isn’t good enough.

Where’s My Certificate Backup?

If you’re like me, you take advantage of the default backup paths in SQL Server. It makes my code cleaner, and if I need to move the instance somewhere else, all my code works. No pathing issues.

A certificate backup might look like this for me:

USE master;
GO
BACKUP CERTIFICATE SteveCert  -- the certificate name and password here are placeholders
 TO FILE = 'SteveCert'
 WITH PRIVATE KEY
 ( FILE = 'SteveCertPrivateKeyFile',
   ENCRYPTION BY PASSWORD = 'APl@ceholderCertP@ssw0rd!');
GO

If I run this and immediately go to my backup folder, sorting the files by last modified date, I don’t see anything new.


No certificate backup file. What happened?

The answer is actually documented, and you should be aware of this. In the BACKUP CERTIFICATE page, it says this: “The default is the path of the SQL Server DATA folder. “

That’s interesting, and it makes sense to me. The data folder is more likely to be secured than the backup folder, where developers and who knows who else may have access. By keeping the file in the data folder, you get a little obfuscation, perhaps more protection, and you force the administrator, the DBA, to go get the files.

However, the files are also ACL protected. If I go to my data folder, I see the files there.


If I select the certificate file and press CTRL+C (copy), then go to the backup folder and try CTRL+V (paste), Windows stops me.


The service account has permissions to this file, not administrators, by default. The copy triggers a UAC prompt, requiring me to make a conscious decision to make this copy.

Of course, I can just provide a path to make sure I can find the file.

USE master;
GO
BACKUP CERTIFICATE SteveCert  -- same placeholder certificate name and password as above
 TO FILE = 'c:\SQLBackup\SteveCert'
 WITH PRIVATE KEY
 ( FILE = 'c:\SQLBackup\SteveCertPrivateKeyFile',
   ENCRYPTION BY PASSWORD = 'APl@ceholderCertP@ssw0rd!');
GO

Whatever you do, make sure you back up your certificate files and keep them safe. If they’re lost, you lose data.

The Home Server is Back

After reading about the Amazon/Apple attack this morning, I grabbed a second cup of coffee and went in search of a monitor cord. We’d given a monitor to a friend recently, which was the one rarely used for the Home Server. It wasn’t a big deal since we always log into the Home Server remotely, but we had a power outage and the server went down. Since it’s virtualized, I needed a console to get it started again.

After getting a monitor connected and my virtual server set back up, I immediately kicked off a backup. It’s been down a few weeks, and I’d hate to lose a bunch of work if something happened to the other machines on my network.

I also enabled remote desktop access, so that I can more easily restart things in the future.

Back up your systems. You never know when disaster will strike, and it could be something as cruel as a hacker or a virus wiping your system. As data professionals, we should be more cognizant of the impact of losing information.