Backups Over Time

I’ve written about backups at various times, including responsibility for them, problems with backup systems, and protecting data at home. This subject is important to me, as I consider the responsibility for protecting data to come first and foremost. Security, performance, and more all matter, but backups and restores are the most important items to pay attention to.
As data professionals, whether looking at organizational systems or our personal data, we will find that our backup strategies change over time. We may also find that systems come and go, and it can be easy to forget about older systems. I know I’ve had to track down tapes and restore decommissioned systems years after they were reformatted or powered off.
I ran across an interesting post from someone who went through their old backup media and moved it all to newer media, as well as cloud storage. While I’m not sure that I really want to go through old hard drives and keep old code or data, it’s an interesting exercise to think about.
Do you worry about losing data from old backups? This probably applies more to home systems than organizations, but perhaps you have regulatory requirements to keep seven (or more) years of backups around. Maybe you want to be sure that your old code, projects, pictures, and more are saved elsewhere. Maybe you even want to ensure that newer formats of media are being used.
What techniques might you use to accomplish this? I know I have a Windows Home Server that receives copies of pictures/music/video and a Crashplan account that backs up various machines in the house. That seems to work well, though I do consider taking those pictures/video and putting them on DVDs for long-term storage out of the house. I’m hoping that .jpg and other formats don’t go out of popularity anytime soon.
Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

Converting SQL Backup Files to MTF

I got a copy of some backup files recently and needed to restore them. However, I don’t have SQL Backup installed on all my instances. It’s not a big deal to install it, but since this is a one-off, I decided to just decompress the files.

The documentation for SQL Backup Pro notes that you will have the file converter installed with your SQL Backup installation. The path given is the default, but if you’re like me, you need to track down the actual path. Mine is actually on my E: drive, which means I need to use the full path to call the program.

The converter is a command line tool, so I need to open a command prompt and type the full path to the program. I could do this many ways, but I started by going to the folder with my .sqb files. I then typed this:

[Screenshot: the sqbconverter command run from the folder containing the .sqb files]

This actually worked, but it then put the decompressed files in the folder with the SQL Backup utility. I then adjusted the command to this:

"E:\Program Files\Red Gate\SQL Backup 7"\sqbconverter FULL_INS1_SQLServerCentral_20160210_000500.sqb "H:\SQL Server\Backup\sqlservercentral_20160209.bak" MyPassword
This command is actually shown in the help if you type sqbconverter with no parameters. The format is the .exe, then the input file (the .sqb), the output file (the .bak), and the password.

The utility gives you the progress as the files are decompressed.

[Screenshot: sqbconverter progress output as the files are decompressed]

Depending on the number of threads used to create the backup, you’ll end up with multiple files. In my case, 7 files.

Now I can restore each of these as a normal, striped backup.

Restoring a Striped Backup–#SQLNewBlogger

Another simple post for me that hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

Recently I uncompressed some SQL Backup Pro files. Since multiple threads were used to make the backup, I ended up with multiple files in my file system, as shown here:

[Screenshot: the seven decompressed .bak files in the file system]

Each of these is part of a striped backup; each file is one piece, or stripe, of the full backup. To restore, I need all of the files so the backup can be reassembled. This is fairly simple, but you should be aware of how it works and how to perform the restore.

In my case, you can see I have 7 files for each database. They share the same name, with an _0x suffix at the end, where x is the number of the file.

To restore these, I go to SSMS and click the restore choice. That gives me my restore database dialog, where I can select that I’m restoring from devices. As you can see below, no devices (files) are selected.

[Screenshot: the Restore Database dialog with no devices selected]

I then click the ellipsis to get a dialog allowing me to add files.

[Screenshot: the Select backup devices dialog]

Hopefully you’ve done this before; you can click “Add” to add files. You’ll need to navigate to the location of your backup files if it isn’t the default.

[Screenshot: the Locate Backup File dialog]

Next you can select the files. Holding down CTRL, I can multi-select files.

[Screenshot: multi-selecting the backup files in the Locate Backup File dialog]

Once I pick them, I click OK and then I see them all in the device dialog.

[Screenshot: all seven files listed in the Select backup devices dialog]

Now I click OK, SQL Server reads the headers, and I end up with a single database to be restored, as shown below.

[Screenshot: the Restore Database dialog showing the single SQLServerCentral database to restore]

Now, I can click OK, and often do in development areas. HOWEVER, if you are doing this in production, please, please, please, click the Script button instead. You’ll get a new query window, and you can cancel out of this dialog to see the code.

[Screenshot: the scripted RESTORE statement in a new query window]

From here, you should save this script and then execute it.

As the script shows, the statement is simple: list each disk file, separated by commas. The rest of the RESTORE statement is a standard restore.
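
As a sketch, with the paths and exact stripe names assumed, the scripted restore looks something like this:

RESTORE DATABASE SQLServerCentral
 FROM DISK = N'H:\SQL Server\Backup\sqlservercentral_20160209_01.bak',
      DISK = N'H:\SQL Server\Backup\sqlservercentral_20160209_02.bak',
      -- one DISK entry per stripe; seven in this case
      DISK = N'H:\SQL Server\Backup\sqlservercentral_20160209_07.bak'
 WITH RECOVERY;
GO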

SQLNewBlogger

This is a fairly simple task, one I’ve done dozens of times, but it’s worth practicing. If you want to write about it, what happens if you’re missing a file? What if you change the order of the files? This was a 10-minute blog, and it’s a good chance to play and practice your restore skills, which are as important as anything you do as a DBA.

The Writeable Warm Standby

I saw a question recently that went like this: I get one full backup from FTP. I’ll get daily log backups through FTP after this, but never another full. I need to restore this daily log backup and allow the group receiving the database to read/write the copy, and then reset it with the new log backup overnight.

First, this is untenable. At some point you’ll have an issue with a transfer, lose a log, or the database will become corrupt. I can guarantee you that at some point you will need another full backup. Not every week, or even every month, but you will need one.

Second, this is a tough situation. I saw some answers, which I agreed with, but I started thinking about ways to get that data moved. My first thought was to use STANDBY and move the data every day to a clean database. I’ve done this before, and in the GB range, even 100s of GB, this can work. It helps if you can whack indexes and constraints on the destination, but a copy of data table-by-table goes fast.
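
A rough sketch of that kind of daily copy, with made-up database and table names:

-- Run after each night's log restore. The standby copy is read-only,
-- so move the data into a clean, writeable reporting database.
TRUNCATE TABLE ReportDB.dbo.Orders;
GO
INSERT INTO ReportDB.dbo.Orders (OrderID, CustomerID, Amount)
 SELECT OrderID, CustomerID, Amount
 FROM StandbyDB.dbo.Orders;
GO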

Then I thought about other ways. You can’t take a backup of a standby database, nor can you take a snapshot. While searching, however, I saw an answer to this post on SO.

TL;DR: copy the mdf/ldf to a new database.

That was interesting, so I decided to test it. Turns out, it works pretty well. HOWEVER, it’s dangerous, and I think you should be very careful about this. I wouldn’t count on this being production stable, and certainly not data stable. You’d better have other copies of this data.

Here’s what I did. First, create a database with some data.

CREATE DATABASE mydb;
GO
USE mydb;
GO
CREATE TABLE mytable(id INT);
GO
INSERT mytable SELECT 1;
GO

Next, let’s back this up, get a standby copy in place, take that offline, and then copy the files.

USE master;
GO
BACKUP DATABASE mydb
 TO DISK = 'mydb.bak';
GO
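
To have files worth copying, a standby copy has to be created from that backup first. The log restore later in this post targets mydb_reporting_base, so that restore would look something like this (paths matching the later statement):

RESTORE DATABASE mydb_reporting_base
 FROM DISK = 'mydb.bak'
 WITH  MOVE N'mydb' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base.mdf'
    ,  MOVE N'mydb_log' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base_log.ldf'
    , STANDBY = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\undo.log';
GO
ALTER DATABASE mydb_reporting_base SET OFFLINE;
GO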

Now we can copy the files to new files. There are UAC issues here, so you’ll need to add some rights to the files if you do this regularly. I left a comment in my script, but I actually did a CTRL+C, CTRL+V in the data folder, allowing the UAC permissions to work.

[Screenshot: copying the data files in the DATA folder]

Once that was done, I had new files:

[Screenshot: the new copies of the mdf and ldf files]

I tried to run an attach.
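The statement was something like this (a sketch; the file names match the copies made above):

CREATE DATABASE mydb_reporting
 ON (FILENAME = N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting.mdf'),
    (FILENAME = N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_log.ldf')
 FOR ATTACH;

That attempt failed with a permissions error: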

Msg 5120, Level 16, State 101, Line 30
Unable to open the physical file “D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting.mdf”. Operating system error 5: “5(Access is denied.)”.

The solution I found was to run SSMS as an administrator. Annoying, but it works.

At least for the OS error. However, then you get this:

Msg 1824, Level 16, State 1, Line 35
Cannot attach a database that was being restored.

So you can’t attach the copied files directly. At least not easily.

What you can do, however, is create a shell database and swap the files in. First, delete the files you copied over, then run this:

CREATE DATABASE mydb_reporting;
GO
ALTER DATABASE mydb_reporting SET OFFLINE;
GO

The next step is to delete the MDF and LDF files for this new database, which will be mydb_reporting.mdf and mydb_reporting_log.ldf by default. I could specify other names, and would if this were something I needed to script.

Once those files were deleted, I’d next copy my files again and rename them. That would result in this:

  • mydb_reporting_base.mdf –> mydb_reporting.mdf
  • mydb_reporting_base_log.ldf –> mydb_reporting_log.ldf

Now I can go back to SSMS and run a simple ALTER:

ALTER DATABASE mydb_reporting SET ONLINE;

Then I can run this:

[Screenshot: querying the new mydb_reporting database]
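
The check was presumably something simple, along these lines:

USE mydb_reporting;
GO
SELECT id FROM mytable;  -- returns 1, the row originally inserted into mydb
GO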

I have a copy of my database. Can I apply logs and move forward? Let’s try. First, let’s add data and make a log backup.

USE mydb
GO
INSERT mytable SELECT 99
GO

BACKUP LOG mydb TO DISK = 'mydblog.trn'   
GO

Next we restore again. We also set the databases offline again.

RESTORE LOG mydb_reporting_base
 FROM DISK = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydblog.trn'
 WITH  MOVE N'mydb' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base.mdf'
    ,  MOVE N'mydb_log' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base_log.ldf'
    , STANDBY = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\undo.log';
GO
USE master
GO
ALTER DATABASE mydb_reporting_base
 SET OFFLINE;
GO
ALTER DATABASE mydb_reporting SET OFFLINE
GO

Once again it’s file copy time. The UAC comes into play again as I copy and rename the _base files. However, once that’s done, things work.
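
The final step, implied here, is to bring both databases back online (a sketch):

ALTER DATABASE mydb_reporting SET ONLINE;
GO
ALTER DATABASE mydb_reporting_base SET ONLINE;
GO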

[Screenshot: the refreshed mydb_reporting database showing the new row]

This works, but I am not recommending this as something you should do, especially for critical systems. This can work, but it’s dangerous, and really not supported by MS.

The Log Chain

I ran across a question recently from someone who had received a message similar to the one shown in this image:

[Screenshot: the restore error reporting an LSN mismatch in the log chain]

In this case, the log chain is broken. I’ve received this message, usually from a script that is automatically restoring all the logs in a folder. I’ve also had clients call me with this message, saying their log shipping system is broken.

What has happened is that someone is restoring the logs out of order. Each log backup forms part of a sequence that we call the log chain. The restore process requires that each log backup be restored in the same sequence in which it was created.

In the error message above, SQL Server is letting you know that the Log Sequence Number (LSN) contained in the file from which a restore was attempted doesn’t match the last LSN in the restored database.

Fixing This

The fix is easy; restore the correct files in the correct order. In this case, I knew I’d taken 3 log backups, and I was restoring in the incorrect order. I’d restored log backup 1, and was now trying to restore log backup 3.

If I restored log backup 2 (using WITH NORECOVERY), the restore would work. I could then restore log backup 3 and eventually bring the database online.
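
As a sketch, with made-up file names and log backup 1 already restored, the correct sequence is:

RESTORE LOG mydb FROM DISK = 'mydb_log_2.trn' WITH NORECOVERY;
GO
RESTORE LOG mydb FROM DISK = 'mydb_log_3.trn' WITH NORECOVERY;
GO
RESTORE DATABASE mydb WITH RECOVERY;
GO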

If you can’t find the correct log backup, your database is still usable. Just run RESTORE DATABASE WITH RECOVERY, and you can access your data. However, the data will only be correct to the point in time of the last successful restore.

That could be a serious problem if you are missing a log backup early in your restore sequence. It could be a career limiting problem.

Always keep every log backup taken since the earliest full backup you keep handy for restores, and protect those files. Losing one could be a major problem for your organization.

Attaching All Databases with PowerShell–Refactoring out Write-Host

Someone on Twitter pointed out to me that Write-Host is not recommended for your scripts. I checked the link, an MSDN blog post titled Write-Host Considered Harmful, and it made some sense. Basically, it says that since Write-Host always goes to the console, any output sent through Write-Host can’t be consumed in a pipeline by other PoSh cmdlets or processes.

At first I thought, what does that have to do with my script? I’m really just noting status information. However, the more I thought about it, the more I realized that it’s a minor change, and who knows? Maybe I’ll chain this in some other process, or more importantly, maybe someone else will.

Today I popped open the script in the PowerShell ISE and did this:

[Screenshot: changing Write-Host to Write-Output in the PowerShell ISE]

That’s an easy fix. Just write the output to the pipeline, and if there’s nothing consuming output, I get it on the screen.
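
In other words, a change like this (the message text is just an example, not the actual script):

# Before: output goes straight to the console and can't be piped
Write-Host "Attaching database $dbName"

# After: output goes to the pipeline, and still appears on screen
# when nothing downstream consumes it
Write-Output "Attaching database $dbName"

For a plain status message, only the cmdlet name has to change.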

I also refactored a bit more. I added a “Debug x:” prefix to each Write-Output command, with x replaced by the debug level I’d checked for. This way I know which debug level produced the output returned to the calling screen.

I also found a few lines that simply produced output, such as the “Attaching as…” message. I replaced those with Write-Output as well.

Attaching All Databases with PowerShell – The Overview

TL;DR: The script is here: GitHub PowerShell Scripts. It’s attachdbs.ps1, and it will attach all the databases in a folder to a SQL Server instance, if they don’t already exist.

I wrote a PowerShell script recently to accomplish a task that I needed done. What’s more, this was the first time I thought that PowerShell might prove more useful than other methods. This series looks at my script, and this part examines the first piece that I wrote.

After my problems with Windows 8.1 and my reinstallation of SQL Server, I had a problem. I had no databases.

I had the files. I had backup files. However the instance didn’t have any databases registered. I started down this path.

[Screenshot: manually attaching a database]

However, that seemed inefficient. I had a pattern of things that I knew needed to be done and a bunch of repeatable work; this sounded like it should be a PowerShell task. I could have done it in T-SQL, or grabbed a script from SQLServerCentral, but it made more sense to load the databases with PowerShell.

The Start

Of course I started Googling, but I didn’t see any posts that showed someone with a pile of mdf/ldf files needing to attach them to an instance without knowing what was there. What I had was an instance with no backup/restore/detach history.

[Screenshot: the instance with no user databases]

I also had a bunch of mdf/ldf files in a folder, as well as some folders of FileStream/FileTable data.

[Screenshot: the folder of mdf and ldf files]

What did I do? I’ve got the script on GitHub, and you can grab the latest version at PowerShell Scripts (choose the attachdbs.ps1 file).

This post gives an overview of what I needed to do, and I’ll post more details about how I built the script in pieces. The overview of the process is below, with a rough sketch of the flow after the list:

  • Get all the MDF files in a folder.
  • Connect to a SQL Server instance and loop through all the databases.
  • If a file name (less the .mdf) does not exist as a database, track it.
  • Get the log file associated with each mdf.
  • Attach the mdf and ldf files to the SQL Server.
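
A condensed sketch of that flow using SMO (the server name, path, and log-file naming are my assumptions, not the actual script):

# Load SMO and connect to the instance
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo') | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'
$dataPath = 'D:\SQLServer\Data'

# Attach any mdf that doesn't already exist as a database
Get-ChildItem -Path $dataPath -Filter '*.mdf' | ForEach-Object {
    $dbName = $_.BaseName
    if ($server.Databases[$dbName] -eq $null) {
        $files = New-Object System.Collections.Specialized.StringCollection
        $files.Add($_.FullName) | Out-Null

        # Assume the matching log file is named <database>_log.ldf
        $ldf = Join-Path $dataPath ($dbName + '_log.ldf')
        if (Test-Path $ldf) { $files.Add($ldf) | Out-Null }

        $server.AttachDatabase($dbName, $files)
        Write-Output "Attached $dbName"
    }
}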

That’s what I needed to do and development went in those stages. Certainly there were issues, but I got it working as of this post. When I ran my script, I saw these results:

[Screenshot: the script’s output as each database was attached]

In SSMS, I had my databases.

[Screenshot: the attached databases in SSMS]

I even had my Filestream stuff in place. SQL Server handled that for me.

[Screenshot: the FileStream objects in place]

I’ll include other posts that talk about the details of how I built this, which took about 3 hours one day and an hour the next.

The Development Backup

Have you ever had a development server crash? Have you lost work because of this? Had delays, or had to recreate code? You shouldn’t, or at least you shouldn’t lose much work or time.

There was a time when I offered to manage backups on all development servers. This was in a large environment with hundreds of instances.  I wasn’t worried. I had scripts to do the work of setting up, running, and reporting on backups for instances. I knew how to deploy these scripts to hundreds of servers.

My reasoning was that our development servers were really our manufacturing environment for software. If you had a factory, wouldn’t you ensure your machinery was well maintained and kept in top condition? I know I would.

The developers passed, and once in a while they’d call and ask if we could recover a server.

“Do you have backups?” I’d ask. “No” was the usual reply. I’d apologize and reiterate my offer to manage the systems. They were always resistant, and that was fine. They were responsible, and these were their systems. However, they had a backup system already. They just didn’t use it.

Almost all of these people were using a version control system (VCS) for their code, but not for database code. Do me a favor: put your database object code in source control. Add all your DDL for tables, views, functions, stored procedures, and anything else you use.

As long as it’s on a different physical machine than the development server, you’ll thank me one day.

Just as long as you also run backups of that VCS database.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at www.everydayjones.com.

 

Backup Responsibility

One of the most important things that you can do as a DBA, or really as any sort of system administrator, is back up your system. Ensuring that you have backups, and of course, that you can restore them, is the number one priority for sysadmins. Everything else that you need to do is second to backups. After all, backups ensure you still have a system after a disaster. If you can’t do that, then security, performance, features, none of that matters.

I have worked in large and small environments, and in all cases, I’ve assumed that as the DBA, I need to be checking that backups are occurring and that I can restore them in case of any issues. Often this has meant working more closely with others who have the actual responsibility for performing backups. This week, I’m wondering how many of you work in similar situations.

Who is responsible for backups in your company?

Is it the sysadmin of each particular application? Does the DBA ensure database backups while the Exchange administrator handles mail backups? Do you have a centralized system for backups? If backups fail, who’s going to get yelled at? Or perhaps more importantly, who will notice that backups have failed?

There are any number of ways to handle backups, but honestly, the best arrangement I’ve seen had a centralized person responsible for running backups every day and checking on automated tasks, with the individual system owners (DBAs, Exchange admins, application managers) verifying that backups had been made. These individuals also tested restores periodically. In this way, there was always someone to double-check the person responsible.

Let us know this week how things work in your environment.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at www.everydayjones.com.

How Often Do I Backup my Log?

Do you know how often to back up your transaction log in SQL Server? Most of us have stock answers, hopefully answers that have time intervals, like every hour. I worry that most people have a time interval of “never” because their answer is “I make full backups and don’t need to back up logs” or “What’s a transaction log?” That’s a different discussion, and if those are your answers, I have an article for you to read.

Managing a transaction log is tricky, and the right answer isn’t obvious. I think far too many people manage their logs based on the space they’re trying to maintain on disk. However, that’s not how you decide when to back up logs. The decision has nothing to do with space. It’s best stated in this quote from Gail Shaw: “the log backup interval should be less than the allowable data loss for the system.”

Simply put, decide how much data you can afford to lose, or how much loss will exceed your tolerance for being yelled at. That’s your log backup interval; make a log backup after that much time passes. Schedule that interval into your maintenance operations and observe how big the log backups are. That will determine your log size, not the other way around.
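
For example, if the business can tolerate losing at most 15 minutes of data, schedule a log backup every 15 minutes, something like this (names and paths are placeholders):

BACKUP LOG SalesDB
 TO DISK = N'D:\Backups\SalesDB_20160216_1215.trn';
GO
-- In practice, generate a unique, timestamped file name for each backup.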

As with most guidance and advice, this is based on a time and place in the platform’s lifecycle. This is the advice I’ve used from SQL Server 7 to 2012, but it’s subject to change, so make sure it’s still valid if you’re reading this in 2020.

And, of course, make sure that you also understand how to restore these backups in case you do have an issue. That’s probably the most important thing to know.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and Mevio.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at www.everydayjones.com.