The Writeable Warm Standby

I saw a question recently that went like this: I get one full backup from FTP. I’ll get daily log backups through FTP after this, but never another full. I need to restore this daily log backup and allow the group receiving the database to read/write the copy, and then reset it with the new log backup overnight.

First, this is untenable. At some point you’ll have an issue with the transfer, lose a log, or the database will become corrupt. I can guarantee you that at some point you will need another full backup. Not every week, or even every month, but you will need one.

Second, this is a tough situation. I saw some answers, which I agreed with, but I started thinking about ways to get that data moved. My first thought was to use STANDBY and move the data every day to a clean database. I’ve done this before, and in the GB range, even 100s of GB, this can work. It helps if you can whack indexes and constraints on the destination, but a copy of data table-by-table goes fast.

However, then I thought about other ways. You can’t take a backup of a standby database, nor can you take a snapshot. However, while searching, I saw an answer to this post on SO.

TL;DR: copy the mdf/ldf to a new database.

That was interesting, so I decided to test it. Turns out, it works pretty well. HOWEVER, it’s dangerous, and I think you should be very careful about this. I wouldn’t count on this being production stable, and certainly not data stable. You better have other copies of this data.

Here’s what I did. First, create a database with some data.

CREATE DATABASE mydb;
GO
USE mydb;
CREATE TABLE mytable(id INT);
INSERT mytable SELECT 1;

Next, let’s back this up, take it offline, and then copy the files.

USE master;
BACKUP DATABASE mydb TO DISK = 'mydb.bak';
ALTER DATABASE mydb SET OFFLINE;

Now we can copy the files to new files. There are UAC issues here, so you’ll need to add some rights to the files if you do this regularly. I left a comment in my script, but I actually did a CTRL+C,CTRL+V in the data folder, allowing the UAC permissions to work.


Once that was done, I had new files in the data folder.

I tried to run an attach, but got a permissions error:

Msg 5120, Level 16, State 101, Line 30
Unable to open the physical file “D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting.mdf”. Operating system error 5: “5(Access is denied.)”.

The solution I found was to run SSMS as an administrator. Annoying, but it works.

At least for the OS error. However, then you get this:

Msg 1824, Level 16, State 1, Line 35 Cannot attach a database that was being restored.

So you can’t attach a database that was being restored, at least not easily.

However, there is a workaround. First, delete the files you copied over, then run this:

CREATE DATABASE mydb_reporting;
ALTER DATABASE mydb_reporting SET OFFLINE;

The next step is to delete the MDF and LDF files, which will be mydb_reporting.mdf and mydb_reporting_log.ldf by default. I could specify other names, and I would if this were something I needed to script.

Once those files were deleted, I’d next copy my files again and rename them. That would result in this:

  • mydb_reporting_base.mdf –> mydb_reporting.mdf
  • mydb_reporting_base_log.ldf –> mydb_reporting_log.ldf

Now I can go back to SSMS. In SSMS, I run a simple ALTER to bring the database back online:

ALTER DATABASE mydb_reporting SET ONLINE;

Then I can query the new database.

I have a copy of my database. Can I apply logs and move forward? Let’s try. First, let’s add data and make a log backup:

USE mydb;
INSERT mytable SELECT 99;

BACKUP LOG mydb TO DISK = 'mydblog.trn';

Next, we restore the log again, and we also set the database offline again.

RESTORE LOG mydb_reporting_base
 FROM DISK = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\mydblog.trn'
 WITH  MOVE N'mydb' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base.mdf'
    ,  MOVE N'mydb_log' TO N'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\mydb_reporting_base_log.ldf'
    , STANDBY = 'D:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\Backup\undo.log';
USE master;
ALTER DATABASE mydb_reporting_base SET OFFLINE;

Once again it’s file copy time. The UAC comes into play again as I copy and rename the _base files. However, once that’s done, things work.


This works, but I am not recommending this as something you should do, especially for critical systems. This can work, but it’s dangerous, and really not supported by MS.

The Log Chain

I ran across a question recently from someone who had received an error saying the log in the backup set was too recent to apply to the database.

In this case, the log chain is broken. I’ve received this message, usually from a script that is automatically restoring all the logs in a folder. I’ve also had clients call me with this message, saying their log shipping system is broken.

What has happened is that you (or someone) restored the logs out of order. Each log backup forms part of a sequence that we call the log chain. The restore process requires that each log backup be restored in the same sequence in which it was created.

In the error message above, SQL Server is letting you know that the Log Sequence Number (LSN) contained in the file from which a restore was attempted doesn’t match the last LSN in the restored database.

Fixing This

The fix is easy: restore the correct files in the correct order. In this case, I knew I’d taken three log backups, and I was restoring them in the incorrect order. I’d restored log backup 1 and was now trying to restore log backup 3.

If I restored log backup 2 (using WITH NORECOVERY), the restore would work. I could then restore log backup 3 and eventually bring the database online.

If you can’t find the correct log backup, then your database is still usable. Just run RESTORE DATABASE WITH RECOVERY, and you can access your data. However the data will only be correct to the point in time of the last successful restore.
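
As a sketch of the fix (the database and file names here are illustrative, not from the original question), the restores look like this:

```sql
-- Restore the logs in the order they were taken, leaving the
-- database in the restoring state between each one.
RESTORE LOG mydb FROM DISK = 'mydb_log2.trn' WITH NORECOVERY;
RESTORE LOG mydb FROM DISK = 'mydb_log3.trn' WITH NORECOVERY;

-- Bring the database online once the last log is applied (or
-- right away, if you can't find the missing log backup).
RESTORE DATABASE mydb WITH RECOVERY;
```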

That could be a serious problem if you are missing a log backup early in your restore sequence. It could be a career limiting problem.

Always keep every log backup taken since the earliest full backup you retain for restores, and protect those files. Losing one could be a major problem for your organization.

Attaching All Databases with PowerShell – Refactoring out Write-Host

Someone posted a note to me on Twitter noting that Write-Host is not recommended for your scripts. I checked the link to an MSDN blog post, Write-Host Considered Harmful, and it made some sense. Basically it says that since Write-Host always goes to the console, any output sent through Write-Host can’t be consumed in a pipeline by other PowerShell cmdlets or processes.

At first I thought, what does that have to do with my script? I’m really just noting status information. However, the more I thought about it, the more I realized that it’s a minor change, and who knows? Maybe I’ll chain this in some other process, or more importantly, maybe someone else will.

Today I popped open the script in the PowerShell ISE and swapped the Write-Host calls for Write-Output.

That’s an easy fix. Just write the output to the pipeline, and if there’s nothing consuming output, I get it on the screen.
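
As a minimal sketch of the change (the message text here is made up, not from my script):

```powershell
# Before: the message is welded to the console and can't be piped.
Write-Host "Attaching database: $dbName"

# After: the message goes to the pipeline; with nothing consuming
# the output, it still lands on the screen.
Write-Output "Attaching database: $dbName"
```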

I also refactored a bit more. I added a “Debug x:” line to each Write-Output command, with x replaced by the appropriate debug level I’d checked for. This way I know what debugging output is being returned to the calling screen.

I also found a few lines that were just writing output, such as the “Attaching as…” messages. I replaced those with Write-Output as well.

Attaching All Databases with PowerShell – The Overview

TL;DR: The script is here: GitHub PowerShell Scripts. It’s attachdbs.ps1, which will attach all databases in a folder to a SQL Server instance if they don’t exist.

I wrote a PowerShell script recently to accomplish a task that I needed done. What’s more, this was the first time I thought that PowerShell might prove more useful than other methods. This series looks at my script, and this part examines the first piece that I wrote.

After my problems with Windows 8.1 and my reinstallation of SQL Server, I had a problem. I had no databases.

I had the files. I had backup files. However the instance didn’t have any databases registered. I started down this path.


However, that seemed inefficient. I had a pattern of things that I knew needed to be done, and a bunch of repeatable work; this sounded like it should be a PowerShell task. I could have done it in T-SQL, or grabbed a script from SQLServerCentral, but it made more sense to load the databases with PowerShell.

The Start

Of course I started Googling, but I didn’t see any posts that showed someone with mdf/ldf files needing to attach them to an instance without knowing what they had. What I had was an instance with no backup/restore/detach history.


I also had a bunch of mdf/ldf files in a folder. As well as some folders for Filestream/Filetable information.


What did I do? I’ve got the script on GitHub, and you can grab the latest version at: Powershell Scripts (choose the attachdbs.ps1 file)

This post will give an overview of what I needed to do and I’ll post more details about how I built the script in pieces. The overview of the process is:

  • Get all MDF Files in a folder
  • Connect to a SQL Server instance and loop through all databases
  • If a file name (less the .mdf) does not exist as a database, track this.
  • Get the log file associated with an mdf
  • Attach the mdf and ldf files to the SQL Server.
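
A rough sketch of those steps with SMO might look like this. Note that the instance name, paths, and the assumption that each log follows a <dbname>_log.ldf naming convention are all illustrative; this is not the full attachdbs.ps1:

```powershell
# Load SMO and connect to the instance (names here are assumptions).
Add-Type -AssemblyName "Microsoft.SqlServer.Smo"
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$dataPath = "D:\SQLServer\Data"

# Get all MDF files in the folder.
Get-ChildItem -Path $dataPath -Filter "*.mdf" | ForEach-Object {
    $dbName = $_.BaseName

    # If the name (less the .mdf) isn't already a database, attach it.
    if (-not $server.Databases.Contains($dbName)) {
        $files = New-Object System.Collections.Specialized.StringCollection
        $files.Add($_.FullName) | Out-Null

        # Assume the log file follows the <dbname>_log.ldf convention.
        $ldf = Join-Path $dataPath ($dbName + "_log.ldf")
        if (Test-Path $ldf) { $files.Add($ldf) | Out-Null }

        $server.AttachDatabase($dbName, $files)
        Write-Output "Attached $dbName"
    }
}
```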

That’s what I needed to do, and development went in those stages. Certainly there were issues, but I got it working as of this post.


In SSMS, I had my databases.


I even had my Filestream stuff in place. SQL Server handled that for me.


I’ll include other posts that talk about the details of how I built this, which took about 3 hours one day, and an hour the next.


Here are a few posts where I picked up bits and pieces of what I needed to do.

The Development Backup

Have you ever had a development server crash? Have you lost work because of this? Had delays, or had to recreate code? You shouldn’t, or at least you shouldn’t lose much work or time.

There was a time when I offered to manage backups on all development servers. This was in a large environment with hundreds of instances.  I wasn’t worried. I had scripts to do the work of setting up, running, and reporting on backups for instances. I knew how to deploy these scripts to hundreds of servers.

My reasoning was that our development servers were really our manufacturing environment for software. If you had a factory, wouldn’t you ensure your machinery was well maintained and kept in top condition? I know I would.

The developers passed, and once in a while they’d call and ask if we could recover a server.

“Do you have backups?” I’d ask. “No” was the usual reply. I’d apologize and reiterate my offer to manage the system. They were always resistant, and that was fine. They were responsible, and these were their systems. However, they had a backup system already. They just didn’t use it.

Almost all of these people were using a version control system (VCS) for their code, but not for database code. Do me a favor; put your database object code in source control. Add all your DDL for tables, views, functions, stored procedures, and anything else you use.

As long as it’s on a different physical machine than the development server, you’ll thank me one day.

Just as long as you also run backups of that VCS database.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at


Backup Responsibility

One of the most important things that you can do as a DBA, or really as any sort of system administrator, is back up your system. Ensuring that you have backups, and of course, that you can restore them, is the number one priority for sysadmins. Everything else that you need to do is second to backups. After all, backups ensure you still have a system after a disaster. If you can’t do that, then security, performance, features, none of that matters.

I have worked in large and small environments, and in all cases, I’ve assumed that as the DBA, I need to be checking that backups are occurring, and that I can restore them in case of any issues. Often this has meant I need to work more closely with others that have the actual responsibility for performing backups. This week, I’m wondering how many of you work in similar situations.

Who is responsible for backups in your company?

Is it the sysadmin of each particular application? Does the DBA ensure database backups while the Exchange administrator handles mail backups? Do you have a centralized system for backups? If backups fail, who’s going to get yelled at? Or perhaps more importantly, who will notice that backups have failed?

There are any number of ways to handle backups, and honestly, the best way I’ve seen had a centralized person responsible for running backups every day and checking on automated tasks, but the individual system owners (DBAs, Exchange admins, application managers) checking that backups had been made. These individuals also test restores periodically. In this way there was always someone to double check the person responsible.

Let us know this week how things work in your environment.

Steve Jones


How Often Do I Backup my Log?

Do you know how often to back up your transaction log in SQL Server? Most of us have stock answers, hopefully answers that have time intervals like every hour. I worry that many people have a time interval of “never” because their answer is “I make full backups and don’t need to back up logs” or “What’s a transaction log?” That’s a different discussion, and if those are your answers, I have an article for you to read.

Managing a transaction log is a bit tricky and not straightforward. I think far too many people manage their logs based on the space they are trying to maintain on disk. However, that’s not how you decide when to back up logs. The decision has nothing to do with space. It’s best stated in this quote from Gail Shaw: “the log backup interval should be less than the allowable data loss for the system.”

Simply put, decide how much data you can afford to lose. Or how much loss will exceed your tolerance for being yelled at. That’s your log backup interval. Make a log backup after that much time passes. Schedule that interval into your maintenance operations and observe how big the log backups are. That will determine your log size, not the other way around.
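
For example, if the business decides it can lose at most 15 minutes of data, a job would run something like this every 15 minutes (the database name and path here are illustrative):

```sql
-- One log backup per interval; the interval is driven by the
-- allowable data loss, not by disk space.
BACKUP LOG mydb
 TO DISK = 'D:\Backups\mydb_log.trn';
```

In practice the job would generate a unique, timestamped file name per run so each backup in the chain is preserved.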

As with most guidance and advice, this is based on a time and place in the platform’s lifecycle. This is the advice I’ve used from SQL Server 7 to 2012, but it’s subject to change, so make sure it’s still valid if you’re reading this in 2020.

And, of course, make sure that you also understand how to restore these backups in case you do have an issue. That’s probably the most important thing to know.

Steve Jones


A Full Backup Includes Everything (with a caveat)

Full database backups in SQL Server include all of the data, objects, tables, rows, functions, stored procedures, etc. If something is in the database when the data reading portion of the backup concludes, it’s in there.

Note that “in there” means committed in a transaction.

If someone tells you the backup missed a row, or a procedure, or something else, they’re almost always wrong. 99.9999% of the time they are wrong, and you should stand by that.

The issue is that things must be committed in the backup. If they aren’t committed, they aren’t included. And that means that there’s a small chance that something added to the database while the backup is running isn’t in the backup.

If you remember how backups work, there’s a data reading portion of the backup and a log writing portion of the backup. The log writing portion of the backup takes a finite amount of time. If someone were to finish a transaction during this time, the data or objects would not be in a restored database. This is because the transaction didn’t exist or wasn’t committed when the data reading portion of the backup completed.

If the data reading portion of the backup takes a few minutes, and a change comes in during the last minute or so of the process, someone might think that a change completed before the timestamp on the backup file is included. It wouldn’t be.

It’s a small chance, and it’s not likely to come down to this point, but it could happen. Can you figure out the transactionally consistent time of the backup? Perhaps, but I don’t know how. You’d need to get the time for the last LSN written in the backup and map that to a time. If you know how to do that, let me know.

Make sure that your backups are transactionally consistent. Don’t export, don’t use open file managers, don’t use anything that doesn’t respect transactions. The native SQL Server backup process does this. If you want a few other features, my employer makes SQL Backup Pro, which also respects transactions.

Accidentally Kicking a Database into the Restoring State

I learn new things all the time. This was one that actually stunned me. Huge props to Gail Shaw for posting a note about this in a thread.

Run this code:

CREATE DATABASE MyRestoreTest;
GO
USE MyRestoreTest;
GO
BACKUP DATABASE MyRestoreTest TO DISK = 'myrestoretest.bak';
GO
CREATE TABLE mytable(id INT);
GO
USE master;
GO
BACKUP LOG MyRestoreTest TO DISK = 'myrestoretest_log.trn' WITH NORECOVERY;

You’ll see the database sitting in the restoring state in your Object Explorer.

I haven’t started a restore. I’ve run a backup. Apparently this causes problems, as noted by Gail in the thread. Sure enough, it’s documented in the BACKUP command, in the Log-specific Options.

I had never scheduled backups with this type of option, but you might have a job that does this if you were preparing for a failover. Having a script ready is a good idea, but if it executes unexpectedly, this could happen.
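
If you do get caught by this, my understanding is that you can bring the database back online by recovering it without restoring anything:

```sql
-- Complete the restore sequence without applying any backups,
-- bringing the database back online.
RESTORE DATABASE MyRestoreTest WITH RECOVERY;
```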

The lesson: make sure you know the options when you run a command. Always test, and if something strange happens, search or ask what might have happened.

Backups and Consistency

I wrote about transactional consistency recently. Why do we need this?

Imagine that I have an orders table and an ordersdetail table in my database. The orders table has the order header (date, customer, etc) and the detail table includes information about the items purchased (product, qty, price). Let’s say my orders table has 10 orders and my detail table has 5 items for each order for a total of 50 rows in the detail table.

If I want a backup that is consistent, I need to have all of the orders and details included in the backup. I don’t want partial orders, so all my inserts for orders and details are included in a transaction. That means my backup needs to contain a complete, new order, or no new order at all. Anything else wouldn’t be consistent.
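
In code, a complete order would be entered something like this (a minimal sketch; the column names are made up):

```sql
-- The header and all of its detail rows commit together, so a
-- transactionally consistent backup sees all of them or none.
BEGIN TRANSACTION;

INSERT INTO orders (orderid, orderdate, customer)
 VALUES (11, GETDATE(), 'ACME');

INSERT INTO ordersdetail (orderid, product, qty, price)
 VALUES (11, 'Widget', 5, 9.99);

COMMIT TRANSACTION;
```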

The backup process you choose needs to account for this. It cannot include partial orders from a transaction inside of it. Could that happen?

Sure, and let’s see how. Backups proceed linearly and it takes time for them to complete. Imagine that I have a process that starts backing up my Orders table. It reads orders 1 and 2.

While this is happening, someone inserts a new order, #6, into the system. If my backup system is reading pages, it might continue through orders 3-6 and then start on details. Imagine now the user rolls back the new order. When my backup completes, it will include (and restore) an order #6. That could be a problem.

The other problem could be that the backup process gets to the details table and starts backing up the first 5 details from order#1. While this is happening, a user inserts a new order and order details in a transaction (order #7), and it completes. The backup process moves through the details table and gets all the details, including the new detail records inserted during the backup. However the orders table has already been backed up. The restore will include details for order #7, but not the header (Orders table).

That’s bad.

Don’t use file copies, open file managers, exports, or BACPACs as backups for your data. You might have serious data issues on restore, and they get worse as the workload on the system gets busier.
