The Log Chain

I ran across a question recently from someone who received a message similar to the one shown in this image:

[Screenshot: error message in SQL Server Management Studio (SQLQuery3.sql – ARISTOTLE.master)]

In this case, the log chain is broken. I’ve received this message, usually from a script that is automatically restoring all the logs in a folder. I’ve also had clients call me with this message, saying their log shipping system is broken.

What has happened is that you (or someone) restored the logs out of order. Each log backup forms part of a sequence that we call the log chain. The restore process requires that each log backup be restored in the same sequence in which it was created.

In the error message above, SQL Server is letting you know that the Log Sequence Number (LSN) contained in the file from which a restore was attempted doesn’t match the last LSN in the restored database.
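If you want to see the LSN range a backup file contains before you try to restore it, RESTORE HEADERONLY will show it. A minimal sketch (the file name below is just a placeholder):

-- Inspect the backup header; the FirstLSN and LastLSN columns show the range
-- this file covers, and FirstLSN must follow on from what has already been restored.
RESTORE HEADERONLY FROM DISK = N'C:\Backups\MyDB_Log_2.trn';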

Fixing This

The fix is easy: restore the correct files in the correct order. In this case, I knew I’d taken three log backups, and I was restoring them in the wrong order. I’d restored log backup 1 and was now trying to restore log backup 3.

If I restored log backup 2 (using WITH NORECOVERY), the restore would work. I could then restore log backup 3 and eventually bring the database online.
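As a rough sketch, with made-up database and file names, the sequence looks like this:

-- Restore the log backup that comes next in the chain, leaving the database
-- ready for further log restores
RESTORE LOG MyDB FROM DISK = N'C:\Backups\MyDB_Log_2.trn' WITH NORECOVERY;

-- Now the following log backup will apply
RESTORE LOG MyDB FROM DISK = N'C:\Backups\MyDB_Log_3.trn' WITH NORECOVERY;

-- Once the last log is restored, bring the database online
RESTORE DATABASE MyDB WITH RECOVERY;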

If you can’t find the correct log backup, your database is still usable. Just run RESTORE DATABASE WITH RECOVERY, and you can access your data. However, the data will only be correct to the point in time of the last successful restore.

That could be a serious problem if you are missing a log backup from early in your restore sequence. It could be a career-limiting problem.

Always keep every log backup taken since the earliest full backup you keep on hand for restores, and protect those files. Losing one could be a major problem for your organization.

Attaching All Databases with PowerShell – Refactoring out Write-Host

Someone on Twitter pointed out to me that Write-Host is not recommended for your scripts. I checked the link to an MSDN blog post, Write-Host Considered Harmful, and it made some sense. Basically it says that since Write-Host always goes to the console, any output sent through Write-Host can’t be consumed in a pipeline by other PoSh cmdlets or processes.

At first I thought, what does that have to do with my script? I’m really just noting status information. However, the more I thought about it, the more I realized that it’s a minor change, and who knows? Maybe I’ll chain this in some other process, or more importantly, maybe someone else will.

Today I popped open the script in the PowerShell ISE and did this:


That’s an easy fix. Just write the output to the pipeline, and if there’s nothing consuming output, I get it on the screen.

I also refactored a bit more. I added a “Debug x:” prefix to each Write-Output command, with x replaced by the appropriate debug level I’d checked for. This way I know what debugging output is being returned to the calling screen.

I also found a few lines that simply wrote output, such as the “Attaching as…” message. I replaced those with Write-Output as well.
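A sketch of the kind of change, not the exact lines from my script (the variable name is made up):

# Before: goes straight to the console, so nothing downstream can consume it
Write-Host "Attaching as $dbName"

# After: goes to the pipeline (and still shows on screen if nothing consumes it),
# with a prefix noting the debug level that produced the message
Write-Output "Debug 1: Attaching as $dbName"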

Attaching All Databases with PowerShell – The Overview

TL;DR: The script is here: GitHub PowerShell Scripts. It’s attachdbs.ps1, and it will attach all the databases in a folder to a SQL Server instance if they don’t already exist.

I wrote a PowerShell script recently to accomplish a task that I actually needed done. What’s more, this was the first time I thought that PowerShell might prove more useful than other methods. This series looks at my script, and this post examines the first piece that I wrote.

After my problems with Windows 8.1 and my reinstallation of SQL Server, I had a problem. I had no databases.

I had the files. I had backup files. However the instance didn’t have any databases registered. I started down this path.


However, that seemed inefficient. I had a pattern of things that I knew needed to be done, a bunch of repeatable work, and this sounded like it should be a PowerShell type of task. I could have done it in T-SQL, or grabbed a script from SQLServerCentral, but it made more sense to load the databases with PowerShell.

The Start

Of course I started Googling, but I didn’t see any posts that showed someone with a pile of mdf/ldf files needing to attach them to an instance without knowing what was there. What I had was an instance with no backup/restore/detach history.


I also had a bunch of mdf/ldf files in a folder, as well as some folders for Filestream/Filetable information.


What did I do? I’ve got the script on GitHub, and you can grab the latest version at PowerShell Scripts (choose the attachdbs.ps1 file).

This post gives an overview of what I needed to do, and I’ll post more details about how I built the script in pieces. The overview of the process is below, with a rough sketch of the steps after the list:

  • Get all MDF files in a folder
  • Connect to a SQL Server instance and loop through all databases
  • If a file name (less the .mdf) does not exist as a database, track it
  • Get the log file associated with an mdf
  • Attach the mdf and ldf files to the SQL Server instance
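Here’s a rough sketch of those steps using SMO. This isn’t the actual attachdbs.ps1; the instance name, folder path, and log file naming convention are all assumptions, and the real script has more checks:

# Load SMO and connect to the instance (path and instance name are placeholders)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$dataPath = "D:\SQLData"
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"

# Get all MDF files in the folder
Get-ChildItem -Path $dataPath -Filter "*.mdf" | ForEach-Object {
    $dbName = $_.BaseName   # the file name, less the .mdf

    # If a database with this name doesn't exist on the instance, attach it
    if (-not $server.Databases.Contains($dbName)) {
        $files = New-Object System.Collections.Specialized.StringCollection
        $files.Add($_.FullName) | Out-Null

        # Find the log file associated with this mdf (assumes a _log.ldf suffix)
        $ldf = Join-Path $dataPath ($dbName + "_log.ldf")
        if (Test-Path $ldf) { $files.Add($ldf) | Out-Null }

        Write-Output "Attaching $dbName"
        $server.AttachDatabase($dbName, $files)
    }
}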

That’s what I needed to do and development went in those stages. Certainly there were issues, but I got it working as of this post. When I ran my script, I saw these results:


In SSMS, I had my databases.


I even had my Filestream stuff in place. SQL Server handled that for me.


I’ll include other posts that talk about the details of how I built this, which took about 3 hours one day, and an hour the next.


Here are a few posts where I picked up bits and pieces of what I needed to do.

The Development Backup

Have you ever had a development server crash? Have you lost work because of this? Had delays, or had to recreate code? You shouldn’t, or at least you shouldn’t lose much work or time.

There was a time when I offered to manage backups on all development servers. This was in a large environment with hundreds of instances.  I wasn’t worried. I had scripts to do the work of setting up, running, and reporting on backups for instances. I knew how to deploy these scripts to hundreds of servers.

My reasoning was that our development servers were really our manufacturing environment for software. Wouldn’t you ensure your machinery was well maintained and kept in top condition if you had a factory? I know I would.

The developers passed on the offer, and once in a while they’d call and ask if we could recover a server.

“Do you have backups?” I’d ask. “No” was the usual reply. I’d apologize and reiterate my offer to manage the system. They were always resistant, and that was fine. They were responsible, and these were their systems. However, they had a backup system already. They just didn’t use it.

Almost all of these people were using a version control system (VCS) for their code, but not for database code. Do me a favor; put your database object code in source control. Add all your DDL for tables, views, functions, stored procedures, and anything else you use.

As long as it’s on a different physical machine than the development server, you’ll thank me one day.

Just as long as you also run backups of that VCS database.



Backup Responsibility

One of the most important things that you can do as a DBA, or really as any sort of system administrator, is back up your system. Ensuring that you have backups, and of course, that you can restore them, is the number one priority for sysadmins. Everything else that you need to do is second to backups. After all, backups ensure you still have a system after a disaster. If you can’t do that, then security, performance, features, none of that matters.

I have worked in large and small environments, and in all cases I’ve assumed that, as the DBA, I need to be checking that backups are occurring and that I can restore them in case of any issues. Often this has meant I need to work more closely with others who have the actual responsibility for performing backups. This week, I’m wondering how many of you work in similar situations.

Who is responsible for backups in your company?

Is it the sysadmin of each particular application? Does the DBA ensure database backups while the Exchange administrator handles mail backups? Do you have a centralized system for backups? If backups fail, who’s going to get yelled at? Or perhaps more importantly, who will notice that backups have failed?

There are any number of ways to handle backups. Honestly, the best arrangement I’ve seen had a centralized person responsible for running backups every day and checking on the automated tasks, with the individual system owners (DBAs, Exchange admins, application managers) verifying that backups had been made. Those individuals also tested restores periodically. That way there was always someone to double-check the person responsible.

Let us know this week how things work in your environment.


How Often Do I Backup my Log?

Do you know how often to back up your transaction log in SQL Server? Most of us have stock answers, hopefully answers with time intervals like every hour. I worry that most people have a time interval of “never” because their answer is “I make full backups and don’t need to back up logs” or “What’s a transaction log?” That’s a different discussion, and if those are your answers, I have an article for you to read.

Managing a transaction log is a bit tricky and not entirely straightforward. I think far too many people manage their logs based on the space they are trying to maintain on disk. However, that’s not the way you decide when to back up logs. The decision about how often to back up a log has nothing to do with space. It’s best stated in this quote from Gail Shaw: “the log backup interval should be less than the allowable data loss for the system.”

Simply put, decide how much data you can afford to lose. Or how much loss will exceed your tolerance for being yelled at. That’s your log backup interval. Make a log backup after that much time passes. Schedule that interval into your maintenance operations and observe how big the log backups are. That will determine your log size, not the other way around.
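For example, if the business can tolerate losing 15 minutes of data, a log backup like this running from a SQL Agent job every 15 minutes covers it (the database name and path are placeholders; FORMAT requires SQL Server 2012 or later):

-- Build a unique file name and back up the log on the allowable-data-loss interval
DECLARE @file nvarchar(260) =
    N'D:\Backups\Sales_log_' + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss') + N'.trn';
BACKUP LOG Sales TO DISK = @file;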

As with most guidance and advice, this is based on a time and place in the platform’s lifecycle. This is the advice I’ve used from SQL Server 7 to 2012, but it’s subject to change, so make sure it’s still valid if you’re reading this in 2020.

And, of course, make sure that you also understand how to restore these backups in case you do have an issue. That’s probably the most important thing to know.


A Full Backup Includes Everything (with a caveat)

Full database backups in SQL Server include all of the data, objects, tables, rows, functions, stored procedures, etc. If something is in the database when the data reading portion of the backup concludes, it’s in there.

Note that “in there” means committed in a transaction.

If someone tells you the backup missed a row, or a procedure, or something else, they’re almost always wrong. 99.9999% of the time they are wrong, and you should stand by that.

The issue is that things must be committed to be in the backup. If they aren’t committed, they aren’t included. And that means there’s a small chance that something added to the database while the backup is running isn’t in the backup.

If you remember how backups work, there’s a data reading portion of the backup and a log writing portion of the backup. The log writing portion of the backup takes a physical amount of time. If someone were to finish a transaction during this time, the data or objects would not be in a restored database. This is because the transaction didn’t exist or wasn’t committed when the data reading portion of the backup completed.

If the log writing portion of the backup takes a few minutes, and a change happens in that last minute or so of the process, someone might think that something completed before the timestamp on the backup file would be included. It wouldn’t be.

It’s a small chance, and it’s not likely to come down to this point, but it could happen. Can you figure out the transactionally consistent time of the backup? Perhaps, but I don’t know how. You’d need to get the last LSN written in the backup and map that to a time. If you know how to do that, let me know.
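If you want to poke at it, the LSNs recorded for each backup live in msdb. This doesn’t solve the LSN-to-time mapping, but it’s a starting point:

-- LSNs and timestamps recorded for recent full backups
SELECT database_name, backup_start_date, backup_finish_date,
       first_lsn, last_lsn, checkpoint_lsn
FROM msdb.dbo.backupset
WHERE type = 'D'   -- D = full database backup
ORDER BY backup_finish_date DESC;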

Make sure that your backups are transactionally consistent. Don’t export, don’t use open file managers, don’t use anything that doesn’t respect transactions. The native SQL Server backup process does this. If you want a few other features, my employer makes SQL Backup Pro, which also respects transactions.

Accidentally Kicking a Database into the Restoring State

I learn new things all the time. This was one that actually stunned me. Huge props to Gail Shaw for posting a note about this in a thread.

Run this code:

CREATE DATABASE MyRestoreTest;
GO
USE MyRestoreTest
GO
BACKUP DATABASE MyRestoreTest TO DISK = 'myrestoretest.bak';
GO
CREATE TABLE mytable (id INT);
GO
USE master
GO
BACKUP LOG myrestoretest TO DISK = 'myrestoretest_log.trn' WITH NORECOVERY

You’ll see this in your Object Explorer



I haven’t started a restore. I’ve run a backup. Apparently this causes problems, as noted by Gail in the thread. Sure enough, it’s documented in the BACKUP command, in the Log-specific Options.

I had never scheduled backups with this type of option, but you might have a job that does this if you were preparing for a failover. Having a script ready is a good idea, but if it executes unexpectedly, this could happen.
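If it does happen, the database isn’t lost. Recovering it is the same step you’d use at the end of any restore sequence:

-- Bring the database back online after an unintended BACKUP LOG ... WITH NORECOVERY
RESTORE DATABASE MyRestoreTest WITH RECOVERY;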

The lesson: make sure you know the options when you run a command. Always test, and if something strange happens, search or ask what might have happened.

Backups and Consistency

I wrote about transactional consistency recently. Why do we need this?

Imagine that I have an orders table and an ordersdetail table in my database. The orders table has the order header (date, customer, etc) and the detail table includes information about the items purchased (product, qty, price). Let’s say my orders table has 10 orders and my detail table has 5 items for each order for a total of 50 rows in the detail table.

If I want a backup that is consistent, I need to have all of the orders and details included in the backup. I don’t want partial orders, so all my inserts for orders and details are included in a transaction. That means my backup needs to contain a complete, new order, or no new order at all. Anything else wouldn’t be consistent.

The backup process you choose needs to account for this. It cannot include partial orders from a transaction inside of it. Could that happen?

Sure, and let’s see how. Backups proceed linearly and it takes time for them to complete. Imagine that I have a process that starts backing up my Orders table. It reads orders 1 and 2.

While this is happening, someone inserts a new order, #11, into the system. If my backup system is reading pages, it might continue through orders 3-10, pick up the new order, and then start on the details. Imagine now that the user rolls back the new order. When my backup completes, it will include (and restore) an order #11. That could be a problem.

The other problem could be that the backup process gets to the details table and starts backing up the first 5 details, from order #1. While this is happening, a user inserts a new order and its order details in a transaction (order #12), and it completes. The backup process moves through the details table and gets all the details, including the new detail records inserted during the backup. However, the orders table has already been backed up. The restore will include the details for order #12, but not the header (Orders table).

That’s bad.

Don’t use file-level backups, open file managers, exports, or BACPACs as backups for your data. You might have serious data issues on restore, and they get worse the busier the workload on the system.

Make sure that your backups are transactionally consistent. Don’t export, don’t use open file managers, don’t use anything that doesn’t respect transactions. The native SQL Server backup process does this. If you want a few other features, my employer makes SQL Backup Pro, which also respects transactions.

How SQL Server Full Backups Work

I’m writing this in support of a few talks I give about backups. This is how I see things, and it’s based on what I’ve learned over time. Some of this comes from a great explanation on TechNet from Paul Randal, and there are miscellaneous other sources, though I can’t be sure which ones I’ve drawn on. Long story short: I learned most of this from others and from the docs. It’s not all me.

There are two parts to a full backup in SQL Server:

  • data reading
  • log writing

Technically both sections read and write, but this is how I think of things. I’ll describe them here:

Data reading – The backup process goes through all allocated extents, reads each page, and sends it to a buffer that gets written to the backup file. This happens as fast as SQL Server can do it, just moving through the pages, but it takes most of the backup time.

Log writing – all of the log records that are written while the data reading portion of the backup is running are appended to the backup file, after the data pages.

Let’s say that the entire time of the backup, from the File Created to the File Last Modified timestamps on the backup file, is represented by t. The data reading portion of the backup takes time d. The log writing portion of the backup takes time l.

This gives us:

t = d + l

My database is transactionally consistent at backup start time + d, not + t. When is that? I’m not sure, but usually it’s fairly close to the timestamp at the end of t.

Make sure that your backups are transactionally consistent. Don’t export, don’t use open file managers, don’t use anything that doesn’t respect transactions. The native SQL Server backup process does this. If you want a few other features, my employer makes SQL Backup Pro, which also respects transactions.