Architecture for Auditing

In the analog world, when we need an auditing system to check on physical activity, we usually separate the roles of the people who record or track activity from those who perform it. The better the separation between individuals, the better we can ensure our audit records are intact, complete, and trustworthy. We certainly have issues achieving these goals in the real world, but overall things work well.

In our digital systems, however, auditing seems to be bolted on rather than designed in, with no real separation of duties and little protection of the audit records from tampering. I would have hoped that the major software platforms (OSes, RDBMSs, etc.) would have recognized this over the years (along with security) and enabled auditing as a function separate from Administrator/root/sa/etc., but it hasn’t happened. Far too often, we’ve operated under the assumption that administrators can, and should, be trusted.

In SQL Server, almost all of the auditing mechanisms are enabled, managed, and reviewed by administrators. While viewing audit data is certainly helpful, and perhaps necessary, for good administration, there ought to be a way to enable auditing that doesn’t require DBAs to manage it. There ought to be a role for auditors, one that allows an independent individual (or group) to access the record of actions taken on the instance.

Perhaps it’s not as big a concern for the world as I think it is. In most cases, auditing data exists to help solve problems, not to ensure a legal record of activity for sensitive actions. If that’s the case, then SQL Server provides lots of choices and options for the auditing of your system. Extended Events, SQL Audit, and more are valuable tools for the DBA. I’m just not sure their value extends beyond being useful tools.


Native Audits

As our databases contain more and more information, it is becoming increasingly common to implement some sort of auditing to ensure that our data is not being inappropriately accessed or altered. It’s not only the management of our organizations asking for this; regulatory rules are becoming more numerous, and some may think onerous, which increases the burden on the DBA.

There are numerous products available to help here, but SQL Server has included its own native features since SQL Server 2008. However, I encounter many people who are unaware of the SQL Server Audit feature, and indeed, may not be aware of the Extended Events system on which it’s based. This week I wanted to ask if you are using this feature.

Are you using SQL Server’s native Server Audit and Database Audit features?

If you can share any reasons or details on how you are using the audits, or how easy they are to administer, it would be interesting. I think we do need to increase the level of auditing on our systems, at least to ensure that we are not experiencing any inappropriate data access. At a minimum, we should be monitoring for privileged access or account changes.
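As a rough sketch of what that minimum might look like with SQL Server Audit (the audit name and file path here are hypothetical), a server audit plus a server audit specification can capture login changes and server role membership changes:

USE master;
GO
-- Create a server audit that writes events to files on disk
CREATE SERVER AUDIT ComplianceAudit
TO FILE (FILEPATH = N'C:\AuditLogs\', MAXSIZE = 256 MB)
WITH (ON_FAILURE = CONTINUE);
GO
-- Capture login/principal changes and server role membership changes
CREATE SERVER AUDIT SPECIFICATION PrivilegedActivity
FOR SERVER AUDIT ComplianceAudit
    ADD (SERVER_PRINCIPAL_CHANGE_GROUP),
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP)
WITH (STATE = ON);
GO
ALTER SERVER AUDIT ComplianceAudit WITH (STATE = ON);

The captured events can then be read back with sys.fn_get_audit_file.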


Can Auditing Fail?

I saw someone’s list of requirements for a SQL Server auditing system recently that included many of the standard items most of us would want. DDL changes should be captured, along with configuration alterations. The performance impact should be low, and all versions of SQL Server must be supported. However, there was one requirement that surprised me. This person noted that the auditing must be asynchronous and the application should not be dependent on the auditing. If auditing failed, the application shouldn’t have any of its transactions limited or rolled back.

I’m sure there are use cases where this is appropriate: the auditing isn’t critical, and the audit data exists for informational purposes, to troubleshoot issues related to the database. However, in many environments where auditors review information, or where the data is part of a legal record, the auditing cannot fail. If it does, the application needs to stop working. If an audit is truly an audit of activity, then all activity must be audited.
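SQL Server Audit actually exposes this choice directly through its ON_FAILURE option. A sketch, assuming a server audit named ComplianceAudit (a hypothetical name):

-- An audit must be stopped before its options can be changed
ALTER SERVER AUDIT ComplianceAudit WITH (STATE = OFF);

-- CONTINUE ignores write failures, FAIL_OPERATION fails the audited action,
-- and SHUTDOWN stops the instance when audit events cannot be written
ALTER SERVER AUDIT ComplianceAudit WITH (ON_FAILURE = SHUTDOWN);

ALTER SERVER AUDIT ComplianceAudit WITH (STATE = ON);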

The C2 audit mode in SQL Server has been deprecated, probably for multiple reasons, but it did seem that a setting allowing an auditing failure to stop the database wasn’t one many people were willing to implement. To me, this means that auditing isn’t seen as being as important as having the system continue to process data. If that’s the case, then is auditing that important?

I know auditing data can be overwhelming. I know that the management of audit data, including archival, is complex. I also know that most of the data isn’t very useful and will never be examined. However, when we need audit data, we really, really need accurate audit data. I really wish that Microsoft would integrate auditing better into SQL Server, so the data can be easily managed, compressed, and archived in an automated fashion.


QA for Data

Here’s a typical DBA scenario. You’re at your desk, working on something, when you get a trouble ticket, phone call, or tap on your shoulder. There’s a problem in production and an order, or some other important piece of data, is incorrect. Can you fix it?

Many of us are happy to oblige. Often we know what needs to be changed and can write the INSERT/UPDATE/DELETE statement to correct the issue. Even if we have to dig in further, we can usually find the data in a table and determine what should be changed.

However, what do we do then?

If you are asked to make data changes in production, do you QA the script? Do you save the scripts in a VCS or a log? Does anyone else examine the data or code? Do you scan the script yourself, have SQL Server parse it, or bother to look for triggers that might fire? Do you bother to put the script in a transaction with error handling?

I have to say that for most of my career, I didn’t bother with any of that. I’d examine a script, perhaps have another DBA or developer look at it, but most of the time I’d run it in production if I thought it was ready. I did often wrap the code in a transaction, with a query to check the results and an immediate rollback to prevent blocking. If I thought the results looked good, I’d rerun the query without the transaction. Most of the time that worked well. A few triggers and fat fingers caused issues that I had to fix again, sometimes after quite a bit of work to correct my mistakes.
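That pattern, sketched against a hypothetical Orders table (the table, columns, and values are invented for illustration), looks something like this:

BEGIN TRANSACTION;

-- The fix itself
UPDATE dbo.Orders
SET ShipDate = '20130910'
WHERE OrderID = 12345;

-- Sanity checks: the row count should match expectations
SELECT @@ROWCOUNT AS RowsAffected;
SELECT OrderID, ShipDate FROM dbo.Orders WHERE OrderID = 12345;

-- Roll back immediately to avoid holding locks; if the results look good,
-- rerun the statement with a COMMIT instead
ROLLBACK TRANSACTION;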

It’s hard to prevent data change errors, especially if it’s something we do often, or we work in a small environment with limited staff. Ultimately, we might not even care because if we update the data incorrectly, we can run another update to fix the new issue. That’s not necessarily what most managers would want to hear, but it’s reality.

The one thing I have insisted on in my environments is logging any queries run. A simple cut and paste with a timestamp. That way, if things were done incorrectly, at least I know what was done.
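Even a bare-bones log table covers that need. A hypothetical sketch:

CREATE TABLE dbo.DBA_ChangeLog
(
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    RunDate   datetime      NOT NULL DEFAULT GETDATE(),
    RunBy     sysname       NOT NULL DEFAULT SUSER_SNAME(),
    ScriptRun nvarchar(max) NOT NULL
);

-- Paste in whatever was actually executed
INSERT INTO dbo.DBA_ChangeLog (ScriptRun)
VALUES (N'UPDATE dbo.Orders SET ShipDate = ''20130910'' WHERE OrderID = 12345;');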


Quick Schema Auditing

I was working on a demo recently and needed to show that a little monitoring can help you catch schema changes. At first I looked at SQL Audit and DDL Triggers, but then I ran across a short custom metric on the SQL Monitor Metrics site that my company, Red Gate Software, put up to help people share their custom monitoring metrics and alerts.

The metric is called Schema Modified, and it uses a really simple query. This is all it does:

SELECT DATEDIFF(ss, '1970', MAX([modify_date]))
 FROM [sys].[objects];

SQL Monitor runs this query every minute against each database on which the metric is enabled. The result is the number of seconds between 1970 and the latest schema modification in the database.

Now that’s not terribly useful on its own, but if you watch for changes in this metric, it becomes interesting. In one of my tests, repeated runs of the query returned the same value as long as nothing changed in the database. Once I added a new object, the value jumped to a larger number of seconds. My alerting was looking for changes in the value, so when the new count of seconds appeared, an alert was raised.

Using the Information

What good does it do you to know that something changed? Admittedly, this may or may not be useful. It doesn’t tell you what changed, and it certainly doesn’t help you determine who changed things.

However, in more than a few of my development jobs, we knew people would change things. That wasn’t the issue. Really we wanted to know that something changed, and if so, we would investigate further. Often we could easily determine who made the change, based on what it was.

This is really more of a trigger for an environment like production, where I don’t expect changes except when I deploy things. Any other change is cause for concern, and I might have alerts set to ping people when there’s a change. If we’re making the change, we ignore the alert, because we’re aware of it.

If we aren’t deploying changes, then we start investigating immediately.

Disconnecting Auditing

We know security is becoming more and more of a concern for IT professionals. As we rely more heavily on our computer systems, we have to be sure that the information contained in them is secure. We know that we can’t necessarily anticipate and protect applications from every attack, but we can usually detect and respond to incidents. To do that, we need good auditing of all the events that occur.

The problem, in my mind, is that our auditing efforts and implementations are too tightly tied to the administration of our systems. The auditing features must be configured by administrators, who are also often tasked with the review of the auditing data and logs. This is a fundamental problem as it’s entirely possible that an administrator or privileged user might be just the person that will violate security practices. With their rights inside of the computer system, it’s likely that the same person perpetrating the malicious activity would be able to easily cover up or remove any evidence of the incident.

I think that auditing is fundamentally implemented poorly. Auditing features in software, including SQL Server, should be separated from administration, perhaps even configured and enabled by a different user or account from the person who administers the system. I would anticipate that a person in the finance or accounting department at most companies might be responsible for managing the audit data. Even if they were unsure of the meaning of the data, having control over the information would prevent the auditing data from being compromised. I could even see auditing services being offered by third parties that interpret or review the data for companies without a dedicated security department.
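SQL Server does offer a building block or two here today, though they only go so far. As a sketch (the login name is hypothetical), a non-sysadmin login can be granted just the permission to manage audits:

CREATE LOGIN AuditOfficer WITH PASSWORD = N'UseAStrongPasswordHere1!';

-- Allows this login to create, alter, and drop audits without being a sysadmin
GRANT ALTER ANY SERVER AUDIT TO AuditOfficer;

Reading the audit files back through sys.fn_get_audit_file still requires the CONTROL SERVER permission, though, which rather proves the point: the reviewer ends up nearly as privileged as the administrators being reviewed.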

I doubt we’ll see a disconnect anytime soon, but I do think that the value of auditing is drastically reduced when we don’t have a strong separation of rights, responsibility, and capabilities between auditing and administration.


Auditing Matters

There’s been some reporting recently about Yahoo Mail getting hacked. I ran across this piece that talks about what happened, and it’s an interesting read, but there’s one quote that stuck with me: “Yahoo reported on its Tumblr that it had detected “a coordinated effort”—basically, an attack—by somebody trying to gain access to user accounts.”

It’s not the notification or the proactive resetting of accounts, but the detection that resonates with me. We’ll never stop all hack attacks. We’ll never plug all the holes in software. We’ll never anticipate all the ways in which our systems might be compromised, but we can detect issues. I think more often than not we can deal with any hacks or attacks if we are aware they took place.

We have some login auditing in SQL Server, an auditing and eventing framework, and the ability to capture and store this data. However, we don’t have good proactive tools to help us detect issues. I’d like to see enhanced tooling that lets us review log data, write alerts that look for patterns, and more. While much of this can be built by DBAs, it requires us to develop and maintain software, and even then it’s easy to miss potential attack vectors if you don’t constantly supplement your knowledge and enhance your tools.

Given what we have to work with, I’d encourage you to learn a bit about the different frameworks and gain some basic skills with the tools. I’d also encourage you to think about writing queries to look for potential hacking issues, like updating all of your lookup values to the same string, or embedding script tags in your data. I’d encourage you to write or speak about what you learn, and how you use the information. If you’d like to write software to make the task easier, that would be great.
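A couple of sketches of what those checks might look like, against hypothetical tables and columns:

-- Look for script tags embedded in free-text data
SELECT CustomerID, Notes
FROM dbo.Customers
WHERE Notes LIKE '%<script%';

-- A lookup table where every row holds the same value is a red flag
SELECT COUNT(DISTINCT StatusName) AS DistinctValues, COUNT(*) AS TotalRows
FROM dbo.StatusLookup;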

I’d also encourage you to befriend your network or security administrators and teach them how to query their own log data. Lots of their tools collect data but provide poor query capabilities for the information. Perhaps you can even build them a data warehouse that allows them to tighten security by examining their own data.


The Future of Auditing

I was reading Captain’s Share the other day and enjoying a quiet afternoon at home. It’s a science fiction book about one man’s journey in the future as a captain of a space freighter. It’s an interesting series from Nathan Lowell that I’ve enjoyed and recommended to other science fiction fans. In the book, there’s a scene where the main character is leaving his old ship as first mate and moving to a new ship as the captain. He notes that the formal process is to deactivate his records on the old ship and ensure they remain read-only forever. The book notes they can’t be deleted because they are part of the ship’s records, log entries, etc.

That seems a far cry from the way auditing takes place in current computer systems. Auditing of systems is under the control of the sysadmins (who are sometimes hackers), and the records can be altered or removed. We, as software designers, haven’t done a good job of ensuring the integrity and longevity of log records. In some sense, it seems a fundamental flaw in OS and software design not to have separated the auditing and recording of actions from the administration and rights of the rest of the system.

I’d hope that we would recognize that auditing actions and preserving this data is something that ought to be tightly linked to, but separate from, the rest of system operation. I’d like to think that fundamental changes and actions taken on the system should be written separately to an area that can be marked as readable by the non-sysadmins designated to review the information. I know we have the challenges of managing the space, and the problem of spurious actions being generated to fill (or roll over) logs, but I’d think after 50+ years of computing we would have developed some sort of event log that isn’t under the control of the people whose actions it records.

SQL Server has improved its auditing features and capabilities, but far too much is still inextricably linked to the sysadmin, often the same person the auditing should be watching. This is certainly one area I hope matures in future versions as the need grows to track and review actions taken by privileged accounts.


T-SQL Tuesday #46–The Rube Goldberg Machine

It’s T-SQL Tuesday time again. This is the monthly blog party where we all write on a topic. This month the topic is the Rube Goldberg Machine, as it might relate to SQL Server. You can read the invitation linked above from Rick Krueger and write your own post if you choose.

Just be sure you post it on Tuesday, September 10, 2013.

Catching the Cheaters

A long, long time ago, in a county just to the west of here, I worked for a small company as a DBA. I had to support our production operations, as well as a team of 10-12 developers who were building our main company application. Since we were an ecommerce-type company, this was an important application.

We were working in a highly incremental way, probably similar to an Agile methodology, and deploying changes every week. However, when I started, the deployments weren’t smooth. Our developers were handling them, often bumbling them a bit, but they were able to fix things within an hour since they had done the development.

That wasn’t a good separation of duties, and it was becoming more of an issue as our boss wanted downtime measured in minutes, preferably single digits. I tracked down the main cause: a haphazard development style in which changes were made to databases, instances, and IIS without much tracking. The IIS changes were easy to address: the developers had their rights to make configuration changes removed from production and even test machines. However, the database required a bit more work.

As this was SQL Server 2000, we had limited ability to audit and manage rights. We also required our developers to have rights to the development machines in order to create test databases and work on importing data. We also wanted to ensure that developers moved quickly, but I needed a way to keep control of the instance.

The first contraption I used was a simple one. I loaded the output of sp_configure into a table on the instance. I had created an administrative database that I used to track backups, and I added this information in there. I then created a job that would run sp_configure every day, compare the values to my table, and then alert me to changes. It would also update the table with the new, current values.
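The comparison logic was simple. Here’s a sketch of the idea, assuming a baseline table named dbo.ConfigBaseline that was loaded from sp_configure’s output (the table name is hypothetical):

-- Staging table shaped like sp_configure's result set
CREATE TABLE #CurrentConfig
(
    name         nvarchar(70),
    minimum      int,
    maximum      int,
    config_value int,
    run_value    int
);

INSERT INTO #CurrentConfig
EXEC sp_configure;

-- Report any settings that differ from the saved baseline
SELECT c.name, b.run_value AS old_value, c.run_value AS new_value
FROM #CurrentConfig c
JOIN dbo.ConfigBaseline b ON b.name = c.name
WHERE b.run_value <> c.run_value;

-- Refresh the baseline with the current values
UPDATE b
SET b.config_value = c.config_value,
    b.run_value = c.run_value
FROM dbo.ConfigBaseline b
JOIN #CurrentConfig c ON c.name = b.name;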

This allowed me to catch various changes that developers were making when they tinkered with the server to “make it run faster”. I didn’t prevent changes, and if their changes worked, we’d deploy them to test (and eventually production), but this allowed us to document them and be aware.

This worked well enough that I built a few more mousetraps. Catching schema changes is hard, as there isn’t good auditing in SQL Server 2000. However, there is a “version” number for each object that is incremented when it changes. I built a similar audit/job system for our sysobjects table, but ran it every hour to catch changes. When I was alerted, which was more days than not, I’d email the team, track down the person changing things, and make a note for our current branch of work.
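A sketch of that hourly check, assuming a snapshot table named dbo.ObjectVersions (hypothetical) and the schema_ver column that sysobjects exposed in SQL Server 2000:

-- Objects whose version has moved since the last snapshot
SELECT o.name, b.schema_ver AS old_version, o.schema_ver AS new_version
FROM sysobjects o
JOIN dbo.ObjectVersions b ON b.id = o.id
WHERE o.schema_ver <> b.schema_ver;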

This worked great in that development wasn’t slowed, and I was able to account for all development changes and slot them into the current, or a future, development deployment. In a few months we reduced our deployment time from about an hour every Wednesday night to less than five minutes.

Sometimes the mousetraps actually help the mice work better.

T-SQL Tuesday #45–Follow the Yellow Brick Road

It’s T-SQL Tuesday time again, and this month Mickey Stuewe hosts the party. It’s an interesting topic, and one that I think is important, and will become more important, as we capture, store, analyze, and depend on more and more data.

The topic this month is auditing, and you can read the invitation on Mickey’s blog. I think it’s amazing no one has asked us to write on this in the past, and I expect we’ll see some interesting stories that people will share.

This is a monthly blog party, and all you have to do to participate is write a post on the second Tuesday of the month. The topic comes out about a week before, and you can schedule something to go out on the appropriate day. Be sure you leave a comment or a pingback on the host blog.


I’ve never been bound by regulatory requirements for strict auditing. When I worked in industries that had them, the rules hadn’t really been updated for computer systems, and lots of our practices slipped by because no one thought to look. A few times that was scary.

However, I’ve implemented auditing in various systems, in various ways. The most common approach in the past was triggers, though I had one situation where I had to build a unique solution to capture the changes being made to systems.
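As a reminder of what the trigger approach looks like, here’s a minimal sketch against a hypothetical Orders table (the table, columns, and names are invented for illustration):

CREATE TABLE dbo.Orders_Audit
(
    AuditID   int IDENTITY(1,1) PRIMARY KEY,
    OrderID   int         NOT NULL,
    OldStatus varchar(20) NULL,
    NewStatus varchar(20) NULL,
    ChangedBy sysname     NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime    NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record the before and after values for rows whose status changed
    INSERT INTO dbo.Orders_Audit (OrderID, OldStatus, NewStatus)
    SELECT d.OrderID, d.Status, i.Status
    FROM deleted d
    JOIN inserted i ON i.OrderID = d.OrderID
    WHERE ISNULL(d.Status, '') <> ISNULL(i.Status, '');
END;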

I worked for a small company one time and we had a few critical systems our clients regularly accessed remotely for various functions. This was in the era of client server systems, and we delivered software to our clients that connected to our servers. We had lots of buggy software, and when we released new features, I’d cringe as I knew we would have calls and complaints for a few weeks as we patched things.

I ran a team of 3 DBAs, and we had 5 or 6 developers working for another manager, including a few remote developers. We had agreed to limit changes to known times and document the packages being deployed, so that we were aware of changes and could better troubleshoot issues. Our environment was complex enough, with multiple processes communicating with each other and with SQL Server, without adding the issue of unknown changes. We communicated this to our development staff, and though they grumbled, everyone agreed.

One day I came in to find that a process that had worked the day before had failed. We struggled to get it working, and eventually discovered that a schema had been changed. This was in response to another enhancement, but it caused our import process to break. We hadn’t scheduled a change, so it must have occurred overnight. No one admitted to making the change, and after a long day, I had to drop it.

But not completely.

I went back and set up a server-side trace to run and capture information on object changes. I let this run constantly, rolling over the files periodically. It was a bit of a load on our system, but I felt it would be short term and it was important enough to understand what was happening.
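A server-side trace along those lines can be set up with the sp_trace procedures. This is a sketch, not my original script; the file path is hypothetical, and it captures Object:Created (event 46) and Object:Altered (event 164) with the login name (column 11), start time (14), and object name (34):

DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
DECLARE @on bit;
SET @maxfilesize = 50;
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER, so files roll over at @maxfilesize MB
EXEC sp_trace_create @TraceID OUTPUT,
     @options = 2,
     @tracefile = N'C:\Traces\schema_changes',
     @maxfilesize = @maxfilesize;

EXEC sp_trace_setevent @TraceID, 46, 11, @on;
EXEC sp_trace_setevent @TraceID, 46, 14, @on;
EXEC sp_trace_setevent @TraceID, 46, 34, @on;
EXEC sp_trace_setevent @TraceID, 164, 11, @on;
EXEC sp_trace_setevent @TraceID, 164, 14, @on;
EXEC sp_trace_setevent @TraceID, 164, 34, @on;

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace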

A few days went by and we were having an issue with another part of the system. Clients were screaming at my boss, who was coming to the development area to complain to DBAs and developers alike. As we tried to reproduce the problem on test systems, suddenly it started working.

Relieved, but concerned, I decided to check my trace. Sure enough, I found a change to an object logged by none other than that crafty “sa”. Digging in further, we discovered that a piece of middleware had a timestamp from a few minutes before the system started working.

It can be hard to track down who is using an “sa” account, which is why individuals shouldn’t use it directly. Cross-referencing some information from the network team, we discovered our remote developers were slinging code changes onto the system whenever they felt like it, using sa and “Administrator” on Windows.

They patched systems at times, but often these were patches to fix issues in poorly written software they’d deployed in the first place. They were also patching patches when things didn’t work. Whether this reduced the duration of problems was difficult to determine, but it certainly meant a lot of wasted time when we tried to troubleshoot issues with no idea what had changed. We also found that many of their changes weren’t being added to version control.

My boss agreed this was an issue, and it allowed me to change all system administrator passwords, as well as revoke all rights to production systems from developers. Eventually we achieved more stability in our systems than the company had ever found. I’d attribute this to preventing cowboy coding changes to systems, which seem to cause as many issues as they fix.

Auditing is something I’ve depended on, especially for myself. As I’ve worked with more and more systems concurrently, I find that it’s easy to forget about changes made in a hurry, or forget which change occurred on which systems. Logging and auditing allows me to retrace my own steps, as well as those of others.