Can Auditing Fail?

I saw someone’s list of requirements for a SQL Server auditing system recently that included many of the standard items most of us would want. DDL changes should be captured, along with configuration alterations. The performance impact should be low, and all versions of SQL Server must be supported. However, there was one requirement that surprised me. This person noted that the auditing must be asynchronous and the application should not be dependent on the auditing. If auditing failed, the application shouldn’t have any of its transactions limited or rolled back.

I’m sure there are use cases where this is important, where the auditing might not be critical but the audit data is available for informational purposes to troubleshoot issues related to the database. However, in many places where auditors review information, or where the data is part of a legal record, the auditing cannot fail. If it does, then the application needs to stop working. If an audit is truly an audit of activity, then all activity must be audited.

The C2 audit mode in SQL Server has been deprecated, probably for multiple reasons, but it did seem that a setting where a failure in auditing could stop the database wasn’t one many people were willing to implement. To me, this suggests that auditing isn’t seen as important as having the system continue to process data. If that’s the case, then is auditing really that important?
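
For what it’s worth, C2’s replacement, the SQL Server Audit feature introduced in SQL Server 2008, does let you choose that strict behavior. Here’s a minimal sketch; the audit name, file path, and choice of action group are placeholders for illustration:

USE master;
GO
-- ON_FAILURE = SHUTDOWN stops the instance if the audit can't write
CREATE SERVER AUDIT StrictAudit
    TO FILE ( FILEPATH = 'D:\Audits\' )
    WITH ( ON_FAILURE = SHUTDOWN );
GO
-- Capture failed logins as one example of an audited action group
CREATE SERVER AUDIT SPECIFICATION StrictAuditSpec
    FOR SERVER AUDIT StrictAudit
    ADD ( FAILED_LOGIN_GROUP );
GO
ALTER SERVER AUDIT StrictAudit WITH ( STATE = ON );
GO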

I know auditing data can be overwhelming. I know that the management of audit data, including archival, is complex. I also know that most of the data isn’t very useful and will never be examined. However, when we need audit data, we really, really need accurate audit data. I really wish that Microsoft would integrate auditing better into SQL Server to ensure the data can be easily managed, compressed, and archived in an automated fashion.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.

Complete Definitions

I saw a script from a user recently that looked like this:

    CREATE DATABASE x ON ( NAME = ..., FILENAME = ... )
    GO
    ALTER DATABASE x SET COMPATIBILITY_LEVEL = 120
    GO
    ALTER DATABASE x SET CONTAINMENT = NONE
    GO
    ...
    ALTER DATABASE x SET DISABLE_BROKER
    GO

That’s interesting because most of the code I see looks more like this:

    create database x;
    go

That’s it. At first I thought this was overkill, but then I wondered. If we are actually creating a new database for an application, shouldn’t we set the options the way that our application needs them rather than rely on defaults? I know that often many of us work with defaults on our instances and databases, but should we expect all configuration options to just be at their defaults? I know I certainly have been burned in the past with default changes.

I’ve seen similar scripts for code, with SET ANSI_NULLS and other settings in the script. While I sometimes think that code gets in the way, I know that the script contains the settings that work for the code I’ve written and tested. If I remove those settings, it’s entirely possible that my code might not work. Do you know all the ways in which the various SET options affect your code? I’m sure that many of us don’t think about these settings when we’re writing code, assuming that the dev, test, and production servers will all be configured the same way as the system we developed on. In most cases they are, but not always.
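
As a quick illustration of how much one of these settings can change behavior, here’s a small experiment you can run in any scratch database:

-- With ANSI_NULLS ON (the default), a comparison to NULL is UNKNOWN,
-- so this returns no rows
SET ANSI_NULLS ON;
GO
SELECT 1 AS Result WHERE NULL = NULL;
GO
-- With ANSI_NULLS OFF, NULL = NULL evaluates to true and a row comes back
SET ANSI_NULLS OFF;
GO
SELECT 1 AS Result WHERE NULL = NULL;
GO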

There’s a trend towards explicitly declaring the settings for an environment and then letting the system ensure that it maintains those settings when it’s built, or perhaps every so often. If that’s the case, and someone makes a mistake, say on ANSI_PADDING, do we really want to assume none of our code was compiled with those settings? I’m not sure I do, and I suspect that we should be explicitly putting all our settings at the top of scripts to ensure code behaves as we expect.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

VMware and DBAs

Disclosure: I’m a VMware fan and run it on my laptop. I definitely prefer it to other hypervisors. This workshop and all travel expenses were also paid for entirely by VMware.

I was fortunate and grateful to be invited out to VMware recently for a multi-day workshop on SQL Server on VMware. Michael Corey wrote about the workshop here; it was an interesting way for VMware to reach out to the SQL community, both to talk to us about what they’re doing and to get feedback from us on how SQL Server performs and interacts with the VMware platform. They will do more of these in the future, so if you’re active in the community or a heavy user, read Michael’s post and see if you can get yourself nominated.

The week was full of speakers from both VMware and Tintri, a storage vendor doing interesting things. We had quite a few executives talking about their plans to make SQL Server run better, and we (as a group) provided lots of feedback. Certainly I think we helped to ensure executives understand that a database server is fundamentally different from a file server, mail server, or other workload.

Of course, it wasn’t all work.

I can’t talk about lots of what we discussed, as much is under NDA, but I will say that you should keep an eye on how VMware is going to improve their interaction with SQL Server. Hopefully we’ll see more products, information, and help at some of the SQL Saturdays and other SQL Server specific events.

We also got a night out at AT&T Park in San Francisco, where the various attendees and speakers could interact in a casual atmosphere. It was an exciting game, with two of the best pitchers in baseball throwing that night (Kershaw and Bumgarner).

I’d never been there, so it was a treat for me. I walked around and enjoyed the baseball game, the first one I’ve seen live in a couple years.

Overall it was a very interesting time, with lots of points raised that I hadn’t considered. The more I hear about what the ESX platform does and handles, and about how people have set it up, the more impressed I am. There are truly some powerful instances being run in virtualized environments, and I’m not sure there are many workloads that couldn’t be run successfully on VMware.

I’m sure that there are hardware and budget restrictions for many people, but the issues aren’t VMware or virtualization, they’re the setup. In the labs we worked on, quite a few of the systems pressed a SQL Server instance hard, and the hypervisor and storage system kept up nicely.

Not to disparage Hyper-V, but I haven’t had the experience there. I suspect both hypervisors could be tuned to a high level. Both also have some holes and places where the system might not run as smoothly as you’d like, but at least I think VMware is more aware of what we, as SQL Server professionals, see as problems.

Tintri

I have to admit that I have mostly thought of storage as a utility. It needs to work and respond quickly, but beyond that, I don’t care about it. It’s like a water faucet: I turn it on and it works.

Tintri was the guest storage vendor that provided appliances for us to use. I didn’t think much of them on Tuesday morning, but by the time their founder had talked with us, I was very intrigued. I still don’t really care about storage beyond needing it to work, but I was taken with how this particular product works.

Most storage is presented as a LUN to a host, which may or may not share it among the guests running on a box. Tintri changes that with what they call VM-aware storage, offering VM-level QoS. Essentially, the storage appliance is aware of each guest connected as a VM. It manages storage response and bandwidth on a per-VM basis, with separate FIFO queues for each drive in the VM, so each VM is treated separately and its storage capabilities (min and max IOPS) can be managed separately.

It seems as though VMs get better response from the appliance this way, with less dependence on heavy management by storage admins. They also use a lot of flash (SSDs) alongside spinning disks to ensure fast responses. They write 100% to flash for speed and tune the systems to aim for 99% of reads coming from flash. Because the device is aware of VMs, it can move blocks between flash and disk to ensure that heavily used data is quickly available.

If you have heavy storage needs, check them out. If you’re looking for a SAN, I’d consider them as well. I’m not sure what the pricing is, but I bet it’s competitive with other SAN devices.

We also got a lecture on low-level storage technologies from a VMware exec. The talk looked at the advances taking place in storage, which seem to be finally leaping forward. With the price of SSDs crashing, and the research into 3D flash, it seems that more and more of us might move to flash-based SAN storage quicker than we expect. Fascinating stuff, and I suspect that database systems will start to see better performance from hardware upgrades over the next 6-7 years.

A Break

It was nice to get away and learn something without any pressure to work on much else. This was a short workshop, close to home, and in an area that had me concentrating and paying attention in ways that sometimes don’t happen at SQL Server events. I enjoyed it, and I learned quite a few things.

I’m not sure how much I’ll use this stuff moving forward, but I am thinking that I’d like to play around with an ESX server at home and use that to experiment with SQL Server features and especially setup. I’m looking forward to trying to get a system ready that can build a new SQL Server instance in minutes.

Finding Problem Code

Performance of a SQL Server seems to be one of the main concerns for most developers and DBAs. We are under pressure to constantly squeeze more performance out of our applications and reduce the time it takes for users to interact with systems. Reports can be especially problematic as users don’t want to wait for results, but certainly data entry tasks can’t be impacted either.

Whether or not we can actually make changes to the system, or even have time to bother, might be irrelevant. I’m sure there are plenty of databases that some of us don’t even try to tune because we’re busy elsewhere, likely writing more code for additional features or reports that have been requested. As a result, we may pile up lots of code that isn’t necessarily run often. However, when that code is executed, we’ll receive no shortage of criticism if it doesn’t perform well.

This week I want to know how you might go about finding problematic code when there isn’t a specific complaint. If you do have time to proactively tune your system, what techniques do you use to examine it? I suspect the more advanced readers will have answers, and I hope they share them, as there’s no shortage of readers who may be struggling to improve the impression their database server makes.

What’s the best way to find the SQL statements you need to tune?

It’s a simple question, but imagine that you know there are complaints, yet you don’t know the best way to focus your time. You, as a new or accidental DBA, want to improve the system, but where do you start? What code is the problem? Is it the report your boss complains about, or are there other queries that slow down that report’s execution?
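
For what it’s worth, one common starting point is the plan cache. Here’s a sketch of a first-pass query, with the caveat that it only sees statements whose plans are still cached:

-- Top 10 cached statements by total CPU
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       (qs.total_elapsed_time / qs.execution_count) / 1000 AS avg_elapsed_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;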

Share the way you’d tackle a system when you don’t have a specific query in mind to tune.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

April Blogger Challenge 4–Filtered Index Limitations

The April Blogger Challenge is from Ed Leighton-Dick and aimed at new bloggers, but anyone is welcome. I’m trying to motivate and cheer people on.

Filtered Index Limits

I ran across a limitation with filtered indexes recently and was surprised: apparently, you can’t use complex predicates in your definitions.

In the last post, I wrote about creating a filtered index on gender, looking for non-NULL values in a table. That code looked like this:

CREATE INDEX Users_Gender_Filtered
  ON Users (gender)
  WHERE gender IS NOT NULL;

However, what if I find that I have lots of NULL values, but also lots of spaces, ‘ ’, in my table, because this is an optional field and the data entry code changed at some point? Neither of these values actually helps me in finding the rows with an ‘M’ or an ‘F’.

I tried creating an index like this:

CREATE INDEX Users_Gender_Filtered2
  ON Users (gender)
  WHERE gender IS NOT NULL OR gender = ' ';

That didn’t work, so I tried parentheses.

CREATE INDEX Users_Gender_Filtered2
  ON Users (gender)
  WHERE (gender IS NOT NULL OR gender = ' ');

That didn’t work either. No matter how I moved the parentheses around, SQL Server wouldn’t allow a complex (if two criteria count as “complex”) filter in the index definition.

This is a limitation of a filtered index right now. I’m not sure if it will be changed, though I would like it to be. There are other limitations, so read the docs carefully, and think about what might work well in your environment.
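
That said, in this particular case the intent can be expressed without OR, since the documentation does allow simple comparisons joined with AND in a filtered index predicate. Something like this should create; the index name is just for illustration:

CREATE INDEX Users_Gender_Filtered3
  ON Users (gender)
  WHERE gender IS NOT NULL AND gender <> ' ';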

Quick and Easy Blogging

This post occurred to me while I was writing some code. I mocked up a table in about 2 minutes, and then ran a quick search on the Internet. Reading a few links was about 10 minutes and then testing the code (including dropping the table and recreating it a few times) was less than 5 minutes. All told, I solidified some knowledge and completed this in about 20 minutes. I also have drafts and ideas from this post for 2 other posts that cover this same topic in a similar way.

Look for the other posts in the April challenge.

Is It Time to Upgrade?

There was a post from the SQL Server Data Platform Advisor the other day that noted it’s been almost 10 years since SQL Server 2005 was released and it’s time to upgrade. At least, if you want to remain in support. Microsoft Support notes that support ends on Apr 12, 2016, which is just under a year away. While that might seem like a long time, it can take months to get an upgrade approved, planned, and executed, so if you think you’ll want to upgrade before support ends, you definitely should start planning.

However, do you need to upgrade?

That’s an interesting question. I would assume that if you have SQL Server 2005 in your environment, it’s been running since sometime before the end of 2009. It’s possible you installed 2005 after SQL Server 2008 was released, using your downgrade rights. I expect you might have done that for a year, but I’d hope that if you were installing new instances after 2010, you were using 2008 or 2008 R2.

Given that assumption, I’m guessing you’re relatively pleased with how SQL Server 2005 has been performing. At least I’d think you are; if you’re not, then why haven’t you upgraded? If it’s budget, does support ending really make a difference? If whoever approves purchases has been satisfied with performance, why upgrade now? If your database backs a third-party application, will you have to upgrade that code as well? If so, that’s another reason it might not be worth making a change.

We haven’t really seen patches for SQL Server 2005 in a long time. There have been two hotfixes and one security update since the last Cumulative Update, and the last one was in 2012. While I do think it’s possible that more security bugs might be found, the likelihood is low. Chances are that if you’ve been secure this long, you’ll still be secure for a few more years.

If you are continuing to develop your code, however, there are lots of improvements in later versions of SQL Server that you might take advantage of. T-SQL has continued to grow and progress, with new features. Replication has improved, as has the manageability of the servers. Certainly security and encryption have advanced, so those could be reasons to consider moving to a more current version. The wealth of information available for tuning your system, as well as more efficient code and a better query optimizer, might lead you to consider both a hardware and software upgrade.
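
As one small example of that T-SQL progress, paging a result set in SQL Server 2012 and later takes a single clause instead of the old ROW_NUMBER() wrapper. The table and columns here are hypothetical:

-- Rows 21-30 by most recent order date (OFFSET/FETCH, SQL Server 2012+)
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;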

While there are some compelling reasons to upgrade, with greatly improved performance and many more features that can help development, if you have systems that are working, is it really worth the licensing cost, especially now that SQL Server is licensed by the core and not the socket? Working is a feature, and one that I think is worth considering as part of your decision.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Altering a column to add NOT NULL

I had to do this recently and needed to look up the syntax as I couldn’t remember it. I don’t do this often, but I was working on a particular item and wanted to test this.

I had a table, something like this:

CREATE TABLE SimpleTable
(
  MyID INT NOT NULL PRIMARY KEY,
  MyChar VARCHAR(200),
  [Status] TINYINT
);

I decide that the [Status] column shouldn’t allow NULLs and want to ensure it always has a value. I don’t want to rebuild the table; I want to ALTER it. I use this syntax:

ALTER TABLE dbo.SimpleTable ALTER COLUMN Status TINYINT NOT NULL;

That’s it. I run this and the column no longer accepts NULL values.

Of course, if there’s data in there, I need to set those values to something that isn’t NULL before I can do this, but that’s a separate topic.
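
Something like this works, assuming zero is an acceptable stand-in for “unknown” in this table:

-- Replace the existing NULLs first, then the ALTER succeeds
UPDATE dbo.SimpleTable
   SET [Status] = 0
 WHERE [Status] IS NULL;

ALTER TABLE dbo.SimpleTable ALTER COLUMN [Status] TINYINT NOT NULL;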

By the way, I remembered most of this syntax. What I forgot is that I need to include the datatype to make this change. There are a number of restrictions on this command, so be sure you read carefully before you try to make changes. If you make the change in the SSMS table designer instead, you might end up with a script that creates a new table, which may not be what you want in a production environment.

Reference

ALTER TABLE – https://msdn.microsoft.com/en-us/library/ms190273.aspx

Dig Out the Root Cause

Early in my career in technology, I worked as a system administrator on a Novell network. Periodically we’d have crashes of servers on our network, or application failures, and while we understood that sometimes these were caused by transient errors, we often invoked a root cause analysis process when an issue repeated itself. I’m sure that the environment in which we worked, a nuclear power plant, contributed to the idea that we should always understand why some system failed.

I was reminded of this while reading Grant Fritchey’s “Understand the True Source of Problems” in which he relates a few “best practices” that are likely folklore stemming from ignorance rather than actual good ideas. I’ve seen a few of these rules at various places in the past, often implemented because they appeared to work. However, we need to remember this:

Correlation != causation

Just because you perform some action and observe an effect doesn’t mean that one caused the other. If that were the case, I’d never install new hardware or applications on my computer systems, as I’ve had crashes during various installations. However, I often back out a change, or retry it, and realize that I had a coincidental result rather than a related one. The same thing has occurred in more complex systems, where an action appears to cause an issue but the two items are in reality unrelated, or only loosely related.

We often don’t take the time to determine the root cause of issues, which is disappointing. While it often doesn’t seem a worthwhile use of resources, I bet we’d frequently learn that actions we are taking (or code we’ve written) are actually the cause. If we learned from our mistakes and could avoid repeating them, we’d greatly improve the quality of our technology systems, with far fewer issues over the years.

Unfortunately, “good enough” is often good enough, even if it does result in a bit of downtime.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

The May Blogger Challenge–Continuing On

Last month I participated in Ed Leighton-Dick’s April Challenge. My four posts were tagged the April Challenge, and this month I plan to continue, and hopefully get a few more people to up their blogging.

Building on April

To make the next step in starting to blog regularly, I’d like to see people in May continue with their weekly blogging, but do one more thing: write one additional post.

I’d like this additional post to be on one of the T-SQL Tuesday topics. Pick one from the list of past choices and write a post. Feel free to title it with the T-SQL Tuesday number and publish it. It really doesn’t matter when you write it. The whole idea is just to write.

Getting Started in May

If you didn’t participate, or you struggled to get four posts done, get started in May. Write one post per week and publish it. You can tag it #SQLNewBlogger, or you can just send a note to me or Ed. We’d like to know what you’re doing.

If you struggle to write, do one of these two things:

  • Make a note, send yourself an email, whatever, when you close a ticket or complete a section of code.
  • Use one of my topics

Either of these works. I’d prefer you do the first one and just make a note when you do something. I’m sure you do something at work. Maybe it’s checking logs, maybe it’s solving a problem with an instance, maybe it’s writing a report. Anytime you do something, make a note.

Then write about that note in a weekly session.

If you are still struggling for topics, here are a few that I’d like to see you tackle in May:

  • Where do you use a unique index?
  • Generate a script after adding a column in the SSMS table designer
  • Query for the list of recent full backups in msdb for a single database (a starting sketch follows this list)
  • Show how to join two tables.
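
If you pick the msdb topic, here’s a skeleton to start from; the database name is a placeholder, and type ‘D’ marks full backups:

SELECT TOP (5)
       bs.database_name,
       bs.backup_finish_date,
       bs.backup_size
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'MyDatabase'
  AND bs.type = 'D'              -- D = full database backup
ORDER BY bs.backup_finish_date DESC;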

Pick one and write. Set aside one 15-minute session a week to write, and work on a short post of less than a page.

Good luck, and hope to see you blogging in May.

Microsoft Exercise

Have we got a deal for you! Microsoft is concerned about the health of your company and is doing something about it.

There’s a trend in modern society toward obesity, complacency, and general laziness. Not only is this bad for society in general, but it also affects your health. So we have a new solution for you, designed specifically to combat this degenerative lifestyle in an easy way.

The Upgrade Treadmill!

Designed by the engineers up in Redmond, this highly engineered device ensures that you won’t sit around getting stagnant on those junky-technologies and deep-fried solutions. Instead we’ve built a system of carefully selected products released on a regular cycle that ensures you’ll be running for the rest of your career.

And it’s easy to get started. Just call 1-800-SHARE-PRICE and enroll now in our assurance program. With a short lifetime and quick expiration, our automated reminders to your boss will ensure that you never slack off and slow down your pace of upgrades.

It’s all in fun, but there’s some seriousness here. I originally wrote this a while back, when it seemed Microsoft was pressing for upgrades to SQL Server 2008 R2 after a relatively short development cycle following SQL Server 2008. However, the piece got lost in the shuffle. I found it recently and was reminded of the sentiment when I saw a post urging upgrades away from SQL Server 2005 as support ends.

There are good reasons to upgrade; just be sure the reasons are valid for your instance and environment.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.8MB) podcast or subscribe to the feed at iTunes and LibSyn.
