The Push to Upgrade from SQL Server 2005

It was interesting to see the results of my straw poll last week on SQL Server 2005. Quite a few people still run that version, and some aren’t looking to upgrade anytime soon. However, when support ends, be aware that there is a potential impact on your organization.

If you work for a government agency, or you are bound by regulations such as PCI, HIPAA, or something else, you need to upgrade. I would hope that an audit sometime in the last 3-4 years would have alerted you to the issues of running unsupported software, but given the wide variability among auditors, perhaps not. In any case, consider this a warning: be sure any databases that contain data regulated by law are upgraded by April 12, 2016. Otherwise your organization will likely be out of compliance with the regulations.

If you haven’t upgraded, and want to avoid another upgrade for the next decade, this is a tough time. I’m not sure that SQL Server 2016, the version you’d probably like to pick, will RTM before support ends, which means counting on that release is unwise. Even if you were sure SQL Server 2016 would release on April 1, is that really enough time to test your application?

That’s really the bottom line for most of us. We need to test and be sure that an upgrade will, in fact, work for our particular environment. We need extensive testing, which probably means that those of us planning to upgrade should aim for SQL Server 2014 (or Azure SQL Database). Microsoft would like you to choose one of these, as it benefits their bottom line, and they are pushing customers to upgrade. However, that’s somewhat the nature of software. We need to upgrade at times in order to maintain our systems, and vendors won’t support old software indefinitely.

There are a couple of months left, and I think you’ll need all that time to be sure your software still runs fine. I’d recommend you use new hardware and a completely separate instance (a side-by-side upgrade), and use Distributed Replay to capture a workload on your existing instance and replay it on the upgraded server to check that your system performs well. To do this, you really need to be monitoring both your current server and the upgraded one to measure timings and performance.

Monitoring isn’t simple, but it’s not that hard either. You should always be monitoring your system (we use SQL Monitor at SQLServerCentral), as well as keeping baselines, including wait statistics, that you can use to analyze the performance of your database. This is invaluable in determining whether your new database will outperform the old (hint: it should).

However you approach your upgrade from SQL Server 2005, be aware that you need to be on SP4. I recommend you use the Upgrade Advisor, perform a side-by-side upgrade, and don’t be afraid to use the compatibility level to let the database continue to behave as it did on SQL Server 2005. Above all, be sure you have good backups you’ve tested, just in case things fall apart.

Lastly, SQL Server 2008 and SQL Server 2008 R2 are both out of mainstream support, so don’t forget about future planning for those versions as well. Best of luck with your upgrades, and please let us know if you find specific issues or problems in upgrading. Post a note in the forums to get help, or maybe help someone else complete their own upgrade.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Reasonable Timeframes

Many of us hear about problems with various systems on a regular basis. We report them in Database Weekly far too often, and I’m sure a few of you have been informed of (or found) bugs, nagging issues, even vulnerabilities in your systems.

What’s a reasonable time to fix a system after an audit?

It’s a simple question, but it’s not very simple to answer. After all, most of us don’t have a lot of slack development time built in to fix issues. Unless the issue is a broken application that doesn’t work, the items disclosed in an audit need to be scheduled in among other work. After all, most of the time the audit finds something that no one is aware of, or no one has wanted to fix. This is work that no one really planned on completing.

I ran across an interesting piece about how the Employment Department for the state of Oregon hasn’t fixed a number of issues identified in an audit last year. While some strides have been made, there are still outstanding issues, which are estimated to take a decade to complete. That’s a long time, but in large systems, especially ones where the entire application cannot be rewritten because of resource constraints, it’s not unusual. I’ve worked in a few places where we had large-scale systems that we knew had issues, but we couldn’t easily redesign and implement fixes in any reasonable length of time. Often this was because of downstream dependencies, but certainly culture and management hadn’t made change a priority either.

I sympathize with those people dependent on mainframe systems. The power and reach of those systems, the poor documentation, not to mention the complex training required to change clients’ habits, make replacing them a huge undertaking. I would hope that the government groups using these large-scale systems would work together to jointly proceed on development, with politicians also involved to help standardize the requirements across state lines (or countries’ borders) and simplify the software needed.

However, no one ever makes software simpler, especially when it’s being designed.

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Digital Woes of Public Records

Researcher Chris Vickery discovered that 191 million voter records were available to the public. The information was in a database on the Internet that appeared to be a collection of voter records and information compiled from public sources in various US states. While no Social Security numbers were present, names, dates of birth, addresses, and voting scores were in the database.

That’s scary, though potentially not a problem. A number of states publish voter data as public records. A few might have restrictions on the use of that data, but the fact that the data is available means it could be used maliciously, with overburdened authorities unlikely to prosecute anyone even if they’re caught.

This is one of those areas where our understanding and control of data hasn’t caught up to the digital age. It is one thing when public data is available to those that must physically search for it, or even query for singleton records. However, data can reveal much more information, or even be used in new ways, when large volumes of it are available. Now the ability to access every voter’s name, address, and date of birth could potentially be a problem.

I see so much data that we might have taken for granted in the past, thinking nothing of its visibility, becoming a problem in the future. When someone can gather large amounts of data and store it cheaply, perhaps in something like a data lake, we may find that public data is problematic. When anyone can start to gather and combine lots of data from different sources, that capability becomes quite scary, as a great deal of information about individuals can be determined. We’ve already seen anonymous data sets de-anonymized by merging data from different sources.
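To make that last point concrete, here is a minimal sketch of how merging two data sets re-identifies people. All of the records, names, and field choices below are hypothetical; the only idea taken from the editorial is that a join on shared quasi-identifiers (here, ZIP code plus date of birth) can attach names from a public data set to records in an "anonymous" one.

```python
# An "anonymized" data set: no names, just quasi-identifiers plus sensitive data.
medical = [
    {"zip": "80202", "birthdate": "1975-03-14", "diagnosis": "diabetes"},
    {"zip": "80301", "birthdate": "1982-07-02", "diagnosis": "asthma"},
]

# A public voter roll: names attached to the same quasi-identifiers.
voters = [
    {"name": "Pat Smith", "zip": "80202", "birthdate": "1975-03-14"},
    {"name": "Lee Jones", "zip": "80301", "birthdate": "1982-07-02"},
]

# Index the public data by (zip, birthdate), then join: each unique match
# re-identifies a supposedly anonymous medical record.
index = {(v["zip"], v["birthdate"]): v["name"] for v in voters}
reidentified = [
    {"name": index[(m["zip"], m["birthdate"])], **m}
    for m in medical
    if (m["zip"], m["birthdate"]) in index
]
print(reidentified)
```

With only two records the join is trivial, but the same mechanics scale to millions of rows, which is why publishing "harmless" name, address, and birthdate data in bulk matters.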

I truly hope that we find ways to better protect and ensure privacy in the future, as all the capabilities and power that computing brings to data analysis truly has a dark side.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Culture and Performance

Most people in management seem to believe that culture matters in a company. I know some don’t, and I’ve worked for a few of those people, which is never an enjoyable experience. As technology work shifts more and more toward knowledge work, businesses seem to be realizing that the way their employees feel about the company can have a direct impact on the company’s bottom line.

There’s an article about culture and motivation in the Harvard Business Review that I think does a good job of looking at how well people perform when they have various motivations. The authors talk about the six reasons why people work, each of which can drive motivation in a different way. Some are positive motivators, some are negative, and it’s good to be aware of the differences.

This ties into culture through the way your organization is built. The culture that pervades the company can really determine how employees are motivated. More negative motivators mean less performance, especially creative performance, from employees.

I don’t think that building a great team and getting the most from people is necessarily this simple. Different people respond differently to a culture, and the same person might even respond differently at different times in their employment. However I do think that you can look to adjust the way you fit each employee in, with the work you assign, the support you give, and the demands that you make on them.

The mark of a good manager is that they find ways to treat each employee differently, in a way that suits them best, while maintaining a core set of values and rules for the entire organization.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Who’s Got a Preteen SQL Server?

This year we will have the end of extended support for SQL Server 2005. Microsoft has been reminding people they should upgrade, which may be important if your organization requires support for the running software. Personally I think if you’re still running SQL Server 2005, and it works for you, why upgrade?

It’s been ten years since that version was released, and while it was a huge step forward with the SQL CLR, better mail, and more, it had plenty of issues. Since that time we’ve had 4 major releases with a fifth coming this year. All of those versions have built on the base of SQL Server 2005’s completely rewritten codebase and have improved many features.

Recently I got a note from someone who said they have been upgrading their SQL 2005 systems over the last few years (along with SQL 2008 instances) and have made great progress. However, they still have a few SQL 2005 instances that they are racing to upgrade. This person was wondering how many others are out there facing similar struggles.

So this week I’m wondering: how many of you have those instances growing up and getting close to double-digit ages? Are you working to upgrade SQL 2005 instances? Have you been thinking about those 2008 instances that are already out of mainstream support and will leave extended support in a few years?

Let us know how you think about your aging database software.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

The New Operating System

I often see pieces written about startup companies and the methods they use to develop software. Small companies often adapt quickly to new ideas and processes. However, many of us work in larger, more established corporations and don’t get the flexibility to change our processes. In fact, for many of us, the idea of developing software faster doesn’t even seem to make sense. Most of us in large organizations (private or public) have fundamentally different views of our clients and market.

However I read an interesting piece on adaptive, more flexible, startup-type companies that does seem to indicate that many of our larger organizations might want to rethink how they work. The piece looks at an operating system as a way of building products or services. What’s most fascinating to me is that this piece looks at the way in which large, established companies have failed while smaller, younger companies that operate differently have succeeded.

In many ways, software is eating the world. As playing fields level in many industries because of both cheap computing power and complex software, I suspect more and more organizations will feel pressure to become more adaptive. I would especially think this will happen as new, younger managers mature, with new ways of thinking. Those managers that haven’t spent decades working in a factory mentality will see the power and advantages of trusting people, setting a vision, and then allowing people to build software that does the work.

However, there are plenty of organizations with a wide range of skills and of employee motivation. There are certainly employees who don’t want to innovate and just want to do the job they’ve been doing. There will always be a need for some grunt-style work (though maybe less in technology). We certainly need to adapt any system to ensure that employees who need more supervision get the help they need to transition to new ways of working.

I expect that we’ll also see more public-sector groups adopt these techniques. There is increasing pressure to reduce costs, as well as the need to accomplish more and more, perhaps with smaller groups of people. In many cases this means that software should do more. I would hope that more public organizations adapt their processes in order to meet the demands they face. Those that have already stand out.

It’s going to be an interesting world over the next decade or two, and I really do think we will see more companies adapting the way they work toward something that embraces and allows creativity and innovation rather than following rules by rote.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.8MB) podcast or subscribe to the feed at iTunes and LibSyn.

Queues in Databases

I ran across this post, The Database as a Queue Anti-Pattern, and it struck a chord. I both agree and disagree with the premise, and certainly see this as another case of the “it depends.” However, I am curious what others think.

I had just written about this for T-SQL Tuesday #74, talking about our use of the database for queuing emails as a part of the site. It was a high volume system that worked for us. We were, of course, looking at SQL Server as a hammer, and in 2002/2003, there weren’t many messaging systems available at a reasonable cost. These days it seems that Amazon SQS or Azure Service Bus are low cost ways to move messages around, but are they better than a database?

In our case, cost mattered, and knowledge and skill mattered, though certainly we could have learned how to work with a messaging system. However, we also needed to feed back status and progress information, and we wanted the ability to re-point emails to another process if one failed. For example, if SenderB fails after having retrieved an email, we can update the table and have SenderA pick up the row(s) that haven’t been processed.
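The re-pointing mechanic described above can be sketched with an ordinary table. This is a minimal illustration using an in-memory SQLite database; the email_queue table, its columns, and the SenderA/SenderB names are illustrative, not taken from the actual SQLServerCentral system.

```python
import sqlite3

# A hypothetical table-as-queue: each row is one email, with a column
# recording which sender process has claimed it.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE email_queue (
        id INTEGER PRIMARY KEY,
        body TEXT,
        claimed_by TEXT,        -- which sender process owns this row
        sent INTEGER DEFAULT 0  -- 0 = not yet processed
    )""")
conn.executemany(
    "INSERT INTO email_queue (body, claimed_by) VALUES (?, ?)",
    [("welcome email", "SenderB"), ("digest email", "SenderB")],
)

# SenderB has failed after claiming rows. Re-point its unsent work to
# SenderA with a single UPDATE -- the recovery step a pure message broker
# makes harder, since a dequeued message is gone from the queue.
conn.execute(
    "UPDATE email_queue SET claimed_by = 'SenderA' "
    "WHERE claimed_by = 'SenderB' AND sent = 0"
)

# SenderA now simply picks up the re-pointed, unprocessed rows.
rows = conn.execute(
    "SELECT id, body FROM email_queue "
    "WHERE claimed_by = 'SenderA' AND sent = 0 ORDER BY id"
).fetchall()
print(rows)
```

The design trade-off is visible here: because the queue is just rows in a table, status is queryable and ownership is mutable, at the cost of polling and locking concerns that dedicated messaging systems are built to avoid.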

I do believe that messaging is an extremely powerful architectural technique for distributing work as well as scaling to high workloads. I’d like to see more messaging implemented in applications, even those that aren’t Twitter scale. After all, if I exceed the power of a 4 core machine for my small application, it’s much easier for me to add another 4 core machine that can deal with messages than upgrade to an 8 core machine and just delay the scale problem to another day.

I’m curious what the rest of you think. Is the database a good platform for a queue?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

Maybe Security is Harder than Rocket Science

I was giving a talk last year about software development, and I made an off-hand remark that most developers out there shouldn’t be writing authentication schemes for applications. My reasoning was that most people aren’t good at writing these systems, and there are plenty of good authentication schemes already written that you can incorporate into a system.

However there are some bad ones as well. While I hope you don’t pick any of them, I also know that many of you might just build a poorly architected system because your focus isn’t on authentication. Your focus is on some other aspect of your application. I’m sure you know some of the good advice for building systems, as well as the best ways to handle passwords, but do you follow it? Under deadlines? When you have pressure to focus on more important aspects of your system? Or do you implement anti-patterns because it’s easy?

The European Space Agency (ESA) is full of rocket scientists. Literally, as they send rockets and astronauts into orbit around the Earth. However, they were hacked recently, and the disclosures aren’t pretty. Not only was personal information released, but passwords were stored in plain text. What’s worse, 39% of the passwords were three letters long.

I’m sure many of the people working on ESA systems are smart individuals, and they may be great web developers who build beautiful, useful sites. However, their security practices are extremely poor, and really, there’s no excuse. Not even the pressure of scientists who want simple, easy logins.

It’s 2016. No short passwords, no limitations on complexity such as preventing special characters (one site recently didn’t allow a “,” for me), and no storage in a reversible format. There are lots of best practices, but they require some effort to learn, understand, and implement, as well as modification over time to keep up with changing guidelines.
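On the storage side, the irreversible-format requirement is not hard to meet with standard library tools. Here is a minimal sketch using PBKDF2 from Python's hashlib; the function names and the iteration count are illustrative choices, not a definitive policy, and a vetted authentication library remains the better option, as argued above.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; tune upward over time

def hash_password(password, salt=None):
    """Return (salt, digest); store both. Neither reveals the password."""
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("abc", salt, stored))                           # False
```

Note what this buys you: even if the stored salt and digest leak, the original password cannot be read back out, which is exactly the property the ESA systems lacked.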

Or, as I suggested, just stop implementing this yourself. Use some authentication scheme that’s been shown to work well with few vulnerabilities.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

What is the True Version of Code?

As I’ve been working on more development tasks, I regularly hear from developers that a version control system (VCS) is the “true” version of our code. We should always look to a VCS for the actual code that we need to work with. I believe that, and I think it’s true.

However, I heard a DBA say that the true version of the code is always in production. That’s what’s being executed, especially in the database world. There is some truth here as well, but in my mind this speaks to a broken process. It means we have a more chaotic, less “engineered” way of producing our software. Perhaps more importantly, we don’t have a good source from which development can proceed.

Certainly hot fixes need to occur, and there are times when you can’t wait for a set of changes to be made in development, tested, approved, and then deployed to production. I think you can overcome most of this, but there may always be cases where it happens, no matter how fast a software deployment pipeline we can build.

Instead I’d say that production changes should always be fed back to the VCS. Our code is an ever changing set of files that we can only really keep track of by using a proper VCS. We should be able to track back the view of production at any point in time to a set of versions for our files in a VCS.

Certainly some of you have gotten by for years, perhaps your entire career, without a VCS. However, I’d argue that isn’t a good position for your organization. The habits and skills you have could easily be lost if you leave, and your memory isn’t a point of record. A stable system, a VCS, should record the state of our systems, both in development and production.

And if you’re worried about the cost of a VCS, there are plenty of free (as in beer) ones. I recommend Git, but there are plenty to choose from. In terms of overhead, certainly you can use tools like SQL Source Control or ReadyRoll, but even without those tools, using a VCS really just requires building some habits. If you aren’t willing to change your workflow slightly, there’s probably no way you will ever ensure your environment is under control in the long run. However, if you do change to a VCS, I think you’ll find it’s well worth the effort.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Training at the Top

Many of us see flaws and problems in the way we code securely as we build software, as well as in the way our infrastructure security is configured. There has been no shortage of times in my career when I, or a coworker, wondered why our company didn’t work to implement better security in its systems.

Perhaps it wasn’t us. Perhaps it’s not a lack of desire, but a lack of knowledge. I ran across a piece in Enterprise Security that argues security training should start at the top, with our C-level executives. Far too many of them don’t understand the nature of the threats, because many of these threats didn’t exist 20, or even 10, years ago. Often we have management that has never faced these kinds of vulnerabilities.

I think there’s certainly room for most of us to learn more about security, especially database security and SQL Injection, as these are fundamental issues around some of our most important assets: our data. However, when we want to implement stronger security, or limit access, we need the support of management, who themselves need to understand the issues, not just respond to whoever makes the best case or complains the loudest.

The world has changed, in that our valuable assets can be transferred to our competitors, or to common criminals, without our being aware of the disclosure. Or perhaps worse, our enemies could change some data and we might never know, absent the ability to perform comprehensive audits of our systems, something many of us might not be able to do. We certainly need technical capabilities, but also time and support from management.

I think there is a good case for asking our management to make an effort to understand cybersecurity, and I’d urge you to pass this link along to your management.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.