Reasonable Timeframes

Many of us hear about problems with various systems on a regular basis. We report them in Database Weekly far too often, and I’m sure a few of you have been informed of (or have found) bugs, nagging issues, and even vulnerabilities in your own systems.

What’s a reasonable time to fix a system after an audit?

It’s a simple question, but it’s not very simple to answer. After all, most of us don’t have a lot of slack development time built in to fix issues. Unless the issue is an application that simply doesn’t work, the items disclosed in an audit need to be scheduled in among other work. Most of the time the audit finds something that no one was aware of, or that no one has wanted to fix. This is work that no one really planned on completing.

I ran across an interesting piece about how the Employment Department for the state of Oregon hasn’t fixed a number of issues identified in an audit last year. While some strides have been made, there are still outstanding issues, which are estimated to take a decade to complete. That’s a long time, but in large systems, especially ones where the entire application cannot be rewritten because of limited resources, it’s not unusual. I’ve worked in a few places where we had large-scale systems that we knew had issues, but we couldn’t easily redesign and implement fixes in any reasonable length of time. Often this was because of downstream dependencies, but certainly culture and management hadn’t made change a priority either.

I sympathize with those people dependent on mainframe systems. The power and reach of those systems, the poor documentation, not to mention the training required to change clients’ habits, make any change a huge undertaking. I would hope that the government groups using these large-scale systems would work together on development, with politicians also involved to help standardize the requirements across state lines (or countries’ borders) and simplify the software needed.

However, no one ever makes software simpler, especially when it’s being designed.

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Queues in Databases

I ran across this post, The Database as a Queue Anti-Pattern, and it struck a chord. I both agree and disagree with the premise, and certainly see this as another case of the “it depends.” However, I am curious what others think.

I had just written about this for T-SQL Tuesday #74, talking about our use of the database for queuing emails as part of the site. It was a high-volume system that worked for us. We were, of course, treating SQL Server as a hammer, and in 2002/2003, there weren’t many messaging systems available at a reasonable cost. These days it seems that Amazon SQS or Azure Service Bus are low-cost ways to move messages around, but are they better than a database?

In our case, cost mattered, and knowledge and skill mattered, though certainly we could have learned how to work with a messaging system. However, we also needed to feed status and progress information back to the application. We also wanted the ability to re-point emails to another process if one failed. For example, if SenderB fails after having retrieved an email, we can update the table and have SenderA pick up the row(s) that haven’t been processed.
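As a rough sketch, imagine a simple queue table (the EmailQueue table and its columns here are made up for illustration, not our actual schema). Claiming unprocessed rows and re-pointing SenderB’s orphaned work to SenderA might look something like this in T-SQL:

-- Hypothetical table: dbo.EmailQueue (Id, Recipient, Body, SenderName, SentDate)
-- Claim a batch of unsent rows for SenderA, skipping rows another sender
-- currently has locked, so two senders never grab the same email.
UPDATE TOP (10) q
SET    SenderName = 'SenderA'
OUTPUT inserted.Id, inserted.Recipient, inserted.Body
FROM   dbo.EmailQueue AS q WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE  q.SentDate IS NULL
  AND (q.SenderName IS NULL OR q.SenderName = 'SenderB');  -- re-point SenderB's orphans

The UPDLOCK and READPAST hints keep multiple senders from stepping on each other, and because the state lives in a table, checking status or progress is just another query.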

I do believe that messaging is an extremely powerful architectural technique for distributing work as well as scaling to high workloads. I’d like to see more messaging implemented in applications, even those that aren’t Twitter-scale. After all, if I exceed the power of a 4-core machine for my small application, it’s much easier to add another 4-core machine that can deal with messages than to upgrade to an 8-core machine and just delay the scale problem to another day.

I’m curious what the rest of you think. Is the database a good platform for a queue?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

Maybe Security is Harder than Rocket Science

I was giving a talk last year about software development, and I made an off-hand remark that most developers out there shouldn’t be writing authentication schemes for applications. My reasoning was that most people aren’t good at writing these systems, and there are plenty of good authentication schemes already written that you can incorporate into a system.

However, there are some bad ones as well. While I hope you don’t pick any of them, I also know that many of you might just build a poorly architected system because your focus isn’t on authentication. Your focus is on some other aspect of your application. I’m sure you know some of the good advice for building systems, as well as the best ways to handle passwords, but do you follow it? Under deadlines? When you have pressure to focus on more important aspects of your system? Or do you implement anti-patterns because they’re easy?

The European Space Agency (ESA) is full of rocket scientists. Literally, as they send rockets and astronauts into orbit around the Earth. However, they were hacked recently, and the disclosures aren’t pretty. Not only was personal information released, but passwords were stored in plain text. What’s worse, 39% of the passwords were three letters.

Three.

I’m sure many of the people working on ESA systems are smart individuals, and they may be great web developers who build beautiful, useful sites. However, their security programming is extremely poor, and really, there’s no excuse. Not even the pressure of scientists who want simple, easy logins.

It’s 2016. No short passwords, no arbitrary complexity restrictions such as blocking special characters (one site recently didn’t allow a “,” for me), and no storage of passwords in a reversible format. There are lots of best practices, but they require some effort to learn, understand, and implement, as well as revisiting over time to keep up with changing guidelines.
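Even a minimal effort beats plain text. As a sketch only (in a real system I’d lean on a vetted library that does slow, iterated hashing such as bcrypt or PBKDF2 in the application tier), salted, one-way password storage in T-SQL might look like this:

-- Illustrative only: store a random salt and a one-way hash, never the password
DECLARE @Password nvarchar(128) = N'correct horse battery staple';
DECLARE @Salt varbinary(16) = CRYPT_GEN_RANDOM(16);   -- random, per-user salt

DECLARE @Hash varbinary(64) =
        HASHBYTES('SHA2_512', @Salt + CAST(@Password AS varbinary(256)));

SELECT @Salt AS Salt, @Hash AS PasswordHash;
-- To check a login, recompute the hash with the stored salt and compare the results

Nothing stored there is reversible, though a short or common password is still weak, which is why the rules above matter just as much as the storage.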

Or, as I suggested, just stop implementing this yourself. Use some authentication scheme that’s been shown to work well with few vulnerabilities.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

What is the True Version of Code?

As I’ve been working on more development tasks, I regularly hear from developers that a version control system (VCS) is the “true” version of our code. We should always look to a VCS for the actual code that we need to work with. I believe that’s the right view.

However, I heard a DBA say that the true version of the code is always in production. That’s what’s being executed, especially in the database world. There is some truth here as well, but in my mind this speaks to a broken process. It means we have a more chaotic, less “engineered” way of producing our software. Perhaps more importantly, we don’t have a good source from which development can proceed.

Certainly hot fixes need to occur, and there are times when you can’t wait for a set of changes to be made in development, tested, approved, and then deployed to production. I think you can overcome most of this, but there may always be cases where it happens, so I won’t argue here about how fast a software deployment pipeline we can build.

Instead, I’d say that production changes should always be fed back to the VCS. Our code is an ever-changing set of files that we can only really keep track of with a proper VCS. We should be able to trace the state of production at any point in time back to a set of versions for our files in a VCS.

Certainly some of you have gotten by for years, perhaps your entire career, without a VCS. However, I’d argue that isn’t a good position for your organization. Whatever habits and skills you have could easily be lost if you leave, and your memory isn’t a system of record. A stable system, a VCS, should record the state of our code, both in development and production.

And if you’re worried about the cost of a VCS, there are plenty of free (as in beer) ones. I recommend Git, but there are many to choose from. In terms of overhead, certainly you can use tools like SQL Source Control or ReadyRoll, but even without those tools, using a VCS really just requires building some habits. If you aren’t willing to change your workflow slightly, there’s probably no way you will ever get your environment under control in the long run. However, if you do adopt a VCS, I think you’ll find it’s well worth the effort.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

Push Button Deployments

I worked on an Agile team in 2000, and we deployed changes to our web platform every Wednesday for over 18 months. That was about the length of time I was involved before the small startup failed, though not for lack of technology. In fact, I managed a few weeks’ extra pay by helping sell and deploy our IP to another company.

Deploying regularly and often, to meet business needs and get features into the hands of customers, is important. For most businesses, this is something the CTO and CEO want. It can work, and there’s a case study with Nmbrs, a Redgate customer that has used DLM, which includes this quote:

“The productivity of the team has improved dramatically. We were doing one database update a week, which took almost a full day. Now it takes ten minutes. We can spend the extra time we’ve gained developing new features.”

That’s been my experience with tooling and automation. If you get things set up, as I did with my startup, then you gain confidence in deployments, are willing to make them more often, and get these small chunks of software to customers on a regular basis. It’s not just about customers, either: we found we could make some fundamental changes in databases and applications that smoothed later development without affecting customers at all.

I know I’m paid by Redgate to talk about DLM, but the concept, the idea, the processes really do work well. They’ll help you get features to customers quicker (and feedback to developers faster), as well as give you confidence that you can release a fix if you find an issue with your application.

The Ongoing Costs of Building Software

I’ve almost never heard of a developer backing down from solving a problem quickly. In fact, I think most developers assume they can solve many problems quickly and whip up useful software. I know I’ve felt this way numerous times in the past. Need a little monitoring? I can knock that out this week.

However, there are ongoing costs to maintaining software. While many of us can build scripts and small programs that solve problems, we also learn that those little pieces of software need care and attention over time. That’s well illustrated in the piece titled If You Build It, They Will Complain.

I had a similar path to that of the author. I built a utility, then another, then another for various departments and people inside my company. I knocked out useful software that others used. And then I got caught up in enhancing, repairing, and otherwise attending to said software. It was eye-opening to realize that software is often never done, unlike the projects I’d completed in school.

I saw a very experienced SQL Server professional note a few years ago that they would never write monitoring software again. It’s not that this person couldn’t, or that they wouldn’t do a good job, it’s that they didn’t want to maintain the software, and there are plenty of good packages that do most of the job needed. This person could fill in missing functionality in small areas, while spending their time on more valuable tasks.

That’s how I think most of us should look at software. We should look to reuse and take advantage of software that’s already been written to accomplish a task, weighing its cost against our time. It’s far too easy to discount an employee’s time, when by tackling something ourselves we could easily spend more than the cost of a piece of software already written by someone else.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2MB) podcast or subscribe to the feed at iTunes and LibSyn.

The Advent of Code

In early December, I ran across a post from Jeremiah Peschka on practicing your coding skills. Apart from the points made about practice, he referenced a site called the Advent of Code. It’s a site that uses a Christmas theme and a little gamification to get people to solve programming problems. I mentioned it on my blog, and over a few weeks, a few people in the SQL Server community started working on the problems in SQL.

Probably like most people who started this, I didn’t finish in December. In fact, I didn’t work on the problems most days because life got in the way. However, I haven’t given up. Around vacations and other events, I’ve continued to work on solving the problems in multiple ways, using different languages. I’ve tackled problems in Python, PowerShell, and T-SQL.
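As an example of the flavor of these puzzles, one of the early challenges boils down to walking a string of '(' and ')' characters, going up a floor for each '(' and down a floor for each ')', and reporting where you end up. A set-based T-SQL sketch (the input string here is just made up) might look like this:

DECLARE @Input varchar(8000) = '(()(()(';

-- Map each character to +1 or -1 and sum the lot, using a numbers list
-- generated from a system view to step through the string position by position
SELECT SUM(CASE SUBSTRING(@Input, n.Number, 1)
             WHEN '(' THEN 1
             WHEN ')' THEN -1
             ELSE 0
           END) AS FinalFloor
FROM (SELECT TOP (LEN(@Input))
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
      FROM   sys.all_objects) AS n;

Working out a set-based approach like that, rather than looping character by character, is exactly the kind of practice these puzzles give you.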

I’d urge you to give some sort of programming challenge a try, mostly just to gain some practice in building algorithms and investigating the ways in which you can solve problems with software. If nothing else, the exercises stretch your brain and keep your problem-solving skills sharp.

I’m considering working on something like this for SQLServerCentral next year, with a series of complex problems that you can work through to practice your T-SQL and SQL Server skills. I don’t think I’ll make it an Advent of T-SQL for Christmas, but I do think having a set of challenging problems is a good way to drive your learning.

If you’ve got suggestions for other programming exercises, feel free to pass them along.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.0MB) podcast or subscribe to the feed at iTunes and LibSyn.

Rulebreaking Developers

The VW emissions scandal fascinates me. Maybe because it’s a case of software being used to fundamentally alter a product to deceive us. Maybe because I’m a car guy and have a soft spot for VW and the old Scirocco I used to drive in college. Maybe because blame is being passed between management and software developers in a way that could affect our industry.

In any case, the latest news I saw was of a press conference where VW’s upper management seemed to indicate that the software developers on the project decided to alter the software. I’m sure some managers were involved, but I still find it hard to believe that a small team engineered these changes independently. Even if they did, I think that ultimately upper management throughout the company has to take responsibility. After all, they took credit for the sales.

There are a couple of interesting points made by the company. First, they are upgrading IT systems to track work better. I’m hoping they’re moving to more of a DevOps practice, with transparency and openness, as well as lean processes that can be changed quickly. I suspect they might instead be adding more overhead and auditing, but time will tell.

The other interesting thing is that the company noted that even when technical solutions were available, they weren’t used. To me, this directly points to a problem so many of us face as technologists. When we have better solutions, or can correct deficiencies, management doesn’t want to spend the time or effort to do so.

That is one of the fundamental things that makes software so powerful. We can make changes, we can improve things, we can implement better security at a fraction of the cost of making similar changes in the analog world. However, it seems that even small efforts aren’t worth making if there is a little more profit to be squeezed out. At least, that’s what managers seem to think.

That’s a part of business culture that I’d like to see evolve. Let’s look to continually improve things, making them better, more secure, and less bug-ridden, even if it slightly slows down new enhancements or costs a little money.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.

Is Your Deployment Process Perfect?

I’m sure your deployments aren’t perfect. I’ve had many successful ones, but I have always found some flaws in my systems over time. I’ve also found that the less flexibility I allow, the more likely things are to succeed.

However, less flexibility isn’t what most companies or clients want. In fact, they want the opposite. I know most developers want the same thing: no constraints on what they can do.

I think Redgate has been working to make things easier and more reliable for most companies. Certainly we have work to do and there are places to improve our products, but our DLM work across various products is really designed to reduce, or eliminate, the silly mistakes that cause problems. We’ve tried to help companies set up a process that limits regressions.

I saw this short video from Redgate, where the lead Data Analyst at Yorkshire Water talks about some of the improvements in their development process brought about by implementing a DLM process. Over a couple of years, they’ve gotten many of their projects into a Continuous Delivery stack, and now that’s just the way they do things.

The time savings are the big win. There’s less copying of production data around, more confidence in development, and as Shaun notes, time is money.

If you think you want to build a more repeatable, reliable database development process, I think we give you a few options: the traditional work-in-SSMS approach with SQL Source Control, or ReadyRoll in Visual Studio. I’m particularly excited about ReadyRoll, as that’s how I’ve done a lot of development in the past.

And, of course, I think our DLM Automation suite, which lets you deploy changes to a CI environment and build deployment packages, works well.

Correct Old Mistakes

I ran across this piece on the recent VTech hack. It’s almost a classic example of what not to do in data storage. You can read the piece, and also look at Troy Hunt’s analysis, but we can clearly see poor encryption, unencrypted communications, plain-text storage of passwords, and more. What’s especially disconcerting is that kids’ information was disclosed, plenty of which could be problematic years down the road as these kids grow up.

Apart from all of the technology problems, there are certainly questions of responsibility. I expect that VTech will deny knowledge of the flaws, and certainly limit the amount of time they admit to having known about them. After all, they’re a corporation, and if they can deny liability, it could limit the number of actions taken against them. However, I’m hoping that the developers and operations people who manage this technology realize they made mistakes while building these systems.

There’s a certain immaturity evident in the analysis of this system. I’m guessing that developers were under pressure to get websites up and running in concert with product launches, and that plenty of code was shared among their various sites and web domains. However, I would hope that the current developers at VTech have learned more about building robust applications, and would be looking to rewrite and rebuild their systems to be more secure, using more current technologies. I hope that any of you running Flash-based systems, or using MD5, or following any other well-known poor security practices, would be pressing your management to correct those deficiencies and offering them more secure solutions. You might also give them a copy of the article linked above.

Like most companies out there, I’m guessing VTech’s management doesn’t want to spend money to rebuild systems that work, regardless of security flaws. Likely the developers who have learned how to better code public-facing sites don’t have the time to spend rewriting old code when they have new systems to develop. However, I think that the old code that lives out there, the poorly built code that most of us have written in our past, should get updated over time as part of the cost of doing business. This is especially true for anyone using encryption, where upgrades should be regular and mandated, as Moore’s Law and better mathematics steadily erode the security provided by older algorithms.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.