A Buggy Release

I definitely believe in a DevOps process, though a thoughtful, incremental one. I think this is the best way to develop software, whether you release every day or every year. Yes, you can implement DevOps and release once a year; you just end up tracking, testing, communicating, and being ready for that once-a-year release. Of course, I bet you don’t actually release just once a year, since I’m sure you’ll patch the system at least once.

One of the core principles of DevOps is to use automation where you can. Remove humans from the process and make moving software from one machine to another repeatable. Communicate, test, and then alter your process to work better. This requires the monitoring and input of humans to examine the process, but they shouldn’t be involved in deployments beyond approving them. It’s too easy for an individual to make a mistake.

However, DevOps isn’t a panacea for building better software. Witness the issues at Knight Capital, which went from having $364mm in assets to losing $460mm in 45 minutes, mostly because of a problem deployment: an engineer didn’t deploy the new code to all the servers in the farm. Certainly a clean deployment to every system might have prevented this, but the reuse of old flags in code is problematic, as is leaving old code around that could be executed.

In addition to moving to a DevOps mindset, I’d also say you should follow good software development practices. Clean out old code (including database code) and be very, very careful about reusing any part of your software, including flags, for a new purpose. It’s far, far too easy to make mistakes here.
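
To make the flag problem concrete, here’s a minimal T-SQL sketch of the pattern, with hypothetical procedure and flag names (not Knight’s actual code): an old flag value is given a new meaning while the dead code it used to trigger is still deployed somewhere.

-- Old release, still live on one unpatched server:
--   IF @Mode = 1 EXEC dbo.LegacyRouting @OrderID;  -- retired logic
-- New release, everywhere else, reusing the same flag value:
CREATE PROCEDURE dbo.RouteOrder
    @OrderID int,
    @Mode    tinyint  -- @Mode = 1 now means "use the new router"
AS
BEGIN
    IF @Mode = 1
        EXEC dbo.NewRouting @OrderID;
    ELSE
        EXEC dbo.StandardRouting @OrderID;
END;

The same call with @Mode = 1 does two very different things depending on which server answers it. A brand new flag value, or better, deleting the legacy code before repurposing anything, removes the ambiguity.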

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.5MB) podcast or subscribe to the feed at iTunes and Mevio.

Where’s the Unit Testing?

I’ve been a proponent of unit testing, especially for databases. I’ve given presentations on the topic and advocate the use of techniques to verify your code works, especially over time as complexity grows and new developers change code, potentially introducing regressions. I’m not the only one, as I saw a question recently from Ben Taylor asking where unit testing has gone.

I was disappointed that few people responded to the piece, and I think this is the same reception that unit testing of front-end application software received a decade or two ago. Few people saw value in testing, preferring to assume developers would code well. Over time, and with some investment, quite a few people have come to see the value of unit testing, though I’m not sure it’s the majority yet. In building database software, we’re still woefully behind, preferring ad hoc tests that are subject to human frailty: forgetfulness, mistakes in running tests, or not examining results closely.
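
For contrast, here’s a minimal sketch of a repeatable database test in tSQLt, the open-source framework that Redgate’s SQL Test builds on. The table and function are hypothetical stand-ins:

-- Hypothetical schema under test
CREATE TABLE dbo.OrderLines (OrderID int NOT NULL, Amount money NOT NULL);
GO
CREATE FUNCTION dbo.GetOrderTotal (@OrderID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(Amount) FROM dbo.OrderLines WHERE OrderID = @OrderID);
END;
GO
EXEC tSQLt.NewTestClass 'OrderTests';
GO
CREATE PROCEDURE OrderTests.[test GetOrderTotal sums line amounts]
AS
BEGIN
    -- FakeTable isolates the test from real data, so it runs the same anywhere
    EXEC tSQLt.FakeTable 'dbo.OrderLines';
    INSERT INTO dbo.OrderLines (OrderID, Amount) VALUES (1, 10.00), (1, 2.50);

    DECLARE @expected money = 12.50,
            @actual   money = dbo.GetOrderTotal(1);

    EXEC tSQLt.AssertEquals @expected, @actual;
END;
GO
EXEC tSQLt.Run 'OrderTests';  -- the same checks on every run, no forgetfulness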

I do know a few people who are customers of Redgate and use unit testing extensively in their database code. They definitely spend a lot of effort building unit tests, often having more test code than feature code, but they also have very low rates of complaints and bugs from users. I hope more of the people having success will publish details of their unit testing successes and failures, and I’d welcome more pieces at SQLServerCentral on either side of the issue.

For many people writing in-house applications, especially those installed in one location, perhaps a few bugs aren’t a problem. Maybe the impact is low enough that training developers to write tests and making the investment isn’t worthwhile. However, for those who have disparate external clients, or who install software in many locations, I bet that moving to a thorough set of repeatable, reliable, non-trivial tests will improve your software quality.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (3.2MB) podcast or subscribe to the feed at iTunes and Mevio.

Scary Deployments

I was listening to a web developer talk about some fundamental changes in a web platform. In this case, an older system was being replaced completely with a new one, and as one of the reasons, the developer showed some typos that had existed on the old site for years without being fixed. Why? This quote:

“Very few people understand how the entire system works that are still in the building … The thought of deploying [changes] brought people to tears.”

That can’t happen. Ever. We can’t be afraid to touch systems. When this happens we get paralyzed, and we don’t do good work. Or we’re not a good fit for a project. Or perhaps we’ve got a bad attitude.

I’ve worked in a few companies where developers were afraid to touch a system. It’s amazing how quickly this attitude becomes contagious, even scaring management away from considering change. In today’s world, where everything seems to need to change and respond to a changing world, that seems like a recipe for decline, not growth.

One of the founders at Redgate mentioned that if something is hard, we should do it more. If touching software is hard, document and test more. If deployments are scary, then you should work to reduce the fear and the problems, using the power of computing and scripting to mitigate risks and smooth the process out. That’s a large part of what DevOps is about: reducing the risk and pain of moving software from development to production environments.

Don’t let yourself be scared by software or deploying changes to a system. Have confidence and make things better.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and Mevio.

Building locally from VSTS

One of the things you need with a Continuous Integration server is the ability to build your software on some computer system and verify things work. With Visual Studio Team Services (formerly Visual Studio Online), this seems to be a challenge.

The VSTS team has thought of this and includes the ability to run your builds on a hosted system in the VS cloud, where you don’t need anything installed. However, the hosted build agents and servers support only a limited set of software, and SQL Server isn’t on that list.

However, there is another option. If you go to the control panel for your account and click the Agent Pools tab, you’ll see something like this.

[Screenshot: the Agent Pools tab, showing agents in the Default pool]

Notice the “Download agent” link. That’s what you want. As you can see, I’ve been testing, and I have agents already set up and registered on four machines. Here I’m going to add a fifth.

Once I download the file, I’ll extract it to see a list of files and folders.

[Screenshot: the extracted agent files and folders]

The first thing I want to do is configure the agent, so I’ll run ConfigureAgent.cmd from a command prompt. Note that this needs to be an administrator-level command prompt.

[Screenshot: ConfigureAgent.cmd running in an admin command prompt]

In my case there are some existing settings, but I’m overwriting them since I rebuilt my machine. Once I hit Enter, I get the chance to authenticate.

[Screenshot: the authentication prompt]

After this, the old agent name appears. However, since I’ve rebuilt and renamed this machine, I’ll change it. I answer a few more questions about configuring the agent properties. At the end, I’ll also authenticate to the Azure cloud once again.

[Screenshot: answering the agent configuration questions]

Now that things are configured, I can run the agent. I could set this up as a service, but I prefer to know whether the agent is (or isn’t) running and to see the output. I have set it up as a service before, and it works fine.

All I have left to do is run RunAgent.cmd, and I have an agent running locally that takes instructions from VSTS.

[Screenshot: RunAgent.cmd running in the admin command prompt]
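
Pulling the steps together, the local setup is just a couple of commands from an administrator command prompt (C:\Agent is simply where I chose to extract the download):

rem Configure: answer the prompts (server URL, credentials, agent name, and so on)
cd /d C:\Agent
ConfigureAgent.cmd
rem Run interactively; build output streams to this console
RunAgent.cmd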

If I go back to my control panel, I see a new agent.

[Screenshot: the new agent listed in the Default pool]

I can also trigger a build. I happen to have one set up for a project that points to local instances. Here’s the build definition, which uses the VSTS extension from Redgate to build a database.

[Screenshot: the build definition in VSTS]

I can click “Queue Build” and a build will start.

[Screenshot: the queued build in VSTS]

I see the build running in the console:

[Screenshot: the build running in the agent console]

And online in VSTS if I want; the build agent sends logs back to the VSTS service as it works.

[Screenshot: the build logs in VSTS]

This is a basic way to get a VSTS build working on your local machine with an agent. There is a lot more to configure if you need to, and if you need multiple agents, you can certainly pay for them with a different VSTS plan.

The Bad Data Shutdown

I’m a car guy. I like cars, I like driving, and I’ve spent a lot of time and money on vehicles over the years, swapping and enjoying a few dozen automobiles. If you are on Twitter, you might occasionally see @BrentO and me go back and forth on some car topic, usually Porsche related. This usually results in an hour or so of life wasted dreaming of a new car (including getting distracted while writing this piece after pasting that last link in, when I spent quite some time drooling over the Macan).

Recently there was an issue with the navigation system in Lexus vehicles. Apparently bad data was sent during a software update, which is not exactly what you want to happen in a car. I’ve had a few modern vehicles, some of which would be quite handicapped if the onboard computer were frozen or rebooting. In my current vehicle, this would cause issues with climate, navigation, entertainment, and potentially other systems. After all, I suspect many things, from door locks to speed control, are all integrated together. Certainly the drivetrain is, as an open door will automatically shift my car from drive to park, at least at low speeds.

As we move to more drive-by-wire, bad data or bad software that disrupts the computer systems could be very dangerous. It’s not just updates; this could even be some internal denial-of-service issue from a USB device or Bluetooth connection. In this case, Lexus acknowledged the problem, which I’m glad to see. The Internet ensures that problems can be reported by many users quickly and very publicly. That makes it hard to deny a widespread problem.

Delivering updates across wireless links is great. It’s cheaper for everyone, saves time, and owners appreciate the convenience. However, moving to this model often requires some sort of continuous delivery (CD) process, which should also allow for rolling forward and releasing fixes for problems. If the updates you deliver cause the system to cease functioning, then this doesn’t help. At the least, it means your QA process needs work and you don’t have a well-designed software delivery process.

Various companies are getting better at delivering updates to our systems without downtime, but there’s still work to be done. The smaller your domain of clients, the easier this is. For many of us who work on small systems, an application server or two and a database, we can certainly get much better at ensuring our updates are tested and, more importantly, that we can quickly deploy a second patch if we find an issue. That requires engineering a process that is known and stable, with the ability to respond quickly. For larger systems, with many clients, you need a really solid engineering and deployment process.

Above all, however, no matter what your deployment mechanism for updates, you need to be sure that any data you include is at the quality level you’d expect to have delivered to you.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (4.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

Better ReadyRoll Script Naming

One of the things that I like about ReadyRoll is that the product automatically builds new scripts named in sequential order. This usually results in a few scripts that look like this:

[Screenshot: auto-generated migration scripts with sequential, numeric names]

As you can see, these script names aren’t very intuitive. In fact, once you have lots of scripts, the list starts to look fairly complex and confusing. What about something more like this:

[Screenshot: the same migration scripts with descriptive names]

That’s easier to read and understand. I’d also have a better idea of what happens in each script. How can I do this? It’s easy.

Add an Object

First, let’s add an object in ReadyRoll. I’ll alter my Exams table to add a few columns. To keep this simple, imagine I want to add a modified date and a short description. I could do this in SSMS, but I’ll open the designer in VS. Here’s the table.

[Screenshot: the Exams table in the Visual Studio designer]

I’ll make my changes.

[Screenshot: the new columns added in the designer]

Now I click the “Update” button in the upper left. When I do this, I get a Generate Script option. I could do other things, but I like to do this and see my script before applying it to the dev database.

[Screenshot: the Generate Script option]

I click Generate, and I get the script. Notice it’s named with some random number (after the 0004) on the right.

[Screenshot: the generated script, named 0004 plus a random number]
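
The script itself is plain T-SQL. For this change, the generated migration would look something like this (the column names and types here are my guesses, not ReadyRoll’s exact output):

-- Adds the modified date and short description columns to Exams
ALTER TABLE dbo.Exams ADD
    ModifiedDate     datetime2     NULL,
    ShortDescription nvarchar(200) NULL;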

If I right click the script, I can do all the normal file operations.

[Screenshot: the right-click menu with the normal file operations]

Let’s give this a more descriptive name. It’s taken me a long time to move past my 8.3 filename days, but I’ve learned to take advantage of file names and make them descriptive. A few bytes in a name are cheap.

[Screenshot: the script renamed with a descriptive name]

That’s it.

ReadyRoll does use the characters in front of the underscore (_) to order scripts, so I don’t want to change those. I could, but in this case I need script 4 to come after script 2, at the very least.

After the underscore, I can do whatever I like. In this case, I can see the changes being made to my database just by reading down the scripts and seeing the order in which things will occur. I always have the detail in the code, but at a high level, I can see the changes.
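
As a hypothetical example, a project folder named this way reads like a change log for the database:

0001_create_exams_table.sql
0002_add_exam_indexes.sql
0003_create_students_table.sql
0004_add_modified_date_and_description_to_exams.sql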

I’m sure if you adopt this technique, you’ll find that it’s much easier to manage scripts and track what’s happening to your database.

We Manage Algorithms

“Every business is an algorithmic business.”

That was a phrase Microsoft’s Joseph Sirosh used in a keynote at SQL Nexus, talking about the future of software and data. Rather than managing data, many of us will move to managing algorithms, which will determine how data is interpreted, used, processed, and potentially returned to users as information. There are too many sources of data, and too much data itself, being generated at too quick a rate, to the point where algorithms become more important than the actual data in examining, grading, interpreting, filtering, and more.

This is exciting on one hand, with new opportunities for those who can develop, choose, write, tune, or enhance algorithms. I can easily see greater influence from both developers and DBAs as we work to better manage the floods of data, especially with the 50 billion sensors, IoT devices, and more that are predicted to be online in the next 5 years. That’s potentially a tremendous amount of data being generated.

On the other hand, this is a bit scary, as separating good data from bad in the ocean of bits, and choosing helpful rather than hurtful algorithms, might create lots of stress, and perhaps even fewer opportunities if only a few algorithms are reused. This also means we will need algorithms that can help us determine whether data is actually good enough to use. After all, in the deluge there will be bad data that potentially needs to be excluded from queries. Will software developers become more important than DBAs as we end up with more unstructured data stores, data lakes, or other constructs that might require less administration?

I’m not sure how things will change, but it will be an interesting world the next few years as we work with larger and larger, more diverse sets of data in our organizations.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.3MB) podcast or subscribe to the feed at iTunes and LibSyn.

Abstraction

One of the core tenets of good software design is to abstract away details in any particular part of an application. We want well-defined, well-constructed interfaces so that the implementation of any particular method or section can change without affecting the way the system works. This allows for improvement and upgrades over time. The same thing occurs in hardware, where we can replace a hard drive, a graphics card, or other components and the system should still function in the same manner. There might be a bit of work, such as updating a device driver, but the core system should still work.

This is also present in the real world. I can replace the wheels and tires on my car, since as long as the bolt pattern matches the axle, things still work. Electrical systems work this way too, allowing any device that has the correct plug and uses the expected voltage to interface with an outlet. The examples of abstraction are numerous, and the more we use abstraction, the more flexible our systems can be. Where we haven’t abstracted away details, it becomes complex and expensive to change part of a system.

In a database setting, we want to use abstraction where possible. The use of views, stored procedures, or functions allows the underlying table implementations to change without the application being too tightly coupled to the structure. This isn’t always well adhered to, despite the well-known practice of building a data access layer into an application. Too often developers want to tightly couple their application to the underlying table structure.
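
As a minimal sketch with hypothetical names, a view keeps callers insulated from a table refactoring:

-- The application only ever queries dbo.CustomerInfo
CREATE VIEW dbo.CustomerInfo
AS
SELECT c.CustomerID, c.Name, a.City
FROM dbo.Customers AS c
JOIN dbo.CustomerAddresses AS a
    ON a.CustomerID = c.CustomerID;

If dbo.Customers is later split, renamed, or denormalized, only the view’s body changes; every caller keeps working unmodified.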

But how abstracted should you be? Certainly I’d hope that your application has the ability to easily change connection settings, and these days I’d hope you actually have two: a read/write connection and a read-only connection. What about the database name? Should that be abstracted away? Again, I’d hope so, even in a multi-database application, if for no other reason than to simplify development by allowing the database name to change on a development server. Certainly security objects, especially encryption mechanisms, need some abstraction to prevent the requirement that they exist in non-secure environments.
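
For the database name specifically, synonyms are one way SQL Server supports this kind of abstraction: code references a stable local name, and each environment points that name at the right database (the names here are hypothetical):

-- Development points the name at a local copy of the database
CREATE SYNONYM dbo.AuditLog FOR DevAudit.dbo.AuditLog;

-- Production would instead run:
--   CREATE SYNONYM dbo.AuditLog FOR ProdAudit.dbo.AuditLog;

SELECT TOP (10) * FROM dbo.AuditLog;  -- callers never embed the database name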

Are there other abstractions you’d want to see widely implemented? I wonder what other examples might be important to developers or DBAs out there. I know that adding abstractions also brings complexity, and the ability to change those values between environments is critical. This could be done by injecting different parameters as software changes are deployed, but the mechanisms for doing this are still immature and not standardized.

There are plenty of other places we can abstract away implementations, but without mentioning too many, I want to know what you’d like to see. What abstractions would you want, or do you implement in your systems?

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

Accept Failure

Today’s Editorial was originally published on Feb 22, 2012. It is being re-run as Steve is out of town.

We don’t expect ourselves to be perfect, do we? Is there ever any project you tackle that you might not complete? Is there a doubt that it might not work as expected, or that it may need substantial rework? I think the vast majority of projects I undertake have some level of risk involved, and while I might understand that, I’m not sure I ever believe I will fail.

Most things that I’ve built in technology don’t work the first time, and in fact, I expect that. I have learned from mistakes, corrected the problems, and usually finished them with some level of success. That’s the way that so many of us in technology approach our jobs. We start building, find issues, and then fix them.

However, you cannot ever eliminate the risk that something will fail. There are times we need to abandon a project, or abandon the work done and rebuild the software from scratch. Those failures should be learning opportunities, and should allow developers to improve their work. From my perspective, it seems that too many managers view failures as events that have to be avoided. Perfection and success are the only acceptable outcomes. One slip-up and you may get fired.

It seems that’s how managers think about their careers, so they continue to push down dead-end roads and throw more resources at a project to recover some small level of success.

We will always make mistakes. The true failure comes from failing to learn from those mistakes and improve your future work. If management cannot tolerate these setbacks and problems, and allow for them, then not only will the work continue to be substandard, but people will spend more time worrying about avoiding blame than actually looking to improve their skills.

I can’t tell you when work should be abandoned or a project is hopeless, but every project ought to be examined periodically for this situation, especially when it is apparent that it’s in trouble. You can’t save every project, but you can learn to let some of them go, or change the situation, before it becomes a bigger problem than it already is.

Steve Jones


Great Developers

This editorial was originally published on March 12, 2012. It is being re-run as Steve is on vacation.

Is a great software developer worth 100 average ones? On one hand, I think there are some good arguments that it’s not true. One developer certainly can’t write the amount of code that 100 average ones can. However, there’s another way to look at things. A great developer can do things that the 100 will never think of, or never consider. He might not write code that does as many things as the code of 100 people, but I think a great developer could easily write code that performs a hundred times faster than the code 100 developers write.

That’s why you should always have an open position available for a great developer. If one is available, and they rarely are, you hire them if they want to work for you. You can always find things for them to do, and they can make improvements in code that your other 5, 10, or 20 developers will never come up with. I’d make sure they fit in your team and get along with others; you can get less work done if you have someone who is too difficult to deal with or too critical of others. While a great developer can accomplish things that others can’t, or won’t, they can’t do all the work.

Ultimately I think that managing great developers is hard, and they are unlikely to stay with your company for a long period of time. However, they are rarely available, and for a few years they might jump-start the evolution of your software, and potentially build something that makes your software great. I’d always have an open spot on my team for a great developer, and I’d hire them as soon as they became available, if I thought they would fit in well with the rest of the team.

Steve Jones