I was speaking with one of the development teams at Redgate earlier this year. They were working on a product and had planned out a few sprints' worth of work. Each sprint, called a train, was a couple of weeks long, with specific goals and ideas to be implemented. That was all good, but I noticed that there was a sprint in the middle of the list devoted to technical debt.
Technical debt is a strange term. It’s one that many managers don’t understand well, often because the code may work fine. I ran across an interesting piece that looks at what the concept means, with what I think is a good explanation. We get technical debt when we sacrifice maintainability to meet another requirement. The piece also looks at the accumulation of the debt and why it becomes problematic later. Certainly the more debt that piles up, the more difficult it can be to change code. Since we are almost always going to go back and maintain code, this becomes a problem.
I think the ideas given to keep technical debt under control are good ones. We should make an effort to clean code as we can, though not make it such a priority that we end up causing more work with constant refactoring. We do need to get work done. However, the suggestions given require a good amount of discipline and buy-in from management, and I’m glad Redgate tries to keep debt under control. I think our developers like the debt trains as well.
I thought the idea was pretty cool until I was looking for a feature to be completed and the technical debt train was running that week. I thought about complaining, but I decided to have patience and wait a week. After all, if the debt isn’t kept under control, I might be waiting much longer than a week for some fix or feature next year.
The Voice of the DBA Podcast
One of the projects that’s been on my list lately is to programmatically access Twitter for a few ideas I want to play with. Since I’ve been trying to learn some Python, I thought I would take a look using Python to update status and read status.
A quick Google search showed me lots of Python clients, but Tweepy caught my eye. I’m not sure why, but I ended up popping open a command line and downloading the library.
From there, I saw a short tutorial over at Python Central. I started by creating an app at Twitter for myself, which was very simple. Once that was done, I had a set of consumer tokens (a key and secret) that I could use. Another click got me to the access key and secret. Note: the easy way to do this is over at dev.twitter.com.
My first attempt was using this sample code.
import tweepy

consumer_key = "ABCDE"
consumer_secret = "12345"
access_token = "asdfasdf"
access_token_secret = "98765"

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

public_tweets = api.user_timeline()
for tweet in public_tweets:
    print(tweet.text)
This gives me a list of my tweets. At least, the last 20.
I then went to make an update by using this:
However that returned an error:
Hmmm, I double-checked the API, and this should work, so I’m guessing there’s another issue. I searched and found there’s a known bug. However, the workaround is just to use named parameters, so no big deal. Change the code.
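I don’t have the exact error text handy, but the change itself was a one-liner. This is a sketch of the before and after (the `api` object is the authenticated one from the snippet above, and the status text is just a placeholder):

```python
status_text = "Hello from tweepy"

# Passing the text positionally triggered the bug in this version of tweepy:
#     api.update_status(status_text)
# Passing it as a named parameter works:
#     api.update_status(status=status_text)
```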
Now it works.
This is the first step for me to look at building an app that might send some tweets on my behalf, perhaps with the data stored somewhere, like, I don’t know, maybe a database?
Could a group of software developers make changes that fundamentally alter the way a software system should work without management being aware?
That’s the question being asked of VW right now. Most people are skeptical, but I ran across a piece that wants to lend credence to the idea that a few software engineers acted with few people being aware. They did this not because they wanted to defraud everyone, but because they wanted to solve a problem that they couldn’t solve in other ways. They also didn’t see the alteration of test results as much of an issue because they thought the tests were too stringent.
I’m not sure I believe that’s what happened. Certainly there is some disagreement from various parties, but with my experience in software projects, management always wants to know how things are proceeding, with more and more questions whenever the applications don’t work as expected. When problems are solved, natural human curiosity leads more managers to ask for details, even when they don’t understand. In this case, I can’t imagine lots of VW management weren’t aware that software was being used to pass tests. Many people report to many others, and everyone would have wanted to know how VW solved this engineering problem.
The stakes for organizations will continue to rise in a global economy, and software will play increasing roles in many companies. Will we see more and more pressure to manipulate our world with software, even in criminal ways? I suspect so, and I sympathize with those that might face the loss of employment for not complying with the requirements they’re given.
Ultimately I think transparency of software is the best way to bring about better software that complies with regulations and rules. Transparency also ensures that copyrights aren’t violated (since violators’ code is available), and we can determine whether security is being built into systems. Perhaps best of all, developers can all learn from each other, seeing exactly what works and doesn’t in each system.
I doubt we’ll get there, but transparency would be a nice place to be.
I once worked in a company that had a VB6 application (this was a long time ago), which had been mainly written by three developers working at the company. Two of them left, but we still had one of the original developers and five or six others who had worked on the application for a year or more.
One day we were discussing changing a section of the application to add functionality. I was surprised to find that none of the developers wanted to work on the code. They were all “afraid” to make changes. Having been a developer and spent time digging through other people’s code, I was surprised. Certainly some tasks are difficult, but being afraid to change code?
I wish I’d been more knowledgeable then. Today I’d tell the developers that the first thing they need to do is write tests. Unit tests, integration tests, whatever fits; they need some way to determine if they are breaking functionality.
And if they do break something, that’s fine. Go fix the breakage. Refactor other code, write more tests if they are needed, and go for it. You learn by breaking things. Your tests protect you and let you refactor code. As much as I realize we don’t want to spend unnecessary time writing tests, we need something to examine our code as we write. We might as well use a testing framework to help. That way we’re not afraid to change the existing application.
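For example, the first tests over scary legacy code don’t have to judge the logic; they just pin down what the code does today, so a refactor that changes behavior fails fast. Here’s a minimal sketch using Python’s unittest module, with a made-up legacy function (the names are mine, not from any real codebase):

```python
import unittest

# A stand-in for some legacy routine nobody wants to touch.
def legacy_discount(price, customer_type):
    if customer_type == "gold":
        return price * 0.9
    return price

class TestLegacyDiscount(unittest.TestCase):
    # Characterization tests: record today's behavior, whatever it is,
    # so any refactor that changes it turns a test red immediately.
    def test_gold_customer_gets_ten_percent_off(self):
        self.assertAlmostEqual(legacy_discount(100.0, "gold"), 90.0)

    def test_everyone_else_pays_full_price(self):
        self.assertEqual(legacy_discount(100.0, "silver"), 100.0)
```

Run the suite with `python -m unittest` before and after each change; a failure tells you exactly which behavior you broke, which takes most of the fear out of touching the code.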
I’ve written a few posts on the Redgate Software blog to try and show how I see the DLM model, and how we see things at Redgate. We have a lot of developers that work in a similar way when building application software in C#, Java, Python, or other languages, and much of the company is trying to bring more engineering to database development.
Part of what the DLM maturity model aims to do is help us classify how we progress to a more engineered, repeatable, and reliable way of managing database development. You can read my overview, and then dive into each of the various levels we’ve built. The levels are:
- S1 – The Manual Stage
- S2 – Automated Version Control
- S3 – Continuous Integration
- S4 – Automated Deployment
Some of this is based on the CMMI model from the SEI, and some is based on what application developers are doing with their own continuous delivery models. Simple Talk has written a What is DLM? article as well, which includes a different view of a maturity model.
I think this process becomes more important over time as we depend more on software and the databases behind it, with less and less tolerance for downtime or human mistakes in the deployment process.
I’d like to get feedback from people on what they think of this model, and of the idea of engineering better database development. I know many people have built their own process, but far too many of the processes rely on custom scripts that are built and edited for each deployment, sometimes in the middle of the deployment. I think we could actually make database development better if we applied some better structure to our deployment.
Redgate is working on tools to support this, in a few ways, but this isn’t about Redgate. Rather, it’s about building better software for everyone, whether you use Redgate tools, another vendor’s tools, or build your own. Follow a better engineering process.
This editorial was originally published on July 25, 2011. It is being republished as Steve is at the PASS Summit.
A computer deals with interrupts all the time. They are the mechanism by which it can simulate multi-tasking among many different programs in a modern operating system. However those interruptions have a price, and too many of them can affect performance. As hardware grows larger, we have other issues from interruptions that we try to mitigate with techniques like soft-NUMA affinity or multiple pipelines built into hardware. All of these are designed to prevent a computer from spending any significant time on non-productive tasks because of interruptions.
In the real world, many of us deal with regular interruptions at work. They might be emails, instant messages, phone calls on a cell phone, or the old-fashioned someone-stopping-by-your-cube-to-chat. All of these things add up to less productivity, especially for developers. One study finds a 10-point IQ drop from regular email and phone interruptions. I don’t know about you, but I’m not sure I can afford a 10-point drop in IQ when I’m working.
Some companies are starting to realize that developers’ brains are a scarce resource, and interrupting them can dramatically impact productivity. I have found some places, like this one, that are setting aside quiet time for developers to work without being bothered. Similar to the technical debt that Steve McConnell has talked about, there seems to be an interruption tax that some development shops are loath to pay.
Even if you don’t gain any productivity, or have fewer bugs, or ship more often, I think that your developers will appreciate it. It could be an easy way to increase happiness, improve retention, and even sell your company as a good place to work. And it’s easy to implement: just leave people alone a few hours a day.
This editorial was originally published on Mar 7, 2008. It is being re-run as Steve is out of town.
I saw an interesting thread a while back where one of our very talented community members was asking about how to go about altering data in an application for a demo. It’s a valid scenario and one that I’m sure many people have run into at some point in their career. You want to show data that’s somewhat real so that it showcases the application and what it can do, but you don’t want to show real names, amounts, or any identifying information.
It’s a bit of a quandary, and it seems that many people solve it in one of three ways. They just use test data, which is a very small set of data and doesn’t show as well. Or they may alter everyone’s name to something like “Steve Jones”, all phone numbers to 555-555-5555, etc., which looks funny.
Or you just show the production data and wink and say “I don’t usually do this, but since you’re such a valued client…”
So for the Friday poll: Do You Alter Production Data When It’s Copied?
Meaning, when the data gets moved to a non-production system (demo, test, development, etc.), do you alter the data and obfuscate it to remove any identifying information? Make it “safe” data that can’t be used to somehow compromise your production system.
It’s a good practice, and one that I used to follow at a couple of companies. I didn’t have any tools, but I did write scripts, load a few base tables of names, and then run those scripts as part of the restore job. They would randomly reassign new names to people, companies, addresses, etc. We would also redo phone numbers in sequential order (555-555-0001, 555-555-0002, etc.), and even randomly add products to sales or amounts to financial figures. It wasn’t perfect, and if you worked on the production system a lot you could guess which people were which, but it worked well for testing and client demos.
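Those scripts are long gone, but the idea is easy to sketch. Here’s a minimal Python version of the same approach; the row layout and the name list are invented for illustration, not the original scripts:

```python
import random

# Invented base table of safe replacement names.
SAFE_NAMES = ["Steve Jones", "Ann Smith", "Bob Garcia", "Dana Lee"]

def obfuscate(rows, seed=None):
    """Scrub identifying data the way the restore-job scripts did:
    random safe names, sequential fake phone numbers, jittered amounts."""
    rng = random.Random(seed)
    scrubbed = []
    for i, row in enumerate(rows, start=1):
        scrubbed.append({
            "name": rng.choice(SAFE_NAMES),              # random safe name
            "phone": f"555-555-{i:04d}",                 # sequential fake number
            "amount": round(row["amount"] * rng.uniform(0.8, 1.2), 2),  # nudge figures
        })
    return scrubbed
```

The sequential numbers make two restores comparable for testing, while the randomized names and amounts keep real customers and real figures out of a demo.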
I actually ran into a product recently (Camouflage) that does this and it’s a great idea. It’s something that quite a few companies should be implementing to ensure that their non-production systems are that much more secure.
This editorial was originally published on May 5, 2011. It is being republished as Steve is at the PASS Summit.
What’s the time for your IT department to get from great idea to a resulting application? This is a very good piece from CIO magazine that finds many IT departments are seen as too slow. However, there are a number of companies that are trying to innovate and find ways to increase the speed at which IT departments can deploy an application and respond to a business need.
One great quote in there is “velocity is more important than perfection”, which is a tenet that I have found to be very true over the years. It’s not that you throw junk out that isn’t well built or tested, but that you don’t try to meet every possible requirement or handle every little issue. The system has to be secure, handle errors, and meet the basic requirements, but it’s more important to get something done and in production than to have it perform and scale perfectly.
Is that heresy to the developers and DBAs out there? Perhaps, but I think this methodology has to go hand in hand with another mantra I heard from Jason Fried: do more of what works and less of what doesn’t. In this case, if a system shows promise and starts to get heavy use, it receives more resources and perhaps gets refactored in real time, even as it gets enhanced with new ideas.
“You want IT to be in constant test-and-learn mode” is another quote showing that IT needs to work closely with the business to try ideas, learn from them, and move forward. The Agile style of development applies, and in some sense I think this is where the strategic IT department is headed.
For the data professional this means that you must learn to model quickly, and with an eye towards a flexible design that might need to change regularly. We need to understand the businesses we work in better so that we can anticipate how requirements might change.
Management has to buy into the idea that applications will not be perfect, they won’t be polished, and most importantly, they are essentially prototypes that either need to have additional resources spent on enhancements or should be abandoned quickly. However, I think this is a great way to develop internal applications that can provide a nice ROI and be a more enjoyable way for developers to work.
I wrote The Age of Software a while back and noted that supporting previous versions of software isn’t necessarily a good use of resources for development teams. I especially think this is true of SQL Server. But does that mean we should abandon aging software platforms?
It’s a tough question. I’ve certainly talked about the case for upgrading, and the reasons why you might not. For any particular instance, however, I think each of you has to make the call about whether the software still works for you or it doesn’t.
If it works, then it seems many of us will live with the old software and keep it running. As late as a few years ago I knew a company running SQL 6.5 with a piece of software built in 1996 and last patched in 2001. However this software ran a building key card system, and there wasn’t a good case to be made for upgrading.
For a software developer, however, when you look at aging pieces of software, even those that customers may pay for support on, is it worth maintaining the skills and the support? If you don’t have staff turnover, then perhaps. If you do, I think it might be time to let the product die.
I’m torn on the way we deal with software in our world. On one hand, I’d like to see customers given source code for end of life platforms in order to support themselves if they wish. On the other, I understand the IP concerns, and business case to let software die.
Ultimately I’m mostly OK with the current way most vendors support software. If it works for a decade and support ends, I can continue to use it. Until it doesn’t work, and then I am glad that most vendors have an upgrade for me.