I really wish we would see detailed analyses of failed systems, if for no other reason than to avoid repeating the same mistakes in the future. I’d hope that technical people who were engaged in building systems that didn’t work well would share their experiences, whether from development, deployment, administration, or even operations. I ran across a piece on the lessons of Orca, the web application that Mitt Romney’s campaign used, or tried to use, to manage its operations.
It seems there were a number of problems with this system, which is almost stunning. I’d think this is a well-known process, built from pieces of technology that appear in so many systems these days. Integration is never smooth, though, and the short time frame of an election campaign doesn’t leave a lot of time for testing, much of which apparently didn’t get completed. The article mentions many of the same things I’ve seen cited in the past when applications don’t work as expected: a lack of training, a dearth of hardware, tooling that doesn’t work. All of these have been reported for years in software engineering journals and articles.
Perhaps more analysis won’t help. I doubt that even a high-profile failure would convince the manager of an internal software development project to spend more resources or lengthen the timeline to prevent problems with an application. Like most developers, managers are eternal optimists when it comes to software being completed, regardless of their past experience. They never seem to learn that pushing for faster releases, cutting features, and limiting testing will leave the end product showing poorly for the customer.
The Voice of the DBA Podcasts
We publish three versions of the podcast each day for you to enjoy.