Is Skynet Coming?

That’s the question this week: Is Skynet coming?

I read a piece recently noting that Stephen Hawking, Elon Musk, and Bill Gates have all warned us about artificial intelligence. Each has been quoted saying this is potentially something the human race needs to be cognizant of. However, I wonder.

I know what IBM’s Watson does is amazing, but is it intelligence? Is it anywhere close to sentience? Or is it really just pattern recognition and matching with facts? I think more of the latter, and I’m not sure we’re moving closer to a computer intelligence.

I guess there are narrowly defined domains where computers seem to be improving their capabilities, but it seems to me that these areas are defined by the programmers and the systems are tailored to a specific ability.

I do agree with the article that combinations of massive computing power and humans will make fundamental changes in the world. I think many, many jobs are potentially going to be lost and workers dislocated because of the ability of computers to do many jobs that humans perform today. I don’t have any solutions here, but I am glad I work in technology.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (1.6MB) podcast or subscribe to the feed at iTunes and LibSyn.

Smart Gadgets

I saw this post on what a smartwatch should look like and it got me thinking. I’ve got a Kreyos watch coming (I hope) and it fulfills some of the things I was looking for, and a few that the Pebble didn’t.

However, I wonder where we’ll go with gadgets in the future. My son and I sometimes dream, trying to decide what we’d like to see built in the future. One of the main things we talk about is how we might wear things. A bracelet of some sort seems to be the consensus: something larger that we can see better, but more convenient than a smartphone. Already I think the 5″ ones are a bit large and cumbersome to use. One of my complaints about the Galaxy S4 was that I couldn’t easily use it with one hand.

Glasses are an interesting idea. Google Glass gets some things right, though I think including the camera caused a lot of concern among people, and rightly so. Even though cell phone cameras haven’t caused too much of a problem, there certainly have been privacy incidents. I like the idea of getting some information in my field of vision, and I’d like to try some “smart glasses” at some point.

I found this post on gadgets out there, and most of them aren’t revolutionary and a few aren’t interesting. A USB drive in cuff links or a necklace? Doesn’t seem that interesting to me. What I’m looking for is perhaps some type of mix of gadgets that can work together.

Could I get a phone that connects with various devices and can run for a long time? Perhaps a belt buckle that provides power and also tracks movement like my Fitbit Flex? It could connect to my watch as well, providing me with information from a device in a pocket. Ultimately I’m not sure what would enhance the phone much. Any jewelry might not provide much more than notification through vibration, or possibly light, and you’d still need a device.

Ultimately I think I’d like more ways to connect devices together: easily getting my phone to project onto a larger monitor when one is available, like in the kitchen or car, perhaps even using a camera there to communicate with others.

My hope is we’ll get gadgets that communicate, but with the separation of platforms and the reluctance for many to work together on common APIs, I’m not sure we’ll get there.


Most of us that work with technology hate downtime. We don’t want a system that we’re using to go down. We don’t want any software that we depend on to fail when we need it. Most of all, we don’t want our phones ringing because some system we’re responsible for has gone down. We do everything we can to keep our applications online. We avoid patches. We try to test as much as possible before deploying changes. We also may apply generous amounts of hope and prayer.

However that’s not how all companies run their internal systems. Netflix has taken the opposite approach, actually creating downtime for some of their systems using what they call a “chaos monkey,” and they think it could help you. To be fair, Netflix doesn’t take their entire application offline, but they do cause failures in the hardware and software, specifically to see if their redundant and scaled-out architectures can limit the impact on users.
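The mechanics are simple to sketch. Here’s a toy version in Python (the service names and the “failure” are invented for illustration; Netflix’s real tool terminates actual cloud instances): it picks one running instance at random and fails it, so you can check whether the rest of the fleet absorbs the loss.

```python
import random

def chaos_monkey(services, rng=random):
    """Pick one running service instance at random and 'fail' it.

    In a real system the failure step would terminate an instance;
    here we just flip a flag so the effect on the rest of the
    (hypothetical) fleet can be observed.
    """
    running = [name for name, up in services.items() if up]
    victim = rng.choice(running)
    services[victim] = False  # inject the failure
    return victim

# A toy fleet with redundant instances of each service.
fleet = {"web-1": True, "web-2": True, "db-1": True, "db-2": True}
victim = chaos_monkey(fleet)
survivors = sum(fleet.values())
print(f"killed {victim}; {survivors} of {len(fleet)} instances still up")
```

If the application still serves users with `survivors` instances up, the redundancy is doing its job; if not, you found the weakness on your schedule rather than at 3 a.m.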

It’s an interesting idea, though one that I’ve not seen many companies be willing to implement. Netflix thinks you could benefit from it, but they also run a series of services that are scaled out across many machines. Many companies I’ve worked with have services on one machine handling an application, and they accept the risk that a system might fail and users will experience problems. Given the quality of modern hardware, that might be a good bet to place these days.

However more and more of us are running redundant systems for some applications. If you think the Chaos Monkey could help you, let us know.

Steve Jones


The Voice of the DBA podcast features music by Everyday Jones. No relation, but I stumbled on to them and really like the music. Support this great duo at


This editorial was originally published on April 23, 2009. It is being re-run as Steve is on vacation.

Data quality is an issue in many of our systems that rely on humans for data entry. Even if you only import data from other sources, how reliable is the data that exists in those systems?

I found an interesting article on computer interfaces and it highlights some of the ways that have been designed to work with computers and put data into, as well as get data out of, computer systems. Most of these are probably familiar to you and there’s nothing really groundbreaking in the article.

It had me thinking, however, of whether or not there’s a better way to input and extract data from systems. I’ve been watching Star Trek: The Next Generation on DVD lately, and they primarily use voice recognition, but I see all kinds of flaws in that. My wife has worked with a lot of speech technology, and her current company makes the Dragon speech recognition software. I asked her if she thought it was good. She did, but it paled when compared to a good, old-fashioned keyboard. The rate of input and the few mistakes she makes typing far outweigh the benefits of using speech. Perhaps that will improve in the future, but I wonder if it will ever really supplant the keyboard.

Multi-touch has gained a lot of notoriety lately, especially in presentations, but I’m not sure it’s a great way to get data out of a system for a single user. There are other possibilities, but in anything we develop, the rate of input, as well as mistakes, is on my mind. As a data guy, I am entirely too aware of how much work it is to clean data later.
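One way to cut down on that cleanup is to reject obvious mistakes at input time. A minimal sketch in Python — the fields and rules here are invented, but the pattern applies to any hand-keyed form:

```python
import re

def validate_entry(row):
    """Return a list of problems with one hand-keyed record.

    The fields and rules are made up for illustration; the point is
    to reject bad data at entry rather than clean it up later.
    """
    problems = []
    if not row.get("name", "").strip():
        problems.append("name is required")
    if not re.fullmatch(r"\d{5}", row.get("zip", "")):
        problems.append("zip must be five digits")
    try:
        if int(row.get("qty", "")) < 0:
            problems.append("qty must be non-negative")
    except ValueError:
        problems.append("qty must be an integer")
    return problems

print(validate_entry({"name": "Widget", "zip": "80301", "qty": "3"}))  # -> []
print(validate_entry({"name": "", "zip": "80x01", "qty": "-2"}))
```

A few checks like these at the interface catch the errors while the person who made them is still looking at the screen.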

Managing, storing, and securing data is all great, and you can do the best job in the world, but if users can’t easily access and use the data, is it valuable? If it’s not correct, is it useful? The way we work with computers will likely evolve, and while I’m not sure what will work best, I do know that we DBAs will always be in demand to ensure the data is as correct as it can be.

Steve Jones


Sensors and Data

The programmable world. It’s an interesting concept, but not one that requires extremely detailed specifications and strongly bound software systems built to interact with each other. I think many of us have assumed that’s how we would enable the further computerization of our physical world. As an example, we’d have cars that communicated with the road, with other cars, and with some semi-central authority managing our interactions, all of the infrastructure pre-built.

However, that isn’t necessarily what will happen. In this O’Reilly piece, the author notes that cheap sensors can read the color of lights the same way humans, or my Lego Mindstorms robot, can detect color. Traffic sensors need not communicate with cars; these sensors could instead just identify cars and count them, measuring their speed and adjusting traffic lights based on actual conditions. Imagine the future when more and more sensors gather their own data and make decisions.
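The control logic such a sensor enables can be very simple. A sketch (the timings are made up, not from the article): the light just stretches its green phase with the number of cars the sensor counted in the last cycle, capped so cross traffic still gets a turn.

```python
def green_seconds(cars_counted, base=20, per_car=2, cap=60):
    """Length of the green phase, driven by observed demand.

    The sensor only counts cars during the last cycle; no
    car-to-infrastructure communication is needed. The base,
    per-car, and cap values are invented for illustration.
    """
    return min(base + per_car * cars_counted, cap)

for cars in (0, 5, 40):
    print(cars, "cars ->", green_seconds(cars), "seconds of green")
```

No pre-built infrastructure, no protocol the cars must speak: the decision comes entirely from data the sensor gathered itself.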

In a sense, this is how the Google self-driving car works. Rather than depend on infrastructure and external programming, the car gathers its own data and adjusts its behavior depending on the interpretation of the data. Whether or not you like the idea of self-driving cars (I do), the idea of an autonomous device acting based on programming and a large amount of data is a fascinating move forward in computing.

I think this shows that data gathering, processing, and analysis will become a more important part of our future computing worlds. Some of this data will be transient and discarded, but lots will be stored. We’ll use data to interact with the real world immediately, but we will also perform analysis later and reprogram our devices to operate better in the future. That means there will be new, and more, opportunities for those of us working with data.

Steve Jones


The Human Touch

How often does human error cause issues? Recently a rocket crashed in Russia, there have been numerous incidents of drone crashes as more and more unmanned aircraft take to the skies, and a few years ago we had an Air France disaster that might have been caused by humans making poor decisions or engaging the wrong controls. Those are incidents where the wrong button press has large consequences, either in physical damage or the loss of life.

Many of us make mistakes constantly as we work in the various tools and environments we need throughout our day. We click the wrong button in SSMS, we connect to the wrong server and run a script, or we fail to test a change. All of these are mistakes made by humans, and often they are mistakes that could be prevented if we did a better job of sticking to routines and processes. That can be hard, but perhaps checklists can help here, along with some double checks by coworkers.

That’s why I think using scripts in T-SQL, and using the Script button in SSMS to generate the code that you can run (and save), is the best way to work with your servers. As much as I think PowerShell (PoSh) can be a pain to write and debug, there’s a good argument to be made for using it when performing complex administrative tasks, especially across servers. Using code rather than forms and buttons is just a better way to accurately and consistently make changes in a controlled manner.

This is an area where everyone could work more efficiently. Developers are usually used to working with version control systems and tracking all their changes. However, they will often make configuration changes to their machines, SQL Server, IIS, or some other software, and forget to track those changes. Making these changes in code, through T-SQL or PoSh, and tracking them in a VCS would help smooth software deployments. DBAs and other operations staff should learn to use version control systems as well, helping to track down the root causes of issues.

Ultimately, humans are often the weakest part of most systems. We should understand that, accept it, and compensate as best we can.

Steve Jones


Data Vision

Will computers “see” more in the future and help us with new tasks?

One of the things that I often do when analyzing data is examine visual representations. I don’t ignore the numbers, but often a graph or picture of how the data is distributed or organized gives me a starting place for more detailed examination of the actual figures. It has worked well for me, and going back and forth between numbers and graphs allows me to better understand the patterns and tendencies embedded in the data. Humans are visual, and we are extending that capability more and more to machines.

Years ago I worked for a company that imported wood and sold it in the US. In one of our plants we sorted and organized wood pieces using a combination of human and machine effort. Conveyor systems moved the wood along, and humans examined the individual pieces as they traveled by. Using chalk marks, they were able to mark items as better or worse quality than others. A computer system scanned for the chalk marks and was able to separate the wood more effectively and efficiently than in the past.

That type of human and machine analysis is being used more in many industries. However, instead of using humans to do the analysis, computer systems can now do it in some cases. I ran across a piece about software that examines physical parts from a manufacturing process, looking for defects. The computer system can do a better job, faster than most humans. There might be a verification step by humans, but for many parts this means a higher level of both quality and productivity.

I could imagine this type of computer examination has a place in data systems as well. We could train an algorithm to look for patterns in a visualization, and perhaps highlight them for more examination by a human. We could even have secondary systems that examine error output from something like an ETL process and direct developers to the potential issues in their logic, or in the source files, pointing out data or formatting problems. A little more checking, or real-time testing, might help improve the overall quality of our processes, whether in the real world or embedded in software.
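Even a crude version of that idea is easy to sketch. Here the “algorithm” (a simple stand-in for anything trained or more sophisticated) just flags points that sit far from the mean, directing a human to examine them rather than correcting anything automatically:

```python
from statistics import mean, stdev

def flag_outliers(values, sigma=2.0):
    """Return indices of points a human should look at more closely.

    A deliberately simple stand-in for a trained pattern detector:
    anything more than `sigma` standard deviations from the mean is
    flagged for review, not auto-corrected.
    """
    m, s = mean(values), stdev(values)
    if s == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - m) > sigma * s]

# An ETL batch of sensor readings with one suspicious value.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2]
print(flag_outliers(readings))  # -> [5]
```

The point isn’t the statistics; it’s the division of labor. The machine narrows a large output down to a few candidates, and the human judgment is spent only where it’s needed.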

Steve Jones

The Voice of the DBA Podcasts

We publish three versions of the podcast each day for you to enjoy. Due to technical issues, we have no podcasts today. Actually, Steve’s camera died and he is in the process of replacing it. Hopefully we’ll be back tomorrow with the podcast.

Data Will Drive the World

Software, and data, will eat the world.

There’s a well-known essay from Marc Andreessen that talks about how software is eating the world. There’s a lot of truth to this, in my opinion, and it is becoming important for more and more people to realize that software is going to be a part of all aspects of their lives. Whether in business, in your personal life, in government, or anywhere else, software is going to increasingly be how we interact with the world. This will bring about many opportunities for people in technology to help shape the way those interactions affect all of our lives.

However, it’s not just the software that’s important. The data that drives this software is arguably more important than the software itself. This data drives the software’s algorithms to produce some result or action. Hacking the data to change values can even change the results from the software, and different algorithms might interpret or react to the same data differently. If data isn’t more important, it’s at least as important as the software that processes it.

We are seeing the additional recording and gathering of data allowing all sorts of new software to be written. Some of this software performs jobs that humans have done, in a faster and perhaps more efficient way. Other software is helping people understand the world in a way they might never have seen it. We even have software replacing other software. That feels quite surreal to me, but it’s made possible by more and more data being available to help guide the development and growth of systems.

We are in a good business for the future, with the growing need to process, manage, and manipulate more and more data. It’s up to us to learn the skills we need to do this efficiently, and in ways that can’t easily be automated.

Steve Jones


When will we look back at this?

It’s amazing how software has advanced. I look at the software I use, and I’m amazed. I still remember texting to 40404 for basic Twitter usage on my Windows Mobile 6.5 phone. That seems miles away from the apps I now use on my current iPhone 4S. I almost can’t comprehend what my life would be like if I didn’t have a smartphone now.

I saw this video of a HoloDesk from Microsoft Research. It’s neat, and I can imagine the possibilities of what might come, but for now it’s very basic.

It makes me wonder when we’ll have something more like this interface:

Trusting Systems

What’s the cost of this?

Computer technology has become more and more integrated into all sorts of businesses. These days, when I look at the ways in which automation and technology are embedded in business, I’m amazed to think that early in my life I worked in businesses that didn’t have any personal computer systems. Unfortunately, this integration comes at a price: dependence. We depend more and more on our systems working as expected for businesses to continue to operate.

Most businesses can tolerate some issues and failures in their systems for a short period of time. As much as I want to maintain 5 9s of uptime, I’ve never been in a business that really required that level of stability. I’ve had systems go down that management claimed were critical. They’ve been down for a couple days, and the companies didn’t go out of business. We have a lot more tolerance than management would like to admit. If you work with a 24×7 environment like Amazon’s online store or a banking application that is generating revenue every second, that might not be true, but for most of us, in most organizations, for most systems, we can tolerate some downtime.
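It helps to put numbers on “5 9s.” A quick back-of-the-envelope calculation (plain arithmetic, not from any vendor SLA):

```python
# Allowed downtime per year at common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960

def downtime_minutes(availability):
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, a in [("three 9s", 0.999), ("four 9s", 0.9999), ("five 9s", 0.99999)]:
    print(f"{label}: about {downtime_minutes(a):.1f} minutes of downtime per year")
```

Three 9s already allows under nine hours a year, and five 9s allows barely five minutes, which is why so few businesses genuinely need, or pay for, that last level.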

However, the effects of some failures can’t easily be measured. American Airlines had computer issues recently and had to delay or cancel 400 flights. I’m sure there were some costs in compensating customers who took other airlines or received vouchers for services. However, most of the flights were moved to later in the day, and I doubt there were many refunds. The short-term costs of this computer issue were probably relatively small. Given that some customers might have skipped or moved flights, paying a change fee in the process, there might not have been any absolute short-term loss at all.

However, it’s impossible to know how many other businesses were affected: sales calls that were missed and not rescheduled, people who missed events that were important to them and that they’ll never be able to attend, or even penalties paid on contracts for the failure to deliver something in person. What’s also impossible to know is how many people will reconsider booking flights with American in the future because they lost their trust in the company.

That’s really the bottom line for most of our management and our businesses. If we can’t provide reliable computer systems, management won’t trust us, and their customers won’t trust them. If that happens too often, we won’t be employed, and at some point in the future, we might find ourselves unemployable if our reputation is such that we can’t keep systems running.

Steve Jones
