Python Command Line Calls

There was a time when I worked at a company where we lived in the command line. This was in the early 90s, before Windows 3.1 was released, and we primarily used DOS on a Novell network.

We also had paper phone books on every desk for the 1,000+ people on the property. However, as you might guess, the phone books went out of date and were only updated a couple of times a year. We might get a new sheet, but mostly people learned to cross out a name and write in a new extension for the people they dealt with regularly.

However, updates to the master list happened regularly, every few days. These were made in a small Lotus 1-2-3 file that an administrative assistant maintained. As a network person, I knew where this file lived, and I arranged a text export of it every night with a scheduled task.

Why text? Well, I’d come from a university where we had our phone details in a text file and would grep the file to find information. In DOS, I knew we could do the same thing with FIND. However, rather than type the FIND command with its parameters every time, I put a batch file on our global share that called FIND with the needed parameters and the path to the phone book. I called this 411.bat. When I needed a phone number, I could type

411 "Andy Warren"

I’d get Andy’s name, location, and phone number back. It was a trivial piece of programming for me, but the rest of the network team, all non-programmers, were thrilled. I even added a /? check to return help information to the user.
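Today the same trick translates directly into Python. Here is a minimal sketch of a 411-style lookup, assuming a plain-text phone book with one person per line (the file name and format here are my inventions, not the original DOS setup):

```python
import sys

def lookup(path, term):
    """Return lines from a text phone book that contain the search term."""
    with open(path) as phonebook:
        return [line.rstrip() for line in phonebook
                if term.lower() in line.lower()]

if __name__ == '__main__':
    # Mimic the old /? help check from 411.bat
    if len(sys.argv) < 2 or sys.argv[1] in ('/?', '--help'):
        print('Usage: 411.py "name or extension"')
    else:
        for entry in lookup('phonebook.txt', sys.argv[1]):
            print(entry)
```

Saved as 411.py, `python 411.py "Andy Warren"` prints every matching line, and /? returns usage information, much like the original batch file's help check.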

After my playing with Python last week, I decided to do this for myself as well. I took my Python program that sends tweets and changed it to send a tweet when the program is called, using the parameter as the text of the tweet. The code looked like this:

import sys
import tweepy

def send_a_tweet(tweettext):
    consumer_key = "X1GWqgKpPP4OuR1XNqWJZ7hw6"
    consumer_secret = "QW3EkMHlyzFxytHOxQr5mEy69AHn8DjyWyRG5CAQ0wjK9RqUZ2"
    access_token = '14607509-MqpeZFsljo0JzS0VGgTSik1fq5klvJpqc1x6HAsiu'
    access_token_secret = '7wIZlLHEv1PbIqXtczc2LOsJgjMP3dCRRw5ajMvkjEspF'

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)

    api = tweepy.API(auth)
    api.update_status(status=tweettext)

if __name__ == '__main__':
    send_a_tweet(sys.argv[1])

I placed this in a “\tools” folder that I have in my path. I also added a “tweet.cmd” file in this folder with this code:

python c:\tools\ %1

Since Python.exe is in my path as well, I can do this:

[Screenshot: sending a tweet from the command prompt]

And I’ll see this on my timeline. I guess you’ll all be able to see this as well.

[Screenshot: the tweet appearing on the @way0utwest Twitter timeline]


Why bother? Well, it was partly playing around. As I have been learning Python, I have mostly been working in an IDE, solving small problems, but not really doing anything useful. I also like the idea of command line tools, since I find them quick. Tweetdeck is big and bloated, and if I want to send a tweet from my desk, this is a quick way to do it. I could do a “readtweets” as well, and may.

However, I also learned how to call Python programs from the command line, which is a good step toward building more useful programs that I can customize. This is also the start of being able to schedule a program, and perhaps build more automation into my life with Python.
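For anyone following the same path, Python's standard argparse module is a natural next step; it provides the help output I used to hand-roll with a /? check. This is only a sketch, with an invented `tweet` program and a dry-run flag standing in for the actual tweepy call:

```python
import argparse

def build_parser():
    # argparse gives us -h/--help for free, like the old /? switch
    parser = argparse.ArgumentParser(
        prog='tweet',
        description='Send a tweet from the command line.')
    parser.add_argument('text', help='the text of the tweet to send')
    parser.add_argument('--dry-run', action='store_true',
                        help='print the tweet instead of sending it')
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.dry_run:
        print(args.text)
        return
    # send_a_tweet(args.text) would go here
```

Calling `main(['Hello from the command line', '--dry-run'])` just prints the text, which makes the plumbing easy to test before wiring in the real API call.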

Mostly, however, it was just fun.

Visual Studio Subscriptions

Many of us that work with SQL Server do so exclusively through SQL Server Management Studio (SSMS). I find so many people really do the majority of their jobs with SSMS, Outlook, and a web browser. Even back in 2003, when I was a full time DBA, I probably spent the majority of my time in those three applications.

However, I also see more and more people using Visual Studio and other tools to accomplish their jobs. The growth of new tools like PowerShell, the expansion of our work into BI areas, and more mean that more and more people are using tools besides SSMS to work with SQL Server data.

This past week there was an announcement that MSDN subscriptions were changing. At most of my jobs, I’ve had an MSDN subscription available to me. In fact, some of you might remember the large binders of CDs (and later DVDs) that arrived on a regular basis and contained copies of all Microsoft software. However, many of you out there haven’t had MSDN available to you, or you’ve struggled to justify the yearly $1,000+ cost, even though you want to work on your career and practice with Microsoft software.

At first I saw the yearly cost of MSDN at $799, which is a pretty large investment. However, as I looked to the side, I saw a monthly subscription, with no large commitment, available for $45. That’s not an extremely low cost for much of the world, but it’s very reasonable in the US. It’s also a great way to build a setup that allows you to work with a variety of Microsoft technologies at an affordable cost. What’s more, you can stop paying at any time, or start again at any time.

I know that it can be a struggle to invest in your own career; it’s probably more difficult to find the time than the money. However, this is a good way to get access to the various development and server tools for a period of time if you want to tackle a project or force yourself to learn a new skill.

I’m glad that Microsoft has moved to a subscription model for MSDN. I expect these subscriptions to grow as small companies use an investment that scales linearly with new hires to provide their employees with tools. I can only hope that many other vendors adopt this same model and allow us to rent our tools, and upgrade, for a very reasonable cost. I just hope they all let us back up and save our settings in case we interrupt our subscription for a period of time.
Steve Jones

Technical Debt

I was speaking with one of the development teams at Redgate earlier this year. They were working on a product, and had planned out a few sprints worth of work. Each sprint, called a train, was a couple weeks long, with specific goals and ideas to be implemented. That was all good, but I noticed that there was a sprint in the middle of the list that was devoted to technical debt.

Technical debt is a strange term. It’s one that many managers don’t understand well, often because the code may work fine. I ran across an interesting piece that looks at what the concept means, with what I think is a good explanation. We incur technical debt when we sacrifice maintainability to meet another requirement. The piece also looks at how the debt accumulates and why it becomes problematic later. Certainly the more debt that piles up, the more difficult it can be to change code. Since we are almost always going to go back and maintain code, this becomes a problem.

I think the ideas given to keep technical debt under control are good ones. We should make an effort to clean up code as we can, though not make it such a priority that we end up causing more work with constant refactoring. We do need to get work done. However, the suggestions given require a good amount of discipline and buy-in from management, and I’m glad Redgate tries to keep debt under control. I think our developers like the debt trains as well.

I thought the idea was pretty cool until I was looking for a feature to be completed and the technical debt train was running that week. I thought about complaining, but I decided to have patience and wait a week. After all, if the debt isn’t kept under control, I might be waiting much longer than a week for some fix or feature next year.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4 MB) podcast or subscribe to the feed at iTunes and LibSyn.

Python and Tweepy

One of the projects that’s been on my list lately is to programmatically access Twitter for a few ideas I want to play with. Since I’ve been trying to learn some Python, I thought I would take a look using Python to update status and read status.

A quick Google search showed me lots of Python clients, but Tweepy caught my eye for some reason. I’m not sure why, but I ended up popping a command line open and downloading the library.

From there, I saw a short tutorial over at Python Central. I started by creating an app at Twitter for myself, which was very simple. Once that was done, I had a set of consumer tokens (key and secret), that I could use. Another click got me to the access key and secret. Note, the easy way to do this is over at

My first attempt was using this sample code.

import tweepy

consumer_key = "ABCDE"
consumer_secret = "12345"
access_token = 'asdfasdf'
access_token_secret = '98765'

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

api = tweepy.API(auth)

public_tweets = api.user_timeline()
for tweet in public_tweets:
    print(tweet.text)

This gives me a list of my tweets. At least, the last 20.

[Screenshot: the script’s output listing my recent tweets]

That’s progress.

I then went to make an update by using this:

api.update_status('Hello, Pyton')

However, that returned an error:

[Screenshot: the error returned by update_status]

Hmmm. I double-checked the API, and this should work, but I guessed there was another issue. I searched and found there’s a bug. However, I should be using named parameters anyway, so no big deal. Change the code.

api.update_status(status='Hello, Pyton')

Now it works.

[Screenshot: the tweet on my Twitter timeline]

This is the first step for me to look at building an app that might send some tweets on my behalf, perhaps with the data stored somewhere, like, I don’t know, maybe a database?

Rogue Software Changes

Could a group of software developers make changes that fundamentally alter the way a software system should work without management being aware?

That’s the question being asked of VW right now. Most people are skeptical, but I ran across a piece that lends credence to the idea that a few software engineers acted with few people being aware. They did this not because they wanted to defraud everyone, but because they wanted to solve a problem that they couldn’t solve in other ways. They also didn’t see the alteration of test results as much of an issue because they thought the tests were too stringent.

I’m not sure I believe that’s what happened. Certainly there is some disagreement from various parties, but in my experience with software projects, management always wants to know how things are proceeding, with more and more questions whenever the applications don’t work as expected. When problems are solved, natural human curiosity leads more managers to ask for details, even when they don’t understand them. In this case, I can’t imagine that much of VW management wasn’t aware that software was being used to pass tests. Many people report to many others, and everyone would have wanted to know how VW solved this engineering problem.

The stakes for organizations will continue to rise in a global economy, and software will play increasing roles in many companies. Will we see more and more pressure to manipulate our world with software, even in criminal ways? I suspect so, and I sympathize with those that might face the loss of employment for not complying with the requirements they’re given.

Ultimately I think transparency of software is the best way to bring about better software that complies with regulations and rules. Transparency also ensures that copyrights aren’t violated (since violators’ code is available), and we can determine if security is being built into systems. Perhaps best of all, developers can all learn from each other, seeing exactly what works and doesn’t in each system.

I doubt we’ll get there, but transparency would be a nice place to be.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.7 MB) podcast or subscribe to the feed at iTunes and LibSyn.

Do You Have Scary Code?

I once worked in a company that had a VB6 application (this was a long time ago), which had been mainly written by three developers working at the company. Two of them had left, but we still had one of the original developers and five or six others who had worked on the application for a year or more.

One day we were discussing changing a section of the application to add functionality. I was surprised to find that none of the developers wanted to work on the code. They were all “afraid” to make changes. Having been a developer and spent time digging through other people’s code, I was surprised. Certainly some tasks are difficult, but being afraid to change code?

I wish I’d been more knowledgeable then. Today I’d tell the developers the first thing they need to do is write tests. They need unit tests, or integration tests, but they need some way to determine if they are breaking functionality.

And if they do break something, that’s fine. Go fix the breakage. Refactor other code, write more tests if they are needed, and go for it. You learn by breaking things. Your tests protect you and let you refactor code. As much as I realize we don’t want to spend unnecessary time writing tests, we need something to examine our code as we write. We might as well use a testing framework to help. That way we’re not afraid to change the existing application.
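In Python terms, the idea looks something like this. The discount function here is a made-up stand-in for whatever scary legacy code you face; the tests simply record its current behavior before anyone refactors:

```python
import unittest

def apply_discount(price, customer_type):
    # A stand-in for legacy logic nobody wants to touch.
    if customer_type == 'wholesale':
        return round(price * 0.80, 2)
    return price

class TestApplyDiscount(unittest.TestCase):
    """Characterization tests: pin down what the code does today,
    so a refactor that changes behavior fails loudly."""

    def test_wholesale_gets_twenty_percent_off(self):
        self.assertEqual(apply_discount(100.00, 'wholesale'), 80.00)

    def test_retail_pays_full_price(self):
        self.assertEqual(apply_discount(100.00, 'retail'), 100.00)
```

With those tests in place (run via `python -m unittest`), you can restructure the function freely; a red test tells you exactly which behavior you changed.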

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.2 MB) podcast or subscribe to the feed at iTunes and LibSyn.

What is the DLM Maturity Model?

I’ve written a few posts on the Redgate Software blog to try and show how I see the DLM model, and how we see things at Redgate. We have a lot of developers that work in a similar way when building application software in C#, Java, Python, or other languages, and much of the company is trying to bring more engineering to database development.

Part of what the DLM maturity model aims to do is help us classify how we progress to a more engineered, repeatable, and reliable way of managing database development. You can read my overview, and then dive into each of the various levels we’ve built. The levels are:

Some of this is based on the CMMI model from the SEI, and some is based on what application developers are doing with their own continuous delivery model. Simple Talk has published a What is DLM? article as well, which includes a different view of a maturity model.

I think this process becomes more important over time as we depend more on software and the databases behind it, with less and less tolerance for downtime or human mistakes in the deployment process.

I’d like to get feedback from people on what they think of this model, and of the idea of engineering better database development. I know many people have built their own process, but far too many of the processes rely on custom scripts that are built and edited for each deployment, sometimes in the middle of the deployment. I think we could actually make database development better if we applied some better structure to our deployment.

Redgate is working on tools to support this, in a few ways, but this isn’t about Redgate. Rather, it’s about building better software for everyone, whether you use Redgate tools, another vendor’s tools, or build your own. Follow a better engineering process.

Leave Developers Alone

This editorial was originally published on July 25, 2011. It is being republished as Steve is at the PASS Summit.

A computer deals with interrupts all the time. They are the mechanism by which it can simulate multi-tasking among many different programs in a modern operating system. However those interruptions have a price, and too many of them can affect performance. As hardware grows larger, we have other issues from interruptions that we try to mitigate with techniques like soft-NUMA affinity or multiple pipelines built into hardware. All of these are designed to prevent a computer from spending any significant time on non-productive tasks because of interruptions.

In the real world, many of us deal with regular interruptions at work. They might be emails, instant messages, phone calls on a cell phone, or the old-fashioned someone-stopping-by-your-cube-to-chat. All of these things add up to less productivity, especially for developers. One study finds a 10-point IQ drop from regular email and phone interruptions. I don’t know about you, but I’m not sure I can afford a 10-point drop in IQ when I’m working.

Some companies are starting to realize that developers’ brains are a scarce resource, and interrupting them can dramatically impact productivity. I have found some places, like this one, that are setting aside quiet time for developers to work without being bothered. Similar to the technical debt that Steve McConnell has talked about, there seems to be an interruption tax that some development shops are loath to pay.

Even if you don’t gain any productivity, or have fewer bugs, or ship more often, I think that your developers will appreciate it. It could be an easy way to increase happiness, improve retention, and even sell your company as a good place to work. And it’s easy to implement: just leave people alone a few hours a day.

Steve Jones

Masking Data

This editorial was originally published on Mar 7, 2008. It is being re-run as Steve is out of town.

I saw an interesting thread a while back where one of our very talented community members was asking about how to go about altering data in an application for a demo. It’s a valid scenario and one that I’m sure many people have run into at some point in their careers. You want to show data that’s somewhat real, so that it showcases the application and what it can do, but you don’t want to show real names, amounts, or any identifying information.

A bit of a quandary, and it seems that many people solve it in one of three ways. They just use test data, which is a very small set of data and doesn’t show as well. Or they alter everyone’s name to something like “Steve Jones”, all phone numbers to 555-555-5555, etc., which looks funny.

Or you just show the production data and wink and say “I don’t usually do this, but since you’re such a valued client…”

So for the Friday poll: Do You Alter Production Data When It’s Copied?

Meaning, when the data gets moved to a non-production system (demo, test, development, etc.), do you alter and obfuscate it to remove any identifying information? Do you make it “safe” data that can’t be used to somehow compromise your production system?

It’s a good practice, and one that I used to follow at a couple companies. I didn’t have any tools, but I did write scripts, load a few base tables of names, and then run those scripts as part of the restore job. They would randomly reassign new names to people, companies, addresses, etc. We would also redo phone numbers in sequential orders (555-555-0001, 555-555-0002, etc), and even randomly add products to sales or amounts to financial figures. It wasn’t perfect, and if you worked on the production system a lot you could guess which people were which, but it worked well for testing and client demos.
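A modern sketch of those scripts, in Python with an invented row format and fake-name list, might look like this:

```python
import random

FAKE_NAMES = ['Steve Jones', 'Andy Warren', 'Pat Smith', 'Chris Lee']

def mask_rows(rows, seed=None):
    """Return copies of the rows with identifying data obfuscated:
    random fake names, sequential 555 numbers, fuzzed amounts."""
    rng = random.Random(seed)
    masked = []
    for i, row in enumerate(rows, start=1):
        masked.append({
            'name': rng.choice(FAKE_NAMES),
            'phone': '555-555-%04d' % i,  # sequential, obviously fake
            # fuzz financial figures by up to 20% either way
            'amount': round(row['amount'] * rng.uniform(0.8, 1.2), 2),
        })
    return masked
```

Run as part of a restore job, something like this keeps demo and test copies recognizably shaped like production without exposing anyone's real details.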

I actually ran into a product recently (Camouflage) that does this and it’s a great idea. It’s something that quite a few companies should be implementing to ensure that their non-production systems are that much more secure.

Steve Jones

From Great Idea to End Result

This editorial was originally published on May 5, 2011. It is being republished as Steve is at the PASS Summit.

What’s the time for your IT department to get from great idea to a resulting application? This is a very good piece from CIO magazine that finds many IT departments are seen as too slow. However, there are a number of companies that are trying to innovate and find ways to increase the speed at which IT departments can deploy an application and respond to a business need.

One great quote in there is “velocity is more important than perfection”, which is a tenet that I have found to be very true over the years. It’s not that you throw out junk that isn’t well built or tested, but that you don’t try to meet every possible requirement or handle every little issue. The system has to be secure, handle errors, and meet the basic requirements, but it’s more important to get something done and in production than to have it perform and scale perfectly.

Is that heresy to the developers and DBAs out there? Perhaps, but I think this methodology has to go hand in hand with another mantra I heard from Jason Fried: do more of what works and less of what doesn’t. In this case, if a system shows promise and starts to get heavy use, it receives more resources and perhaps gets refactored in real time, even as it gets enhanced with new ideas.

“You want IT to be in constant test-and-learn mode” is another quote showing that IT needs to work closely with the business to try ideas, learn from them, and move forward. The Agile style of development applies, and in some sense I think this is the model for the strategic IT department of the future.

For the data professional this means that you must learn to model quickly, and with an eye towards a flexible design that might need to change regularly. We need to understand the businesses we work in better so that we can anticipate how requirements might change.

Management has to buy into the idea that applications will not be perfect, they won’t be polished, and most importantly, they are essentially prototypes that either need additional resources spent on enhancements or should be abandoned quickly. However, I think this is a great way to develop internal applications that can provide a nice ROI, and a more enjoyable way for developers to work.

Steve Jones