
Choosing Your Tasks

This editorial was originally published on May 13, 2011. It is being re-run as Steve is out of town.

I have been working for SQLServerCentral for nearly a decade. I started in 2002 and in that time I have had to define my own job most of the time. Early on Andy and Brian had a list of things they thought I should be doing, and there were certain things to get done each week, but it was a general list.

This Friday I thought this would make a good poll topic. Answer this question:

What percentage of your time is self-directed?

By self-directed I mean the tasks that you choose to do because you think they need to be done for some reason, as opposed to the specific tasks that someone assigns you and gives you a deadline for finishing. If someone asks you to “tune the server”, I don’t consider that a specific task; you would have to pick items to work on and determine how much time to spend on them, so that’s a self-directed task.

Do you have a good amount of self-directed time? When I used to manage a series of production servers, I usually had at least half my time as self-directed time. I could look at poor running queries, contact groups with proactive ideas for improving performance or preventing problems. I even had time to schedule DR testing. That took a lot of investment over time. I had to understand each system, set up monitoring and standards, build in data capture and analysis routines, and of course, plenty of alerting mechanisms.

I think that a great production DBA will have a lot of self-directed time after 6-12 months on the job. A development DBA, however, will likely be constantly responding to code requests and enhancements, which is one reason I prefer the former job.

Steve Jones

A Long Trip Ahead

This is my last day at home for a long time. At least long by my standards. I head to the airport tomorrow for a ten day trip, not returning to CO until Saturday, Oct 17. I rarely travel more than 4 or 5 days at the most, so this is one of my longer ones.

My first stop is Orlando. I’m heading over to help teach a DLM workshop for Redgate Software on Friday. This is our Database Source Control workshop that covers some in-depth work with SQL Source Control and version control systems. I’ve done a few of these, so this should be easy for me.

Saturday is SQL Saturday #442 in Orlando. I haven’t been to a SQL Saturday in Orlando in a long time, so I’m excited to get back to the place where these all started. I’ve got one talk on Saturday, on encryption, and around it I’ll be hanging out with friends and trying to learn a few SQL things along the way.

Sunday I travel, though at a relaxed pace. I’ll spend the day making my way to Houston before an overnight flight to London on the Dreamliner. It’s a leisurely day, where I’ll probably spend time catching up on Python work because Monday is crazy.

Monday is a day I dread a bit. I land in London and immediately drive to Cambridge for a few meetings. I’ve got some SQL in the City rehearsals planned before I turn around and head back to London to catch the fun bus to Bristol for SQL Relay. If you map this out, it seems silly, but that’s what I got myself talked into somehow.

Tuesday is SQL Relay in Bristol. I’ll be previewing my talk for SQL in the City, so I’ll apologize in advance if things aren’t 100% set. However after a day at the conference, I’ll be heading over to Cardiff where I’ll get dinner and try to fix all the things I did wrong during the talk.

Wednesday is SQL Relay Cardiff. A repeat of Tuesday in a new city. I’m not sure if everything is the same, but I’ll be (hopefully) delivering a better talk on Wednesday. Wednesday night Grant and I aren’t doing anything, so it’s a few hours to unwind.

Thursday morning we make our way back to London. Hopefully we manage the train system fine because we have lunchtime and afternoon meetings with people coming down from Redgate during the day. This is the final SQL in the City prep time, as well as a few other in person events, including seeing my boss for only the 3rd time this year.

Friday is SQL in the City 2015 London. Redgate puts on a great event, and I’m looking forward to another exciting day. Three times on stage for me, so I’m sure when things wrap up around 5 I’ll be quite tired. However, there’s no rest; I head to Heathrow for a night in my 5th hotel on this trip.

10 days. Orlando, Cambridge, Bristol, Cardiff, London.

I have the feeling I won’t be doing much on Saturday night or Sunday when I return.

Aging Software

I wrote The Age of Software awhile back and noted that supporting previous versions of software isn’t necessarily a good use of resources for development teams. I especially think this is true of SQL Server. But does that mean we should abandon aging software platforms?

It’s a tough question. I’ve certainly talked about the case for upgrading, and the reasons why you might not. For any particular instance, however, I think that each of you has to make the case about whether the software works, or it doesn’t.

If it works, then it seems many of us will live with the old software and keep it running. As late as a few years ago I knew a company running SQL 6.5 with a piece of software built in 1996 and last patched in 2001. However this software ran a building key card system, and there wasn’t a good case to be made for upgrading.

For a software developer, however, when you look at aging pieces of software, even those that customers may pay for support on, is it worth maintaining skills and support? If you don’t have staff turnover, then perhaps. If you do, I do think that it might be time to let the product die.

I’m torn on the way we deal with software in our world. On one hand, I’d like to see customers given source code for end of life platforms in order to support themselves if they wish. On the other, I understand the IP concerns, and business case to let software die.

Ultimately I’m mostly OK with the current way most vendors support software. If it works for a decade and support ends, I can continue to use it. Until it doesn’t work, and then I am glad that most vendors have an upgrade for me.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.5MB) podcast or subscribe to the feed at iTunes and LibSyn.

Testing Sum By Month

I’ve been on a testing kick, trying to formalize the ad hoc queries I’ve run into something that’s easier to track. As a result, when I look to solve a problem, I’ve written a test to verify that what I think will happen, actually happens.

The Problem

I saw a post recently where someone wasn’t sure how to get the sum of a series of data items by month, so I decided to help them. They asked for a year number, a month number, and a total, so something like this:

Year   Month   Sales

2012       1   1500.23

2012       2   1480.00

2012       3   1945.00

2015       7   8933.11

They mentioned, however, that they had the sales data stored as an integer. Not as 201201, but as 1, 2, 3, with a base date of Jan 1, 2012. That’s strange, but it’s a good place to write a test.

I like to start with the results, since if I don’t know the results, how can I tell if my query works? Let’s get a test going. I’ll start by creating my expected results. I’ve come to like using temporary tables, and limited data. I also like to test some boundaries, so let’s cross a year.

CREATE PROCEDURE [tArticles].[test sum of sales by month for multiple months]
AS
-- Assemble
CREATE TABLE #Expected (
  yearnum INT
, monthnum TINYINT
, salestotal NUMERIC(10,2)
);

SELECT e.*
  INTO #actual
  FROM #Expected AS e;

INSERT #Expected
( yearnum
, monthnum
, salestotal
)
VALUES
  ( 2012, 11, 2500.23 )
, ( 2012, 12, 2200.15 )
, ( 2013, 1, 2656.75 );

I like to create the actual results table here as well, which allows me to then easily insert into this table from a procedure as well as a query. In this case, I’ll use a query, but I could use insert..exec.

Once I have results, I need to set up my test data. In this case, I’d probably go grab the rows from a specific period and put them in a temp table and use Data Compare to get them. Or make them up. It doesn’t matter. I just need the data that allows me to test my query.

EXEC tsqlt.FakeTable @TableName = N'MonthlySales';

INSERT MonthlySales
( datenum, salesamount )
VALUES
  ( 11, 1000.00)
, ( 11, 1500.23)
, ( 12, 2200.15)
, ( 13, 1000.00)
, ( 13, 1656.00)
, ( 13, 0000.75);

I don’t try to make this hard. I use easy math, giving myself a few cases. One, two, three rows of data for the months. If I think this isn’t representative, I can add a few more. I don’t try to be difficult, I’m testing a query. If I had rows that might not matter, or I wanted to test if 0 rows are ignored, I could do that.
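As a cross-check on that easy math, the expected totals are just the fake rows summed by month. A quick sketch outside the database (Python, standard library only, purely illustrative) confirms the three rows in #Expected:

```python
from collections import defaultdict
from decimal import Decimal

# The fake rows that go into MonthlySales: (datenum, salesamount)
rows = [(11, Decimal("1000.00")), (11, Decimal("1500.23")),
        (12, Decimal("2200.15")), (13, Decimal("1000.00")),
        (13, Decimal("1656.00")), (13, Decimal("0.75"))]

# Mirror SUM(salesamount) ... GROUP BY datenum
totals = defaultdict(Decimal)
for datenum, amount in rows:
    totals[datenum] += amount

# These match the three rows in #Expected
assert totals[11] == Decimal("2500.23")
assert totals[12] == Decimal("2200.15")
assert totals[13] == Decimal("2656.75")
```

Using Decimal mirrors the NUMERIC(10,2) column, so the comparison is exact.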

Now I need a query. Something simple, a SUM() with a GROUP BY, is needed. However, I also need to change 11 into 2012 11, so that’s an algorithm.

An easy way to do this is start with a base date. I’d prefer this is in a table, but I can do it inline.

-- Act
INSERT #actual
SELECT
  yearnum = DATEPART( YEAR, DATEADD( MONTH, datenum, '20120101'))
, monthnum = DATEPART( MONTH, DATEADD( MONTH, datenum, '20120101'))
, salestotal = SUM(ms.salesamount)
FROM dbo.MonthlySales AS ms
GROUP BY DATEPART( YEAR, DATEADD( MONTH, datenum, '20120101'))
, DATEPART( MONTH, DATEADD( MONTH, datenum, '20120101'));

I’ll insert this data into #actual, which tests my query.

The final step is to assert my tables are equal.

-- Assert
EXEC tsqlt.AssertEqualsTable
@Expected = N'#Expected',
@Actual = N'#actual',
@FailMsg = N'The calculations are incorrect';

The Test

What happens when I execute this test? I can run it with tSQLt directly, or use my SQL Test plugin.

[Screenshot: executing the test]

In either case, I’ll get a failure.

[Screenshot: the test failing]

When I check the messages, I see the output from tSQLt. In this case, none of my totals seem to match.

[Screenshot: tSQLt output showing mismatched totals]

What’s wrong? In my case, I’m adding the integer to the base month, but that means a 1 means 2012 02, not 2012 01. I’m a month off. Let’s adjust the query.
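The off-by-one is easy to sanity check outside the database. A quick Python sketch (just illustrating the month arithmetic, nothing more) shows where each base date lands:

```python
def month_for(datenum, base_year, base_month):
    """Map an integer month offset to a calendar (year, month),
    the same arithmetic as DATEADD(MONTH, datenum, base_date)."""
    total = base_year * 12 + (base_month - 1) + datenum
    return total // 12, total % 12 + 1

# With a base of Jan 2012 (the first attempt), datenum 1 lands on Feb 2012
assert month_for(1, 2012, 1) == (2012, 2)

# With a base of Dec 2011, datenum 1 lands on Jan 2012, as intended
assert month_for(1, 2011, 12) == (2012, 1)
assert month_for(11, 2011, 12) == (2012, 11)
assert month_for(13, 2011, 12) == (2013, 1)
```

Anchoring the base one month earlier is exactly the fix the query needs.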

-- Act
INSERT #actual
SELECT
  yearnum = DATEPART( YEAR, DATEADD( MONTH, datenum, '20111201'))
, monthnum = DATEPART( MONTH, DATEADD( MONTH, datenum, '20111201'))
, salestotal = SUM(ms.salesamount)
FROM dbo.MonthlySales AS ms
GROUP BY DATEPART( YEAR, DATEADD( MONTH, datenum, '20111201'))
, DATEPART( MONTH, DATEADD( MONTH, datenum, '20111201'));

Now when I run my test, it passes.

[Screenshot: the test passing]

Why Bother?

This seems trivial, right? What’s the point of this test? After all, I can easily check this with a couple quick queries.

Well, let’s imagine that we decide to move this base date into a table, or that we alter it. We want our queries to continue to work. I can make this test part of an automated routine that runs each time the CI process runs, or each time a developer executes tSQLt.RunAll in this database (shared or populated from a VCS). It keeps refactoring from silently breaking the query.

More importantly, I can take results and alter them first, say if someone decides to change this to a windowing query. I could plug a new query into the test (or better yet, use a proc and put that call in the test), and if I change code, I can verify it still works.

Write tests. You need them anyway, so why not formalize them? The code around this query, mocking test data, is something I do anyway, so this gets me a few more minutes to verify that the code works. I can tune the query, alter indexes, perf test, and be sure that code is still running cleanly.


Querying Yesterday

One of the new features coming in SQL Server 2016 is the temporal table. It’s an interesting concept, one that many businesses have wanted for years. If you’re not sure what this is, we’ve got a collection of resources on Learning about Temporal Tables (and other features). Check them out and we’ll keep adding content as we find it.

Temporal tables give us some amazing capabilities, but at a cost. As with anything in computers, there is a cost for capability. In this case, we can look back at the view of our data as of a particular point in time. In many ways, this means that we don’t need to bolt on, or query into, auditing data.
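Conceptually, a system-versioned table stamps each row version with a validity interval, and a point-in-time query filters on it. A small Python sketch of the idea (the engine does this for you with FOR SYSTEM_TIME AS OF; the table and values here are made up):

```python
from datetime import datetime

# Made-up rows: two versions of one item, each stamped with a
# validity interval [valid_from, valid_to)
history = [
    {"id": 1, "price": 10.00,
     "valid_from": datetime(2015, 1, 1), "valid_to": datetime(2015, 6, 1)},
    {"id": 1, "price": 12.50,
     "valid_from": datetime(2015, 6, 1), "valid_to": datetime(9999, 12, 31)},
]

def as_of(rows, point_in_time):
    """Row versions visible at an instant, mimicking
    SELECT ... FOR SYSTEM_TIME AS OF point_in_time."""
    return [r for r in rows
            if r["valid_from"] <= point_in_time < r["valid_to"]]

# Before the June change we see the old price; after it, the new one
assert as_of(history, datetime(2015, 3, 1))[0]["price"] == 10.00
assert as_of(history, datetime(2015, 7, 1))[0]["price"] == 12.50
```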

However there are other costs. As with any auditing system, we potentially have substantial data that we need to manage somehow. Certainly we need to choose which tables to track. Even if we don’t have to build a process, we will have to deal with the cost of storage and provisioning, as well as determining the retention periods. We also need to really depend on the system times for our various instances to be in sync.

I think that this is a needed, and very useful feature. I’m sure there will be bugs to patch, as well as enhancements to be built. We’ll find those over time, but I think that this is one of those features that we’ll come to see as essential in a decade and wonder how we ever built systems without it.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.1MB) podcast or subscribe to the feed at iTunes and LibSyn.

What’s a Code Smell?

We all have a variety of code patterns and practices that we follow. Most of them were probably picked up along the path of our career. A suggestion from a colleague. A piece of sample code that solved a problem. A performance tuning trick that stopped our phone from ringing. These methods of learning are the way that most of us actually grow our skills over time.

However, just because we learned something, or because a technique solved a problem, doesn’t mean it was a good piece of code. In fact, often the code we may think works well might not be the most efficient way to structure it. Many developers have learned this over the years, as they read about new techniques that are more efficient, elegant, or just simpler.

Kent Beck and Massimo Arnoldi coined the term code smell years ago, as a way of noting the development patterns and practices that lead to poorly written, or difficult to maintain code. There have been other attempts to document practices which are not recommended, though the success is probably limited as many developers continue to build on poorly written code rather than refactoring and cleaning their codebase over time.

Simple Talk and Phil Factor published a SQL Code Smells ebook a while back, trying to document the signs of poorly written T-SQL. The book is good, with guidance about particular patterns that can cause you problems over time. The items aren’t meant to be rules, but rather guidelines that you adhere to unless you have a good, specific reason that you can justify to others.

I ran into a code smell recently where a developer noted that their application depended on a specific database name in order to work. That’s not in the ebook, but I think it’s easily one I’d avoid. My connection should determine the database, not the application itself. I know there may be exceptions here, but in general, application code shouldn’t be dependent on a particular name.
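The alternative is to treat the database name as deployment configuration that the connection supplies. A hypothetical sketch (the function and environment variable names are invented for illustration):

```python
import os

def build_connection_string(server: str, database: str) -> str:
    """The application receives the database name as input;
    it never hard-codes one."""
    return f"Server={server};Database={database};Trusted_Connection=yes;"

# The deployment supplies the name, so dev, test, and prod can each
# point at a different database without touching application code
conn = build_connection_string(
    os.environ.get("APP_DB_SERVER", "localhost"),
    os.environ.get("APP_DB_NAME", "SalesDB_Dev"),
)
print(conn)
```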

I’d urge you to pick up the ebook (it’s free) and keep it handy. See if any of the items listed are habits you might have picked up over time and not realized that they are, in fact, poor practices. I would also recommend you peruse Aaron Bertrand’s Bad Habits to Kick series, as a way of improving your own code.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.9MB) podcast or subscribe to the feed at iTunes and LibSyn.

SQL Source Control and Git–Getting Started

It seems as though Git is taking the world by storm as the Version Control System (VCS) of choice. TFS is widely used in the MS world, but Git is growing while Subversion, like most of the other platforms, is shrinking.

As a result, I wanted to do a quick setup using SQL Source Control (SOC) and Git, showing you how this works. SOC supports Git in a few ways, but this is the primary way I’d see most people getting started.


Here’s the scenario that I’ll use. I’ve got a database, WindowDemo, that has a few tables, some data, and a few procs. As you can see below this isn’t linked to a VCS.

[Screenshot: the WindowDemo database, not linked to source control]

I want to store my DDL code in c:\git\WindowDemo\trunk. I’ve got that folder created, but it’s empty. I’ll keep related database stuff (docs, scripts, etc) in c:\git\WindowDemo if I need it.

[Screenshot: the empty c:\git\WindowDemo\trunk folder]

Git Setup

The first thing you need to do is get your Git repository set up. There are many ways to do this, but I’ll use the command line because I like doing that. The commands in the various client GUIs will be very similar.

I’m going to set up the git repository at c:\git\WindowDemo to keep all my database stuff in one place. To set up the repository, I run git init in the command prompt. This initializes my repository.

[Screenshot: running git init at the command prompt]

Now that I have a git VCS, I need to get code in there.

SQL Source Control Setup

Now I move to SSMS to link my database to the repository. In SSMS, I right click my database and select “link database to source control”.

[Screenshot: the Link database to source control menu item]

This will open the SOC plugin on the setup tab. I’ve filled in the path to the place in the repository I want the code to go. This is the trunk folder. I’ve also selected Git, using the “Custom” selection on the left and Git in the dropdown.

[Screenshot: the Link to source control setup tab]

Once I click the link button, I’ll get a dialog showing progress and then return to the setup tab.

[Screenshot: the setup tab after linking]

Notice the balloon near the top. This lets me know the link is active and I have changes in my database that aren’t in the VCS. There’s a pointer to the “Commit changes” tab, so I’ll click that.

[Screenshot: the Commit changes tab]

In the image above, I see I have a number of “new” objects from the perspective of the VCS. I can see the name, and the type of object in the middle. At the bottom, I see the version in my database (highlighted code) on the left and the version in my VCS (blank) on the right.

This is where I commit my changes. I enter a comment at the top and click the “commit” button on the right (not shown). When I do that, I’ll get a clean “commit tab” that shows that my VCS is in sync with my database DDL.

[Screenshot: a clean commit tab in SSMS]

Inside Git

What’s happened in my VCS? Let’s look in the file system. Here I see my trunk folder.

[Screenshot: the trunk folder contents]

SOC has created a structure for my DDL code and included some metadata. If I look in one of these folders, such as Stored Procedures, I see:

[Screenshot: the Stored Procedures folder]

This is the .SQL code that matches what’s compiled in my database. SOC stores the current CREATE statement for all my objects so that they can easily be examined.

Inside Git, I see a clean status with all my files as committed objects.

[Screenshot: git status showing a clean working tree]

This is what I want. Now I can continue on with database development, tracking all my changes. I’ll look at the flow and tracking changes in another post.

The Work of the Ancients

I was reading a post from someone recently where they noted that they didn’t worry too much about the architecture of the system since it wouldn’t likely last very long. The poster had a comment that many systems are replaced inside of a few years.

In my experience, that’s not usually the case. In fact, while I don’t expect many applications I’ve worked on to last for 25 years, I suspect many of them will exist for ten years or more, especially if they are receiving regular development resources. With that in mind, I wanted to see how your databases are faring these days. I suspect a database might last longer than a particular application, as it seems most organizations are loathe to ever let data go.

What’s the age of your oldest, regularly used database?

I’m looking for an age in years. If the answer is less than one, I’m not sure I’d call that old at all. I am sure many of your systems are older, and might have changed, but let us know the year when the system went into production.

I can tell you the SQLServerCentral systems are old in some ways, not so old in others. We’ve grown from one database to three over the years. The oldest database is circa 2003. Some of the data from that one was migrated to other databases around 2007. We’ve got data in the system since 2001, but we’ve certainly changed structures and storage over the years.

I’d guess that most of you that are working in companies that are older than ten years will have a database that’s at least that old. However let us know this week, and if you have any interesting notes, feel free to share them.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.4MB) podcast or subscribe to the feed at iTunes and LibSyn.

Standing Desk Update

It’s been a long time since I wrote about my workspace, however I made a change recently. This was the "short term test" situation I set up a couple years ago:

[Photo: the original boxes-and-books standing setup]

After just using boxes for a few days, I decided I liked things and got monitor stands. However I left my boxes in place for the keyboard and mouse. Mostly because I wanted a few months of playing with heights.

I actually did experiment, adding and removing books a few times, but mostly I let things languish. I wanted to build my own keyboard stand, but kept finding excuses. I lived with the setup, getting annoyed that there was no good place to put a mug of coffee or a piece of paper, and at times precariously balancing my laptop on the books.

A few weeks ago I saw some friends on Facebook mention they were getting standing desks. I’d seen Brent Ozar’s desk, but had no desire to spend that kind of $$. I wasn’t even sure I wanted an adjustable one as when I sit, I usually go to a table or couch with my laptop.

Someone recommended a simple desk, which I liked, but wasn’t sure I wanted to ask my boss for $700 for a hand crank desk. However I did spend a few minutes shopping when my daughter asked for a desk for her room. She got a normal desk from Ikea as she’s starting high school and wanted a workspace.

I saw a $400 electric adjustable one, and was tempted, but decided to go for a $40 upgrade for mine. I got a small table, and added it.

[Photo: the new setup with a small table added]

This one isn’t perfect as it’s a touch high. I did add a couple more foam mats and raised myself to a good level, but I’m not sure I love this.

I do, however, like the extra shelf and space for putting a couple mugs down. I usually have two (coffee and water), so this is handy. I also have space for a few pieces of paper if I need to set them down.

I’m not sure if this is a good move. For now I want to leave things alone as I’m not in a hurry and want to be sure I would use an up/down desk. I certainly have some tasks that work better sitting down, like webinars, so I am tempted to get a chair and work here at times.

The one good thing I have going for me is Redgate is good about ergonomics and ensuring a good workspace. I am tempted by the treadmill desks, but I think I’d just as soon walk away from the desk if I need a break.

Have a Think

I’m used to working at a chaotic pace. I have lots of diverse projects and deliverables that constantly force me to change my focus. I have short term and long term projects. I work with diverse technologies. My workload is a mix of development and administrative tasks. It’s a hectic schedule that gets worse when I travel since SQLServerCentral must continue to run.

Years ago I was struggling with daily deliverables, the bimonthly SQL Server Standard magazine, and a couple book projects a year. It was maddening, especially when my business partners also wanted me to experiment with ways to grow our business. I wondered how I would ever find time to dream up new ideas or implement a proof of concept.

Andy Warren helped me slow down by scheduling “thinking time.” He told me it was important, and he had to make some at his job. Carve out an hour once a week or so and just spend time thinking about a project. Don’t do; just think.

I have kept that advice in mind over the years, and I try to take some time to just think about the tasks I need to accomplish. I don’t worry about actually getting anything done, but instead try to just think about the best way to move forward. I don’t always succeed, but I continue to try to make time and just think.

Whether you run a business or just write code for one, I think the art of stopping and thinking is a bit lost. So few of us actually stop to spend some time planning. Instead we’re very quick to start writing code or trying out a configuration change. While I am a proponent of experimenting, I still think it’s worth taking a few minutes and just consider different ways to experiment. Often we can choose better experiments if we have a plan.

Try it this week. Take ten or fifteen minutes and think about the next task rather than diving right in. You might find it to be a more valuable tool than just experimenting.

Steve Jones

The Voice of the DBA Podcast

Listen to the MP3 Audio (2.7MB) podcast or subscribe to the feed at iTunes and LibSyn.

